Kubernetes Pod cannot reach its own Service.


Overview

In a single-node cluster, a Pod cannot reach its own Service (the connection times out).

  • OS: Debian 8
  • Cloud platform: DigitalOcean or AWS (reproducible on both)
  • Kubernetes version: 1.5.4
  • kube-proxy in iptables mode
  • Kubernetes installed manually
  • No overlay network such as Weave or Flannel

I have changed the Service to headless as a temporary workaround, but I would like to find the real cause.
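For reference, the headless workaround amounts to setting clusterIP to "None" on the same Service; DNS then returns the pod IP directly and kube-proxy never DNATs the traffic, which sidesteps the failing path. A sketch (fields copied from the Service spec shown in this question):

```json
{
    "apiVersion": "v1",
    "kind": "Service",
    "metadata": {
        "name": "anon-svc",
        "namespace": "anon"
    },
    "spec": {
        "clusterIP": "None",
        "selector": {
            "name": "anon-svc"
        },
        "ports": [
            {
                "name": "agent",
                "port": 8125,
                "protocol": "TCP",
                "targetPort": "agent"
            }
        ]
    }
}
```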

It works fine on a GCP Compute Engine node (!?). It might also work with --proxy-mode=userspace, as suggested here.

More details

Service

{
    "apiVersion": "v1",
    "kind": "Service",
    "metadata": {
        "creationTimestamp": "2017-04-13T05:29:18Z",
        "labels": {
            "name": "anon-svc"
        },
        "name": "anon-svc",
        "namespace": "anon",
        "resourceVersion": "280",
        "selfLink": "/api/v1/namespaces/anon/services/anon-svc",
        "uid": "23d178dd-200a-11e7-ba08-42010a8e000a"
    },
    "spec": {
        "clusterIP": "172.23.6.158",
        "ports": [
            {
                "name": "agent",
                "port": 8125,
                "protocol": "TCP",
                "targetPort": "agent"
            }
        ],
        "selector": {
            "name": "anon-svc"
        },
        "sessionAffinity": "None",
        "type": "ClusterIP"
    },
    "status": {
        "loadBalancer": {}
    }
}

Kube-proxy service (systemd)

[Unit]
After=kube-apiserver.service
Requires=kube-apiserver.service
[Service]
ExecStart=/opt/kubernetes/bin/hyperkube proxy \
    --master=127.0.0.1:8080 \
    --proxy-mode=iptables \
    --logtostderr=true
Restart=always
RestartSec=10
[Install]
WantedBy=multi-user.target

Node output from GCP (where it works) and DO (DigitalOcean, where it does not):

$ iptables-save

GCP:

# Generated by iptables-save v1.4.21 on Thu Apr 13 05:30:33 2017
*nat
:PREROUTING ACCEPT [4:364]
:INPUT ACCEPT [1:60]
:OUTPUT ACCEPT [7:420]
:POSTROUTING ACCEPT [19:1460]
:DOCKER - [0:0]
:KUBE-MARK-DROP - [0:0]
:KUBE-MARK-MASQ - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-POSTROUTING - [0:0]
:KUBE-SEP-2UBKOACGE36HHR6Q - [0:0]
:KUBE-SEP-5LOF5ZUWMDRFZ2LI - [0:0]
:KUBE-SEP-5T3UFOYBS7JA45MK - [0:0]
:KUBE-SEP-YBFG2OLQ4DHWIGIM - [0:0]
:KUBE-SEP-ZSS7W6PQOP26CZ6F - [0:0]
:KUBE-SERVICES - [0:0]
:KUBE-SVC-ERIFXISQEP7F7OF4 - [0:0]
:KUBE-SVC-NPX46M4PTMTKRN6Y - [0:0]
:KUBE-SVC-R6UZIZCIT2GFGDFT - [0:0]
:KUBE-SVC-TCOU7JCQXEZGVUNU - [0:0]
:KUBE-SVC-TF3HNH35HFDYKE6V - [0:0]
-A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
-A POSTROUTING -s 172.17.0.2/32 -d 172.17.0.2/32 -p tcp -m tcp --dport 2379 -j MASQUERADE
-A POSTROUTING -s 172.17.0.2/32 -d 172.17.0.2/32 -p tcp -m tcp --dport 2379 -j MASQUERADE
-A POSTROUTING -s 172.17.0.2/32 -d 172.17.0.2/32 -p tcp -m tcp --dport 2379 -j MASQUERADE
-A POSTROUTING -s 172.17.0.2/32 -d 172.17.0.2/32 -p tcp -m tcp --dport 2379 -j MASQUERADE
-A POSTROUTING -s 172.17.0.3/32 -d 172.17.0.3/32 -p tcp -m tcp --dport 443 -j MASQUERADE
-A POSTROUTING -s 172.17.0.3/32 -d 172.17.0.3/32 -p tcp -m tcp --dport 80 -j MASQUERADE
-A DOCKER -i docker0 -j RETURN
-A DOCKER -d 127.0.0.1/32 ! -i docker0 -p tcp -m tcp --dport 4001 -j DNAT --to-destination 172.17.0.2:2379
-A DOCKER -d 172.17.0.1/32 ! -i docker0 -p tcp -m tcp --dport 2379 -j DNAT --to-destination 172.17.0.2:2379
-A DOCKER -d 172.17.0.1/32 ! -i docker0 -p tcp -m tcp --dport 4001 -j DNAT --to-destination 172.17.0.2:2379
-A DOCKER -d 127.0.0.1/32 ! -i docker0 -p tcp -m tcp --dport 2379 -j DNAT --to-destination 172.17.0.2:2379
-A DOCKER ! -i docker0 -p tcp -m tcp --dport 443 -j DNAT --to-destination 172.17.0.3:443
-A DOCKER ! -i docker0 -p tcp -m tcp --dport 80 -j DNAT --to-destination 172.17.0.3:80
-A KUBE-MARK-DROP -j MARK --set-xmark 0x8000/0x8000
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x4000/0x4000 -j MASQUERADE
-A KUBE-SEP-2UBKOACGE36HHR6Q -s 10.142.0.10/32 -m comment --comment "default/kubernetes:https" -j KUBE-MARK-MASQ
-A KUBE-SEP-2UBKOACGE36HHR6Q -p tcp -m comment --comment "default/kubernetes:https" -m recent --set --name KUBE-SEP-2UBKOACGE36HHR6Q --mask 255.255.255.255 --rsource -m tcp -j DNAT --to-destination 10.142.0.10:6443
-A KUBE-SEP-5LOF5ZUWMDRFZ2LI -s 172.17.0.4/32 -m comment --comment "kube-system/kube-dns:dns" -j KUBE-MARK-MASQ
-A KUBE-SEP-5LOF5ZUWMDRFZ2LI -p udp -m comment --comment "kube-system/kube-dns:dns" -m udp -j DNAT --to-destination 172.17.0.4:53
-A KUBE-SEP-5T3UFOYBS7JA45MK -s 172.17.0.4/32 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-MARK-MASQ
-A KUBE-SEP-5T3UFOYBS7JA45MK -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp" -m tcp -j DNAT --to-destination 172.17.0.4:53
-A KUBE-SEP-YBFG2OLQ4DHWIGIM -s 172.17.0.3/32 -m comment --comment "anon/anon-svc:agent" -j KUBE-MARK-MASQ
-A KUBE-SEP-YBFG2OLQ4DHWIGIM -p tcp -m comment --comment "anon/anon-svc:agent" -m tcp -j DNAT --to-destination 172.17.0.3:8125
-A KUBE-SEP-ZSS7W6PQOP26CZ6F -s 172.17.0.1/32 -m comment --comment "anon/etcd:etcd" -j KUBE-MARK-MASQ
-A KUBE-SEP-ZSS7W6PQOP26CZ6F -p tcp -m comment --comment "anon/etcd:etcd" -m tcp -j DNAT --to-destination 172.17.0.1:4001
-A KUBE-SERVICES -d 172.20.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-SVC-ERIFXISQEP7F7OF4
-A KUBE-SERVICES -d 172.23.6.157/32 -p tcp -m comment --comment "anon/etcd:etcd cluster IP" -m tcp --dport 4001 -j KUBE-SVC-R6UZIZCIT2GFGDFT
-A KUBE-SERVICES -d 172.23.6.158/32 -p tcp -m comment --comment "anon/anon-svc:agent cluster IP" -m tcp --dport 8125 -j KUBE-SVC-TF3HNH35HFDYKE6V
-A KUBE-SERVICES -d 172.20.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
-A KUBE-SERVICES -d 172.20.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-SVC-TCOU7JCQXEZGVUNU
-A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
-A KUBE-SVC-ERIFXISQEP7F7OF4 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-SEP-5T3UFOYBS7JA45MK
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https" -m recent --rcheck --seconds 10800 --reap --name KUBE-SEP-2UBKOACGE36HHR6Q --mask 255.255.255.255 --rsource -j KUBE-SEP-2UBKOACGE36HHR6Q
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https" -j KUBE-SEP-2UBKOACGE36HHR6Q
-A KUBE-SVC-R6UZIZCIT2GFGDFT -m comment --comment "anon/etcd:etcd" -j KUBE-SEP-ZSS7W6PQOP26CZ6F
-A KUBE-SVC-TCOU7JCQXEZGVUNU -m comment --comment "kube-system/kube-dns:dns" -j KUBE-SEP-5LOF5ZUWMDRFZ2LI
-A KUBE-SVC-TF3HNH35HFDYKE6V -m comment --comment "anon/anon-svc:agent" -j KUBE-SEP-YBFG2OLQ4DHWIGIM
COMMIT
# Completed on Thu Apr 13 05:30:33 2017
# Generated by iptables-save v1.4.21 on Thu Apr 13 05:30:33 2017
*filter
:INPUT ACCEPT [1250:625646]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [1325:478496]
:DOCKER - [0:0]
:DOCKER-ISOLATION - [0:0]
:KUBE-FIREWALL - [0:0]
:KUBE-SERVICES - [0:0]
-A INPUT -j KUBE-FIREWALL
-A FORWARD -j DOCKER-ISOLATION
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -j KUBE-FIREWALL
-A DOCKER -d 172.17.0.2/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 2379 -j ACCEPT
-A DOCKER -d 172.17.0.2/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 2379 -j ACCEPT
-A DOCKER -d 172.17.0.2/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 2379 -j ACCEPT
-A DOCKER -d 172.17.0.2/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 2379 -j ACCEPT
-A DOCKER -d 172.17.0.3/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 443 -j ACCEPT
-A DOCKER -d 172.17.0.3/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 80 -j ACCEPT
-A DOCKER-ISOLATION -j RETURN
-A KUBE-FIREWALL -m comment --comment "kubernetes firewall for dropping marked packets" -m mark --mark 0x8000/0x8000 -j DROP
COMMIT
# Completed on Thu Apr 13 05:30:33 2017

DO:

# Generated by iptables-save v1.4.21 on Thu Apr 13 05:38:05 2017
*nat
:PREROUTING ACCEPT [1:52]
:INPUT ACCEPT [1:52]
:OUTPUT ACCEPT [13:798]
:POSTROUTING ACCEPT [13:798]
:DOCKER - [0:0]
:KUBE-MARK-DROP - [0:0]
:KUBE-MARK-MASQ - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-POSTROUTING - [0:0]
:KUBE-SEP-3VWUJCZC3MSW5W32 - [0:0]
:KUBE-SEP-CPJSBS35VMSBOKH6 - [0:0]
:KUBE-SEP-K7JQ5XSWBQ7MTKDL - [0:0]
:KUBE-SEP-WOG5WH7F5TFFOT4E - [0:0]
:KUBE-SEP-ZSS7W6PQOP26CZ6F - [0:0]
:KUBE-SERVICES - [0:0]
:KUBE-SVC-ERIFXISQEP7F7OF4 - [0:0]
:KUBE-SVC-NPX46M4PTMTKRN6Y - [0:0]
:KUBE-SVC-R6UZIZCIT2GFGDFT - [0:0]
:KUBE-SVC-TCOU7JCQXEZGVUNU - [0:0]
:KUBE-SVC-TF3HNH35HFDYKE6V - [0:0]
-A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
-A POSTROUTING -s 172.17.0.2/32 -d 172.17.0.2/32 -p tcp -m tcp --dport 2379 -j MASQUERADE
-A POSTROUTING -s 172.17.0.2/32 -d 172.17.0.2/32 -p tcp -m tcp --dport 2379 -j MASQUERADE
-A POSTROUTING -s 172.17.0.2/32 -d 172.17.0.2/32 -p tcp -m tcp --dport 2379 -j MASQUERADE
-A POSTROUTING -s 172.17.0.2/32 -d 172.17.0.2/32 -p tcp -m tcp --dport 2379 -j MASQUERADE
-A POSTROUTING -s 172.17.0.4/32 -d 172.17.0.4/32 -p tcp -m tcp --dport 443 -j MASQUERADE
-A POSTROUTING -s 172.17.0.4/32 -d 172.17.0.4/32 -p tcp -m tcp --dport 80 -j MASQUERADE
-A DOCKER -i docker0 -j RETURN
-A DOCKER -d 172.17.0.1/32 ! -i docker0 -p tcp -m tcp --dport 2379 -j DNAT --to-destination 172.17.0.2:2379
-A DOCKER -d 172.17.0.1/32 ! -i docker0 -p tcp -m tcp --dport 4001 -j DNAT --to-destination 172.17.0.2:2379
-A DOCKER -d 127.0.0.1/32 ! -i docker0 -p tcp -m tcp --dport 2379 -j DNAT --to-destination 172.17.0.2:2379
-A DOCKER -d 127.0.0.1/32 ! -i docker0 -p tcp -m tcp --dport 4001 -j DNAT --to-destination 172.17.0.2:2379
-A DOCKER ! -i docker0 -p tcp -m tcp --dport 443 -j DNAT --to-destination 172.17.0.4:443
-A DOCKER ! -i docker0 -p tcp -m tcp --dport 80 -j DNAT --to-destination 172.17.0.4:80
-A KUBE-MARK-DROP -j MARK --set-xmark 0x8000/0x8000
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x4000/0x4000 -j MASQUERADE
-A KUBE-SEP-3VWUJCZC3MSW5W32 -s 67.205.156.80/32 -m comment --comment "default/kubernetes:https" -j KUBE-MARK-MASQ
-A KUBE-SEP-3VWUJCZC3MSW5W32 -p tcp -m comment --comment "default/kubernetes:https" -m recent --set --name KUBE-SEP-3VWUJCZC3MSW5W32 --mask 255.255.255.255 --rsource -m tcp -j DNAT --to-destination 67.205.156.80:6443
-A KUBE-SEP-CPJSBS35VMSBOKH6 -s 172.17.0.3/32 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-MARK-MASQ
-A KUBE-SEP-CPJSBS35VMSBOKH6 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp" -m tcp -j DNAT --to-destination 172.17.0.3:53
-A KUBE-SEP-K7JQ5XSWBQ7MTKDL -s 172.17.0.3/32 -m comment --comment "kube-system/kube-dns:dns" -j KUBE-MARK-MASQ
-A KUBE-SEP-K7JQ5XSWBQ7MTKDL -p udp -m comment --comment "kube-system/kube-dns:dns" -m udp -j DNAT --to-destination 172.17.0.3:53
-A KUBE-SEP-WOG5WH7F5TFFOT4E -s 172.17.0.4/32 -m comment --comment "anon/anon-svc:agent" -j KUBE-MARK-MASQ
-A KUBE-SEP-WOG5WH7F5TFFOT4E -p tcp -m comment --comment "anon/anon-svc:agent" -m tcp -j DNAT --to-destination 172.17.0.4:8125
-A KUBE-SEP-ZSS7W6PQOP26CZ6F -s 172.17.0.1/32 -m comment --comment "anon/etcd:etcd" -j KUBE-MARK-MASQ
-A KUBE-SEP-ZSS7W6PQOP26CZ6F -p tcp -m comment --comment "anon/etcd:etcd" -m tcp -j DNAT --to-destination 172.17.0.1:4001
-A KUBE-SERVICES -d 172.20.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-SVC-TCOU7JCQXEZGVUNU
-A KUBE-SERVICES -d 172.20.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-SVC-ERIFXISQEP7F7OF4
-A KUBE-SERVICES -d 172.23.6.158/32 -p tcp -m comment --comment "anon/anon-svc:agent cluster IP" -m tcp --dport 8125 -j KUBE-SVC-TF3HNH35HFDYKE6V
-A KUBE-SERVICES -d 172.23.6.157/32 -p tcp -m comment --comment "anon/etcd:etcd cluster IP" -m tcp --dport 4001 -j KUBE-SVC-R6UZIZCIT2GFGDFT
-A KUBE-SERVICES -d 172.20.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
-A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
-A KUBE-SVC-ERIFXISQEP7F7OF4 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-SEP-CPJSBS35VMSBOKH6
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https" -m recent --rcheck --seconds 10800 --reap --name KUBE-SEP-3VWUJCZC3MSW5W32 --mask 255.255.255.255 --rsource -j KUBE-SEP-3VWUJCZC3MSW5W32
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https" -j KUBE-SEP-3VWUJCZC3MSW5W32
-A KUBE-SVC-R6UZIZCIT2GFGDFT -m comment --comment "anon/etcd:etcd" -j KUBE-SEP-ZSS7W6PQOP26CZ6F
-A KUBE-SVC-TCOU7JCQXEZGVUNU -m comment --comment "kube-system/kube-dns:dns" -j KUBE-SEP-K7JQ5XSWBQ7MTKDL
-A KUBE-SVC-TF3HNH35HFDYKE6V -m comment --comment "anon/anon-svc:agent" -j KUBE-SEP-WOG5WH7F5TFFOT4E
COMMIT
# Completed on Thu Apr 13 05:38:05 2017
# Generated by iptables-save v1.4.21 on Thu Apr 13 05:38:05 2017
*filter
:INPUT ACCEPT [1127:469861]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [1181:392136]
:DOCKER - [0:0]
:DOCKER-ISOLATION - [0:0]
:KUBE-FIREWALL - [0:0]
:KUBE-SERVICES - [0:0]
-A INPUT -j KUBE-FIREWALL
-A FORWARD -j DOCKER-ISOLATION
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -j KUBE-FIREWALL
-A DOCKER -d 172.17.0.2/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 2379 -j ACCEPT
-A DOCKER -d 172.17.0.2/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 2379 -j ACCEPT
-A DOCKER -d 172.17.0.2/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 2379 -j ACCEPT
-A DOCKER -d 172.17.0.2/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 2379 -j ACCEPT
-A DOCKER -d 172.17.0.4/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 443 -j ACCEPT
-A DOCKER -d 172.17.0.4/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 80 -j ACCEPT
-A DOCKER-ISOLATION -j RETURN
-A KUBE-FIREWALL -m comment --comment "kubernetes firewall for dropping marked packets" -m mark --mark 0x8000/0x8000 -j DROP
COMMIT
# Completed on Thu Apr 13 05:38:05 2017

$ ip route show table local

GCP:

local 10.142.0.10 dev eth0  proto kernel  scope host  src 10.142.0.10
broadcast 10.142.0.10 dev eth0  proto kernel  scope link  src 10.142.0.10
broadcast 127.0.0.0 dev lo  proto kernel  scope link  src 127.0.0.1
local 127.0.0.0/8 dev lo  proto kernel  scope host  src 127.0.0.1
local 127.0.0.1 dev lo  proto kernel  scope host  src 127.0.0.1
broadcast 127.255.255.255 dev lo  proto kernel  scope link  src 127.0.0.1
broadcast 172.17.0.0 dev docker0  proto kernel  scope link  src 172.17.0.1
local 172.17.0.1 dev docker0  proto kernel  scope host  src 172.17.0.1
broadcast 172.17.255.255 dev docker0  proto kernel  scope link  src 172.17.0.1

DO:

broadcast 10.10.0.0 dev eth0  proto kernel  scope link  src 10.10.0.5
local 10.10.0.5 dev eth0  proto kernel  scope host  src 10.10.0.5
broadcast 10.10.255.255 dev eth0  proto kernel  scope link  src 10.10.0.5
broadcast 67.205.144.0 dev eth0  proto kernel  scope link  src 67.205.156.80
local 67.205.156.80 dev eth0  proto kernel  scope host  src 67.205.156.80
broadcast 67.205.159.255 dev eth0  proto kernel  scope link  src 67.205.156.80
broadcast 127.0.0.0 dev lo  proto kernel  scope link  src 127.0.0.1
local 127.0.0.0/8 dev lo  proto kernel  scope host  src 127.0.0.1
local 127.0.0.1 dev lo  proto kernel  scope host  src 127.0.0.1
broadcast 127.255.255.255 dev lo  proto kernel  scope link  src 127.0.0.1
broadcast 172.17.0.0 dev docker0  proto kernel  scope link  src 172.17.0.1
local 172.17.0.1 dev docker0  proto kernel  scope host  src 172.17.0.1
broadcast 172.17.255.255 dev docker0  proto kernel  scope link  src 172.17.0.1

$ ip addr show

GCP:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1460 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 42:01:0a:8e:00:0a brd ff:ff:ff:ff:ff:ff
    inet 10.142.0.10/32 brd 10.142.0.10 scope global eth0
       valid_lft forever preferred_lft forever
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:d0:6d:28:52 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 scope global docker0
       valid_lft forever preferred_lft forever
5: veth1219894: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
    link/ether a6:4e:d4:48:4c:ff brd ff:ff:ff:ff:ff:ff
7: vetha516dc6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
    link/ether ce:f2:e7:5d:34:d2 brd ff:ff:ff:ff:ff:ff
9: veth4a6b171: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
    link/ether ee:42:d4:d8:ca:d4 brd ff:ff:ff:ff:ff:ff

DO:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether da:74:7c:ad:9d:4d brd ff:ff:ff:ff:ff:ff
    inet 67.205.156.80/20 brd 67.205.159.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet 10.10.0.5/16 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::d874:7cff:fead:9d4d/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 76:66:0a:15:cb:a6 brd ff:ff:ff:ff:ff:ff
4: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:85:21:28:00 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:85ff:fe21:2800/64 scope link
       valid_lft forever preferred_lft forever
6: veth95a5fdf: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
    link/ether 12:2c:b9:80:6c:60 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::102c:b9ff:fe80:6c60/64 scope link
       valid_lft forever preferred_lft forever
8: veth3fd8422: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
    link/ether 56:98:c1:96:0c:83 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::5498:c1ff:fe96:c83/64 scope link
       valid_lft forever preferred_lft forever
10: veth3984136: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
    link/ether ae:35:39:1c:bd:c1 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::ac35:39ff:fe1c:bdc1/64 scope link
       valid_lft forever preferred_lft forever

Let me know if you need more information.


You're going to have a very hard time debugging without a pod network. Why aren't you using one? Are you using Calico? - jaxxstorm
@jaxxstorm I'm not using Calico; this is just a single-node setup. For multi-node clusters I use kops. - chingis
If this is a single node with no overlay, I'd expect you to be able to reach the containers independently of the k8s setup. I don't know anon-svc's protocol, but just for testing from the 67.205.156.80 host, I'd expect curl -v 172.23.6.158:8125 to at least attempt a connection, as would curl -v 172.17.0.4:8125 (which, as seen in iptables-save, is where the traffic actually goes). - mdaniel
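The probes suggested in the comment above can be scripted; a sketch using the addresses from the DO dump (172.17.0.4 is the pod IP, 172.23.6.158 the ClusterIP; substitute your own). The --max-time bound makes a black-holed DNAT show up as a quick timeout instead of an indefinite hang:

```shell
#!/bin/sh
# Probe a host:8125 endpoint and report the outcome instead of hanging.
probe() {
    if curl -sS -o /dev/null --max-time 5 "http://$1:8125/"; then
        echo "$1: reachable"
    else
        echo "$1: failed (timeout or refused)"
    fi
}
probe 172.17.0.4     # pod IP directly: bypasses kube-proxy entirely
probe 172.23.6.158   # ClusterIP: exercises the iptables DNAT path
# Repeat from inside the pod itself to reproduce the failing hairpin case:
#   kubectl -n anon exec <pod> -- sh -c 'curl -v --max-time 5 172.23.6.158:8125'
```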
3 Answers


"targetPort": "agent"

I don't think that is a valid value in a normal Service YAML. Could you change it to a plain numeric port, e.g. 8080, and retry?


That's not the problem; it works fine elsewhere. From the official Kubernetes documentation: "More interestingly, targetPort can be a string, referring to the name of a port in the backend Pods." - chingis
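As the quoted documentation says, a named targetPort is resolved against a port name declared in the pod template. A minimal fragment of what the (unposted) Deployment would need for "targetPort": "agent" to resolve (the container name is a placeholder):

```json
{
    "containers": [
        {
            "name": "anon-svc",
            "ports": [
                {
                    "name": "agent",
                    "containerPort": 8125
                }
            ]
        }
    ]
}
```

Note the iptables dumps above already show the DNAT target 172.17.0.4:8125, so the name did resolve correctly, which supports the comment that this is not the issue.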


Could you share your Deployment so we can check whether the Service references it? Make sure the selector in the Service points at the same thing, in your case the name label.
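The checks suggested above can be done with kubectl; a sketch using the names from the question (assumes kubectl on the node is pointed at this cluster):

```shell
# If ENDPOINTS is empty, the Service selector matches no ready pods.
kubectl -n anon get endpoints anon-svc
# List the pods the selector would match, with their labels.
kubectl -n anon get pods -l name=anon-svc --show-labels
# Shows how the named targetPort resolved, among other details.
kubectl -n anon describe svc anon-svc
```

In this case the iptables output already contains an endpoint for the Service (the DNAT to 172.17.0.4:8125), so the selector appears to match.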



I have posted another question about a similar issue that is independent of the cloud provider. This appears to be the default behavior when using iptables as the proxy mode: a Pod cannot reach its own Service.
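One plausible mechanism for that behavior (an assumption here, not something confirmed in this thread) is hairpin NAT: with the iptables proxier, a pod that connects to its own Service is DNATed straight back to itself, so the packet must re-enter the bridge through the same port it left from. That only works when hairpin mode is enabled on the pod's veth (controlled by kubelet's --hairpin-mode flag) or the bridge is in promiscuous mode. A sketch for inspecting this on a Docker-bridge node like the ones above:

```shell
#!/bin/sh
# Print hairpin_mode for every port on the bridge (assumes Docker's
# default docker0 bridge, as in the dumps above). hairpin_mode=0 on the
# pod's veth would explain a pod timing out against its own Service.
BRIDGE=docker0
if [ -d "/sys/class/net/$BRIDGE/brif" ]; then
    for port in "/sys/class/net/$BRIDGE/brif"/*; do
        [ -e "$port" ] || continue
        printf '%s: hairpin_mode=%s\n' "$(basename "$port")" "$(cat "$port/hairpin_mode")"
    done
else
    echo "bridge $BRIDGE not present on this machine"
fi
```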

