Kubernetes kube-apiserver is not reachable via ClusterIP (:443) from nodes/pods.

I'm new to k8s and trying to stand up a 3-node cluster (1 master + 2 workers, v1.9.6) from scratch in Vagrant (Ubuntu 16.04), without any automation. I believe this is the right way for a beginner like me to get hands-on experience. Honestly, I've already spent more than a week on this and I'm getting desperate.
My problem is that the coredns pods (same story with kube-dns) cannot connect to kube-apiserver via its ClusterIP. It looks like this:
vagrant@master-0:~$ kubectl get svc --all-namespaces
NAMESPACE     NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
default       kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP         2d
kube-system   kube-dns     ClusterIP   10.0.30.1    <none>        53/UDP,53/TCP   2h

vagrant@master-0:~$ kubectl logs coredns-5c6d9fdb86-mffzk -n kube-system
E0330 15:40:45.476465       1 reflector.go:205] github.com/coredns/coredns/plugin/kubernetes/controller.go:319: Failed to list *v1.Namespace: Get https://10.0.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.0.0.1:443: i/o timeout
E0330 15:40:45.478241       1 reflector.go:205] github.com/coredns/coredns/plugin/kubernetes/controller.go:312: Failed to list *v1.Service: Get https://10.0.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.0.0.1:443: i/o timeout
E0330 15:40:45.478289       1 reflector.go:205] github.com/coredns/coredns/plugin/kubernetes/controller.go:314: Failed to list *v1.Endpoints: Get https://10.0.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.0.0.1:443: i/o timeout

Meanwhile, I can ping 10.0.0.1 from any machine and from inside pods (tested with busybox), but curl does not work.
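One sanity check here: a ClusterIP is virtual, so a successful ping proves little; what matters is whether kube-proxy has programmed a NAT rule for it. Roughly (the commands below are a sketch, using the addresses from this setup):

# On a worker node: is there a DNAT rule translating 10.0.0.1:443?
sudo iptables -t nat -S | grep 10.0.0.1

# Hit the apiserver directly on its advertise address, then via the ClusterIP
curl -kv https://192.168.0.1:6443/healthz
curl -kv --connect-timeout 5 https://10.0.0.1:443/healthz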
Master

Interfaces
br-e468013fba9d Link encap:Ethernet  HWaddr 02:42:8f:da:d3:35
          inet addr:172.18.0.1  Bcast:172.18.255.255  Mask:255.255.0.0
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

docker0   Link encap:Ethernet  HWaddr 02:42:d7:91:fd:9b
          inet addr:172.17.0.1  Bcast:172.17.255.255  Mask:255.255.0.0
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

enp0s3    Link encap:Ethernet  HWaddr 02:74:f2:80:ad:a4
          inet addr:10.0.2.15  Bcast:10.0.2.255  Mask:255.255.255.0
          inet6 addr: fe80::74:f2ff:fe80:ada4/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:3521 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2116 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:784841 (784.8 KB)  TX bytes:221888 (221.8 KB)

enp0s8    Link encap:Ethernet  HWaddr 08:00:27:45:ed:ec
          inet addr:192.168.0.1  Bcast:192.168.0.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fe45:edec/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:322839 errors:0 dropped:0 overruns:0 frame:0
          TX packets:329938 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:45879993 (45.8 MB)  TX bytes:89279972 (89.2 MB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:249239 errors:0 dropped:0 overruns:0 frame:0
          TX packets:249239 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1
          RX bytes:75677355 (75.6 MB)  TX bytes:75677355 (75.6 MB)

iptables

-P INPUT ACCEPT
-P FORWARD ACCEPT
-P OUTPUT ACCEPT
-N DOCKER
-N DOCKER-ISOLATION
-N DOCKER-USER
-A FORWARD -j DOCKER-USER
-A FORWARD -j DOCKER-ISOLATION
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A FORWARD -o br-e468013fba9d -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o br-e468013fba9d -j DOCKER
-A FORWARD -i br-e468013fba9d ! -o br-e468013fba9d -j ACCEPT
-A FORWARD -i br-e468013fba9d -o br-e468013fba9d -j ACCEPT
-A DOCKER-ISOLATION -i br-e468013fba9d -o docker0 -j DROP
-A DOCKER-ISOLATION -i docker0 -o br-e468013fba9d -j DROP
-A DOCKER-ISOLATION -j RETURN
-A DOCKER-USER -j RETURN

Routes

Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.0.2.2        0.0.0.0         UG    0      0        0 enp0s3
10.0.2.0        0.0.0.0         255.255.255.0   U     0      0        0 enp0s3
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
172.18.0.0      0.0.0.0         255.255.0.0     U     0      0        0 br-e468013fba9d
192.168.0.0     0.0.0.0         255.255.255.0   U     0      0        0 enp0s8

kube-apiserver (docker-compose)

version: '3'
services:
  kube_apiserver:
    image: gcr.io/google-containers/hyperkube:v1.9.6
    restart: always
    network_mode: host
    container_name: kube-apiserver
    ports:
      - "8080"
    volumes:
      - "/var/lib/kubernetes/ca-key.pem:/var/lib/kubernetes/ca-key.pem"
      - "/var/lib/kubernetes/ca.pem:/var/lib/kubernetes/ca.pem"
      - "/var/lib/kubernetes/kubernetes.pem:/var/lib/kubernetes/kubernetes.pem"
      - "/var/lib/kubernetes/kubernetes-key.pem:/var/lib/kubernetes/kubernetes-key.pem"
    command: ["/usr/local/bin/kube-apiserver",
              "--admission-control", "Initializers,NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota",
              "--advertise-address", "192.168.0.1",
              "--etcd-servers", "http://192.168.0.1:2379,http://192.168.0.2:2379,http://192.168.0.3:2379",
              "--insecure-bind-address", "127.0.0.1",
              "--insecure-port", "8080",
              "--kubelet-https", "true",
              "--service-cluster-ip-range", "10.0.0.0/16",
              "--allow-privileged", "true",
              "--runtime-config", "api/all",
              "--service-account-key-file", "/var/lib/kubernetes/ca-key.pem",
              "--client-ca-file", "/var/lib/kubernetes/ca.pem",
              "--tls-ca-file", "/var/lib/kubernetes/ca.pem",
              "--tls-cert-file", "/var/lib/kubernetes/kubernetes.pem",
              "--tls-private-key-file", "/var/lib/kubernetes/kubernetes-key.pem",
              "--kubelet-certificate-authority", "/var/lib/kubernetes/ca.pem",
              "--kubelet-client-certificate", "/var/lib/kubernetes/kubernetes.pem",
              "--kubelet-client-key", "/var/lib/kubernetes/kubernetes-key.pem"]

kube-controller-manager (docker-compose)

version: '3'
services:
  kube_controller_manager:
    image: gcr.io/google-containers/hyperkube:v1.9.6
    restart: always
    network_mode: host
    container_name: kube-controller-manager
    ports:
      - "10252"
    volumes:
      - "/var/lib/kubernetes/ca-key.pem:/var/lib/kubernetes/ca-key.pem"
      - "/var/lib/kubernetes/ca.pem:/var/lib/kubernetes/ca.pem"
    command: ["/usr/local/bin/kube-controller-manager",
              "--allocate-node-cidrs", "true",
              "--cluster-cidr", "10.10.0.0/16",
              "--master", "http://127.0.0.1:8080",
              "--port", "10252",
              "--service-cluster-ip-range", "10.0.0.0/16",
              "--leader-elect", "false",
              "--service-account-private-key-file", "/var/lib/kubernetes/ca-key.pem",
              "--root-ca-file", "/var/lib/kubernetes/ca.pem"]

kube-scheduler (docker-compose)

version: '3'
services:
  kube_scheduler:
    image: gcr.io/google-containers/hyperkube:v1.9.6
    restart: always
    network_mode: host
    container_name: kube-scheduler
    ports:
      - "10252"
    command: ["/usr/local/bin/kube-scheduler",
              "--master", "http://127.0.0.1:8080",
              "--port", "10251"]

Worker 0

Interfaces

br-c5e101440189 Link encap:Ethernet  HWaddr 02:42:60:ba:c9:81
          inet addr:172.18.0.1  Bcast:172.18.255.255  Mask:255.255.0.0
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

cbr0      Link encap:Ethernet  HWaddr ae:48:89:15:60:fd
          inet addr:10.10.0.1  Bcast:10.10.0.255  Mask:255.255.255.0
          inet6 addr: fe80::a406:b0ff:fe1d:1d85/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1149 errors:0 dropped:0 overruns:0 frame:0
          TX packets:409 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:72487 (72.4 KB)  TX bytes:35650 (35.6 KB)

enp0s3    Link encap:Ethernet  HWaddr 02:74:f2:80:ad:a4
          inet addr:10.0.2.15  Bcast:10.0.2.255  Mask:255.255.255.0
          inet6 addr: fe80::74:f2ff:fe80:ada4/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:3330 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2269 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:770147 (770.1 KB)  TX bytes:246770 (246.7 KB)

enp0s8    Link encap:Ethernet  HWaddr 08:00:27:07:69:06
          inet addr:192.168.0.2  Bcast:192.168.0.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fe07:6906/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:268762 errors:0 dropped:0 overruns:0 frame:0
          TX packets:258080 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:48488207 (48.4 MB)  TX bytes:25791040 (25.7 MB)

flannel.1 Link encap:Ethernet  HWaddr 86:8e:2f:c4:98:82
          inet addr:10.10.0.0  Bcast:0.0.0.0  Mask:255.255.255.255
          inet6 addr: fe80::848e:2fff:fec4:9882/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1450  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:8 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:2955 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2955 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1
          RX bytes:218772 (218.7 KB)  TX bytes:218772 (218.7 KB)

vethe5d2604 Link encap:Ethernet  HWaddr ae:48:89:15:60:fd
          inet6 addr: fe80::ac48:89ff:fe15:60fd/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:10 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:828 (828.0 B)

iptables

-P INPUT ACCEPT
-P FORWARD ACCEPT
-P OUTPUT ACCEPT
-N DOCKER
-N DOCKER-ISOLATION
-N DOCKER-USER
-N KUBE-FIREWALL
-N KUBE-FORWARD
-N KUBE-SERVICES
-A INPUT -j KUBE-FIREWALL
-A INPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A FORWARD -j DOCKER-USER
-A FORWARD -m comment --comment "kubernetes forward rules" -j KUBE-FORWARD
-A FORWARD -s 10.0.0.0/16 -j ACCEPT
-A FORWARD -d 10.0.0.0/16 -j ACCEPT
-A OUTPUT -j KUBE-FIREWALL
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A DOCKER-USER -j RETURN
-A KUBE-FIREWALL -m comment --comment "kubernetes firewall for dropping marked packets" -m mark --mark 0x8000/0x8000 -j DROP
-A KUBE-FORWARD -m comment --comment "kubernetes forwarding rules" -m mark --mark 0x4000/0x4000 -j ACCEPT
-A KUBE-FORWARD -s 10.10.0.0/16 -m comment --comment "kubernetes forwarding conntrack pod source rule" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A KUBE-FORWARD -d 10.10.0.0/16 -m comment --comment "kubernetes forwarding conntrack pod destination rule" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT

Routes

Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.0.2.2        0.0.0.0         UG    0      0        0 enp0s3
10.0.2.0        0.0.0.0         255.255.255.0   U     0      0        0 enp0s3
10.10.0.0       0.0.0.0         255.255.255.0   U     0      0        0 cbr0
10.10.1.0       10.10.1.0       255.255.255.0   UG    0      0        0 flannel.1
172.18.0.0      0.0.0.0         255.255.0.0     U     0      0        0 br-c5e101440189
192.168.0.0     0.0.0.0         255.255.255.0   U     0      0        0 enp0s8

kubelet (systemd service)

[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
#After=docker.service
#Requires=docker.service

[Service]
ExecStart=/usr/local/bin/kubelet \
  --allow-privileged=true \
  --anonymous-auth=false \
  --authorization-mode=AlwaysAllow \
  --cloud-provider= \
  --cluster-dns=10.0.30.1 \
  --cluster-domain=cluster.local \
  --node-ip=192.168.0.2 \
  --pod-cidr=10.10.0.0/24 \
  --kubeconfig=/var/lib/kubelet/kubeconfig \
  --runtime-request-timeout=15m \
  --hostname-override=worker0 \
#  --read-only-port=10255 \
  --client-ca-file=/var/lib/kubernetes/ca.pem \
  --tls-cert-file=/var/lib/kubelet/worker0.pem \
  --tls-private-key-file=/var/lib/kubelet/worker0-key.pem
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

kube-proxy (systemd service)

[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/kubernetes/kubernetes
#After=docker.service
#Requires=docker.service

[Service]
ExecStart=/usr/local/bin/kube-proxy \
  --cluster-cidr=10.10.0.0/16 \
  --kubeconfig=/var/lib/kube-proxy/kubeconfig \
  --v=5
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

Worker1's configuration is very similar to worker0's.

Let me know if you need any additional information.


https://github.com/kelseyhightower/kubernetes-the-hard-way - mon
I've been following that guide the whole way. The networking part is a bit different when you're not in a cloud environment. I'm trying to set up a cluster in Vagrant, and my problem is exactly the networking. - Roman T.
If networking is the problem, how about listing all the network configuration and connectivity tests? For instance, is sysctl net.ipv4.ip_forward set to 1, ... - mon
What do curl -ivk https://10.0.0.1:443 and curl -ivk https://192.168.0.1:443 output, respectively? What about the kubelet and API server logs? - mon
Just to clarify: 10.0.0.1 is not a NAT address, correct? - mon
@mon net.ipv4.ip_forward was set to 1 from the start. Running curl -ivk 10.0.0.1:443 from a node results in _Trying..._, while curl -ivk 192.168.0.1:443 returns _Unauthorized_ (I didn't pass certificates, so that's fine). 10.0.0.1 is simply the first IP of the subnet I reserved for services (10.0.0.0/16); by convention, I assumed the first IP gets assigned to the default "kubernetes" service. Running kubectl get svc -n default returns _default kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 4d_. - Roman T.
4 Answers

According to the kube-apiserver documentation:
--bind-address ip     The IP address on which to listen for the --secure-port port. The associated interface(s) must be reachable by the rest of the cluster, and by CLI/web clients. If blank, all interfaces will be used (0.0.0.0 for all IPv4 interfaces and :: for all IPv6 interfaces). (default 0.0.0.0)
--secure-port int     The port on which to serve HTTPS with authentication and authorization. If 0, don't serve HTTPS at all. (default 6443)

As far as I can see, the --bind-address and --secure-port flags are not defined in your kube-apiserver configuration, so by default kube-apiserver listens for HTTPS connections on 0.0.0.0:6443. To fix your problem, just add the --secure-port flag to the kube-apiserver configuration:
"--secure-port", "443",

Thanks for your help, although it still doesn't work as expected. I added --secure-port to kube-apiserver, also changed the URLs in the kubeconfigs for kubelet and kube-proxy, and restarted everything. The situation is still the same - the dns pod cannot reach kube-apiserver, and curl from the nodes doesn't work either. - Roman T.
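For anyone reproducing this: "changed the URLs in the kubeconfigs" means pointing the server field at the secure port. A minimal sketch of /var/lib/kube-proxy/kubeconfig, with hypothetical client-certificate paths, might look like:

apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority: /var/lib/kubernetes/ca.pem
    server: https://192.168.0.1:443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kube-proxy
  name: default
current-context: default
users:
- name: kube-proxy
  user:
    client-certificate: /var/lib/kube-proxy/kube-proxy.pem    # hypothetical path
    client-key: /var/lib/kube-proxy/kube-proxy-key.pem        # hypothetical path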


Changed from:

--service-cluster-ip-range", "10.0.0.0/16

to:

--service-cluster-ip-range", "10.10.0.0/16

so that the --service-cluster-ip-range value matches the Flannel CIDR.
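To compare the two ranges, flannel's cluster-wide network config can be read back (a sketch; this assumes flannel's default etcd prefix and the standard subnet lease file):

# Flannel's Network CIDR as stored in etcd (default prefix /coreos.com/network)
etcdctl get /coreos.com/network/config

# The subnet flannel leased to this node
cat /run/flannel/subnet.env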



When I hit this problem (kube-apiserver not reachable via ClusterIP), separating the following two networks during kubeadm init helped:

--pod-network-cidr=10.0.5.0/24 --service-cidr=10.0.96.0/24

...which also kept them both inside my network (10.0.0.0/16), while not overlapping the default service CIDR (10.96.0.0/12).

Version: Kubernetes 1.22

Source: https://github.com/coredns/coredns/issues/3704
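For reference, the full invocation might look like this (a sketch; pick CIDRs that fit your host network and do not overlap each other):

sudo kubeadm init \
  --pod-network-cidr=10.0.5.0/24 \
  --service-cidr=10.0.96.0/24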


Make sure the host where your apiserver pod runs has iptables set up to accept your pods' CIDR range, e.g.:
-A INPUT -s 10.32.0.0/12 -j ACCEPT

I believe this is related to iptables not using the translated address as the source address when a service is accessed from the same host it runs on.
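Applied to the setup in this question, that would mean accepting the pod network on the master's INPUT chain (a sketch; 10.10.0.0/16 is the cluster CIDR used above):

# On the master: accept traffic sourced from the pod CIDR
sudo iptables -A INPUT -s 10.10.0.0/16 -j ACCEPT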

