kubectl proxy not working on Ubuntu 18.04 LTS

I installed Kubernetes on Ubuntu 18.04 using this article, and everything ran fine. I then tried to install the Kubernetes dashboard using these instructions.
However, now when I run kubectl proxy, the dashboard does not come up. When I try to access it at the default kubernetes-dashboard URL, the browser shows the following error message: http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {

  },
  "status": "Failure",
  "message": "no endpoints available for service \"https:kubernetes-dashboard:\"",
  "reason": "ServiceUnavailable",
  "code": 503
}

The following command shows the kubernetes-dashboard pod in CrashLoopBackOff status: $> kubectl get pods --all-namespaces
NAMESPACE              NAME                                         READY   STATUS             RESTARTS   AGE
default                amazing-app-rs-59jt9                         1/1     Running            5          23d
default                amazing-app-rs-k6fg5                         1/1     Running            5          23d
default                amazing-app-rs-qd767                         1/1     Running            5          23d
default                amazingapp-one-deployment-57dddd6fb7-xdxlp   1/1     Running            5          23d
default                nginx-86c57db685-vwfzf                       1/1     Running            4          22d
kube-system            coredns-6955765f44-nqphx                     0/1     Running            14         25d
kube-system            coredns-6955765f44-psdv4                     0/1     Running            14         25d
kube-system            etcd-master-node                             1/1     Running            8          25d
kube-system            kube-apiserver-master-node                   1/1     Running            42         25d
kube-system            kube-controller-manager-master-node          1/1     Running            11         25d
kube-system            kube-flannel-ds-amd64-95lvl                  1/1     Running            8          25d
kube-system            kube-proxy-qcpqm                             1/1     Running            8          25d
kube-system            kube-scheduler-master-node                   1/1     Running            11         25d
kubernetes-dashboard   dashboard-metrics-scraper-7b64584c5c-kvz5d   1/1     Running            0          41m
kubernetes-dashboard   kubernetes-dashboard-566f567dc7-w2sbk        0/1     CrashLoopBackOff   12         41m

$> kubectl get services --all-namespaces

NAMESPACE              NAME                        TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
default                kubernetes                  ClusterIP   ----------   <none>        443/TCP                  25d
default                nginx                       NodePort    ----------   <none>        80:32188/TCP             22d
kube-system            kube-dns                    ClusterIP   ----------   <none>        53/UDP,53/TCP,9153/TCP   25d
kubernetes-dashboard   dashboard-metrics-scraper   ClusterIP   ----------   <none>        8000/TCP                 24d
kubernetes-dashboard   kubernetes-dashboard        ClusterIP   ----------   <none>        443/TCP                  24d

$ kubectl get events -n kubernetes-dashboard

LAST SEEN   TYPE      REASON    OBJECT                                      MESSAGE
24m         Normal    Pulling   pod/kubernetes-dashboard-566f567dc7-w2sbk   Pulling image "kubernetesui/dashboard:v2.0.0-rc2"
4m46s       Warning   BackOff   pod/kubernetes-dashboard-566f567dc7-w2sbk   Back-off restarting failed container

$ kubectl describe services kubernetes-dashboard -n kubernetes-dashboard

Name:              kubernetes-dashboard
Namespace:         kubernetes-dashboard
Labels:            k8s-app=kubernetes-dashboard
Annotations:       kubectl.kubernetes.io/last-applied-configuration:
                     {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"k8s-app":"kubernetes-dashboard"},"name":"kubernetes-dashboard"...
Selector:          k8s-app=kubernetes-dashboard
Type:              ClusterIP
IP:                10.96.241.62
Port:              <unset>  443/TCP
TargetPort:        8443/TCP
Endpoints:         
Session Affinity:  None
Events:            <none>

The kubernetes-dashboard container logs show the following: $ kubectl logs kubernetes-dashboard-566f567dc7-w2sbk -n kubernetes-dashboard

2020/01/29 16:00:34 Starting overwatch
2020/01/29 16:00:34 Using namespace: kubernetes-dashboard
2020/01/29 16:00:34 Using in-cluster config to connect to apiserver
2020/01/29 16:00:34 Using secret token for csrf signing
2020/01/29 16:00:34 Initializing csrf token from kubernetes-dashboard-csrf secret
panic: Get https://10.96.0.1:443/api/v1/namespaces/kubernetes-dashboard/secrets/kubernetes-dashboard-csrf: dial tcp 10.96.0.1:443: i/o timeout

goroutine 1 [running]:
github.com/kubernetes/dashboard/src/app/backend/client/csrf.(*csrfTokenManager).init(0xc0003dac80)
        /home/travis/build/kubernetes/dashboard/src/app/backend/client/csrf/manager.go:40 +0x3b4
github.com/kubernetes/dashboard/src/app/backend/client/csrf.NewCsrfTokenManager(...)
        /home/travis/build/kubernetes/dashboard/src/app/backend/client/csrf/manager.go:65
github.com/kubernetes/dashboard/src/app/backend/client.(*clientManager).initCSRFKey(0xc000534200)
        /home/travis/build/kubernetes/dashboard/src/app/backend/client/manager.go:494 +0xc7
github.com/kubernetes/dashboard/src/app/backend/client.(*clientManager).init(0xc000534200)
        /home/travis/build/kubernetes/dashboard/src/app/backend/client/manager.go:462 +0x47
github.com/kubernetes/dashboard/src/app/backend/client.NewClientManager(...)
        /home/travis/build/kubernetes/dashboard/src/app/backend/client/manager.go:543
main.main()
        /home/travis/build/kubernetes/dashboard/src/app/backend/dashboard.go:105 +0x212

Any suggestions on how to fix this? Thanks in advance.

Logs of the pod that is in CrashLoopBackOff state? - Arghya Sadhu
@ArghyaSadhu with kubectl logs kubernetes-dashboard-566f567dc7-w2sbk it says Error from server (NotFound): pods "kubernetes-dashboard-566f567dc7-w2sbk" not found - Kundan
What do kubectl get events -n kubernetes-dashboard and kubectl describe services kubernetes-dashboard -n kubernetes-dashboard show? - Arghya Sadhu
kubectl get events - No resources found in default namespace. - Kundan
@CodeRunner Add the namespace to your kubectl command: kubectl logs kubernetes-dashboard-566f567dc7-w2sbk -n kubernetes-dashboard - BinaryMonster
Updated the original post with the output of the commands above. - Kundan
1 Answer

I noticed that the guide you used to install your Kubernetes cluster is missing an important part.

According to the Kubernetes documentation:

For flannel to work correctly, you must pass --pod-network-cidr=10.244.0.0/16 to kubeadm init.

Set /proc/sys/net/bridge/bridge-nf-call-iptables to 1 by running sysctl net.bridge.bridge-nf-call-iptables=1 to pass bridged IPv4 traffic to iptables’ chains. This is a requirement for some CNI plugins to work, for more information please see here.

Make sure that your firewall rules allow UDP ports 8285 and 8472 traffic for all hosts participating in the overlay network. See here.

Note that flannel works on amd64, arm, arm64, ppc64le and s390x under Linux. Windows (amd64) is claimed as supported in v0.11.0 but the usage is undocumented.

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/2140ac876ef134e0ed5af15c65e414cf26827915/Documentation/kube-flannel.yml

For more information about flannel, see the CoreOS flannel repository on GitHub.
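
The quoted requirements map to concrete commands. A minimal sketch follows, assuming you can afford to rebuild the control plane and that ufw is your firewall (both are assumptions; adapt to your environment, and note that kubeadm reset wipes the existing cluster state):

# Re-initialize the control plane with the pod CIDR that flannel expects
sudo kubeadm reset
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

# Allow the UDP ports flannel uses for the overlay network (ufw assumed)
sudo ufw allow 8285/udp
sudo ufw allow 8472/udp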

To resolve this issue:

I suggest running the following command:

sysctl net.bridge.bridge-nf-call-iptables=1
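
Note that sysctl applied this way only lasts until the next reboot. A sketch for making the setting persistent (the file name k8s.conf is arbitrary):

# Persist the bridge setting across reboots
echo 'net.bridge.bridge-nf-call-iptables = 1' | sudo tee /etc/sysctl.d/k8s.conf
sudo sysctl --system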

Then reinstall flannel:
kubectl delete -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
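
After reapplying the manifest, you can verify the recovery with commands like these (the app=flannel label selector is an assumption based on the labels used in that manifest):

# Flannel pods should be Running on every node
kubectl get pods -n kube-system -l app=flannel
# The dashboard pod should leave CrashLoopBackOff, and its service should gain an endpoint
kubectl get pods -n kubernetes-dashboard
kubectl get endpoints kubernetes-dashboard -n kubernetes-dashboard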

Update: I verified that the default value of /proc/sys/net/bridge/bridge-nf-call-iptables on ubuntu-18-04-lts is already 1, so the remaining issue is that the dashboard has to be accessed locally.
If you connect to the master node over ssh, you can try launching a web browser through X11 forwarding using ssh's -X flag (ForwardX11). Fortunately, ubuntu-18-04-lts has this enabled by default.
ssh -X server

Then install a web browser on the node, for example Chromium:
sudo apt-get install chromium-browser

chromium-browser

Finally, access the dashboard locally from the node:
http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
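
If the browser still shows an error, a quick way to confirm that the proxy is actually serving the dashboard (assuming kubectl proxy is running in another terminal on the same node):

curl http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/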

Hope it helps.

kubectl delete -f kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml throws an error: the path "kubectl" does not exist. Is there a typo? - Kundan
Indeed. I have edited my answer. You could try restarting the flannel pods, but I am not sure if that will be enough. If the worker nodes do not show up in the node list, you may also need to rejoin them. If that happens, I will show how to deal with it. - Piotr Malec
Updated the answer again. Please try the sudo kubectl delete -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml command. - Piotr Malec
Try curl http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/ on your master node. - Piotr Malec
I suggest following the official Kubernetes documentation. As for the guide you used, everything worked fine for me on a GCP Compute Engine VM with the ubuntu-18-04-lts image. Regarding dashboard access, it has to be accessed locally from the node, as in the URL you linked. - Piotr Malec
