PersistentVolumeClaim is not bound: "nfs-pv-provisioning-demo"

I am building a single-node Kubernetes lab to learn how to set up NFS on Kubernetes. I am following the steps of the Kubernetes NFS example at https://github.com/kubernetes/examples/tree/master/staging/volumes/nfs . While working through the first part (the NFS server section), I ran these three commands:
$ kubectl create -f examples/volumes/nfs/provisioner/nfs-server-gce-pv.yaml
$ kubectl create -f examples/volumes/nfs/nfs-server-rc.yaml
$ kubectl create -f examples/volumes/nfs/nfs-server-service.yaml

I ran into a problem; I keep seeing the following event:

PersistentVolumeClaim is not bound: "nfs-pv-provisioning-demo"

Research done so far:

https://github.com/kubernetes/kubernetes/issues/43120

https://github.com/kubernetes/examples/pull/30

Neither of the links above helped me solve the issue I am hitting. I have already confirmed that it is using image 0.8:

Image:        gcr.io/google_containers/volume-nfs:0.8

Does anyone know what this message means? Any troubleshooting hints and pointers would be greatly appreciated. Thanks.

$ docker version

Client:
 Version:      17.09.0-ce
 API version:  1.32
 Go version:   go1.8.3
 Git commit:   afdb6d4
 Built:        Tue Sep 26 22:41:23 2017
 OS/Arch:      linux/amd64

Server:
 Version:      17.09.0-ce
 API version:  1.32 (minimum version 1.12)
 Go version:   go1.8.3
 Git commit:   afdb6d4
 Built:        Tue Sep 26 22:42:49 2017
 OS/Arch:      linux/amd64
 Experimental: false


$ kubectl version
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.3", GitCommit:"f0efb3cb883751c5ffdbe6d515f3cb4fbe7b7acd", GitTreeState:"clean", BuildDate:"2017-11-08T18:39:33Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.3", GitCommit:"f0efb3cb883751c5ffdbe6d515f3cb4fbe7b7acd", GitTreeState:"clean", BuildDate:"2017-11-08T18:27:48Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}


$ kubectl get nodes
NAME          STATUS    ROLES     AGE       VERSION
lab-kube-06   Ready     master    2m        v1.8.3


$ kubectl describe nodes lab-kube-06
Name:               lab-kube-06
Roles:              master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/hostname=lab-kube-06
                    node-role.kubernetes.io/master=
Annotations:        node.alpha.kubernetes.io/ttl=0
                    volumes.kubernetes.io/controller-managed-attach-detach=true
Taints:             <none>
CreationTimestamp:  Thu, 16 Nov 2017 16:51:28 +0000
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  OutOfDisk        False   Thu, 16 Nov 2017 17:30:36 +0000   Thu, 16 Nov 2017 16:51:28 +0000   KubeletHasSufficientDisk     kubelet has sufficient disk space available
  MemoryPressure   False   Thu, 16 Nov 2017 17:30:36 +0000   Thu, 16 Nov 2017 16:51:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Thu, 16 Nov 2017 17:30:36 +0000   Thu, 16 Nov 2017 16:51:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  Ready            True    Thu, 16 Nov 2017 17:30:36 +0000   Thu, 16 Nov 2017 16:51:28 +0000   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  10.0.0.6
  Hostname:    lab-kube-06
Capacity:
 cpu:     2
 memory:  8159076Ki
 pods:    110
Allocatable:
 cpu:     2
 memory:  8056676Ki
 pods:    110
System Info:
 Machine ID:                 e198b57826ab4704a6526baea5fa1d06
 System UUID:                05EF54CC-E8C8-874B-A708-BBC7BC140FF2
 Boot ID:                    3d64ad16-5603-42e9-bd34-84f6069ded5f
 Kernel Version:             3.10.0-693.el7.x86_64
 OS Image:                   Red Hat Enterprise Linux Server 7.4 (Maipo)
 Operating System:           linux
 Architecture:               amd64
 Container Runtime Version:  docker://Unknown
 Kubelet Version:            v1.8.3
 Kube-Proxy Version:         v1.8.3
ExternalID:                  lab-kube-06
Non-terminated Pods:         (7 in total)
  Namespace                  Name                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ---------                  ----                                   ------------  ----------  ---------------  -------------
  kube-system                etcd-lab-kube-06                       0 (0%)        0 (0%)      0 (0%)           0 (0%)
  kube-system                kube-apiserver-lab-kube-06             250m (12%)    0 (0%)      0 (0%)           0 (0%)
  kube-system                kube-controller-manager-lab-kube-06    200m (10%)    0 (0%)      0 (0%)           0 (0%)
  kube-system                kube-dns-545bc4bfd4-gmdvn              260m (13%)    0 (0%)      110Mi (1%)       170Mi (2%)
  kube-system                kube-proxy-68w8k                       0 (0%)        0 (0%)      0 (0%)           0 (0%)
  kube-system                kube-scheduler-lab-kube-06             100m (5%)     0 (0%)      0 (0%)           0 (0%)
  kube-system                weave-net-7zlbg                        20m (1%)      0 (0%)      0 (0%)           0 (0%)
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ------------  ----------  ---------------  -------------
  830m (41%)    0 (0%)      110Mi (1%)       170Mi (2%)
Events:
  Type    Reason                   Age                From                     Message
  ----    ------                   ----               ----                     -------
  Normal  Starting                 39m                kubelet, lab-kube-06     Starting kubelet.
  Normal  NodeAllocatableEnforced  39m                kubelet, lab-kube-06     Updated Node Allocatable limit across pods
  Normal  NodeHasSufficientDisk    39m (x8 over 39m)  kubelet, lab-kube-06     Node lab-kube-06 status is now: NodeHasSufficientDisk
  Normal  NodeHasSufficientMemory  39m (x8 over 39m)  kubelet, lab-kube-06     Node lab-kube-06 status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    39m (x7 over 39m)  kubelet, lab-kube-06     Node lab-kube-06 status is now: NodeHasNoDiskPressure
  Normal  Starting                 38m                kube-proxy, lab-kube-06  Starting kube-proxy.



$ kubectl get pvc
NAME                       STATUS    VOLUME    CAPACITY   ACCESS MODES   STORAGECLASS   AGE
nfs-pv-provisioning-demo   Pending                                                      14s


$ kubectl get events
LAST SEEN   FIRST SEEN   COUNT     NAME                                        KIND                    SUBOBJECT   TYPE      REASON                    SOURCE                        MESSAGE
18m         18m          1         lab-kube-06.14f79f093119829a                Node                                Normal    Starting                  kubelet, lab-kube-06          Starting kubelet.
18m         18m          8         lab-kube-06.14f79f0931d0eb6e                Node                                Normal    NodeHasSufficientDisk     kubelet, lab-kube-06          Node lab-kube-06 status is now: NodeHasSufficientDisk
18m         18m          8         lab-kube-06.14f79f0931d1253e                Node                                Normal    NodeHasSufficientMemory   kubelet, lab-kube-06          Node lab-kube-06 status is now: NodeHasSufficientMemory
18m         18m          7         lab-kube-06.14f79f0931d131be                Node                                Normal    NodeHasNoDiskPressure     kubelet, lab-kube-06          Node lab-kube-06 status is now: NodeHasNoDiskPressure
18m         18m          1         lab-kube-06.14f79f0932f3f1b0                Node                                Normal    NodeAllocatableEnforced   kubelet, lab-kube-06          Updated Node Allocatable limit across pods
18m         18m          1         lab-kube-06.14f79f122a32282d                Node                                Normal    RegisteredNode            controllermanager             Node lab-kube-06 event: Registered Node lab-kube-06 in Controller
17m         17m          1         lab-kube-06.14f79f1cdfc4c3b1                Node                                Normal    Starting                  kube-proxy, lab-kube-06       Starting kube-proxy.
17m         17m          1         lab-kube-06.14f79f1d94ef1c17                Node                                Normal    RegisteredNode            controllermanager             Node lab-kube-06 event: Registered Node lab-kube-06 in Controller
14m         14m          1         lab-kube-06.14f79f4b91cf73b3                Node                                Normal    RegisteredNode            controllermanager             Node lab-kube-06 event: Registered Node lab-kube-06 in Controller
58s         11m          42        nfs-pv-provisioning-demo.14f79f766cf887f2   PersistentVolumeClaim               Normal    FailedBinding             persistentvolume-controller   no persistent volumes available for this claim and no storage class is set
14s         4m           20        nfs-server-kq44h.14f79fd21b9db5f9           Pod                                 Warning   FailedScheduling          default-scheduler             PersistentVolumeClaim is not bound: "nfs-pv-provisioning-demo"
4m          4m           1         nfs-server.14f79fd21b946027                 ReplicationController               Normal    SuccessfulCreate          replication-controller        Created pod: nfs-server-kq44h

$ kubectl get pods
NAME               READY     STATUS    RESTARTS   AGE
nfs-server-kq44h   0/1       Pending   0          16s


$ kubectl get pods

NAME               READY     STATUS    RESTARTS   AGE
nfs-server-kq44h   0/1       Pending   0          26s


$ kubectl get rc

NAME         DESIRED   CURRENT   READY     AGE
nfs-server   1         1         0         40s


$ kubectl describe pods nfs-server-kq44h

Name:           nfs-server-kq44h
Namespace:      default
Node:           <none>
Labels:         role=nfs-server
Annotations:    kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"default","name":"nfs-server","uid":"5653eb53-caf0-11e7-ac02-000d3a04eb...
Status:         Pending
IP:
Created By:     ReplicationController/nfs-server
Controlled By:  ReplicationController/nfs-server
Containers:
  nfs-server:
    Image:        gcr.io/google_containers/volume-nfs:0.8
    Ports:        2049/TCP, 20048/TCP, 111/TCP
    Environment:  <none>
    Mounts:
      /exports from mypvc (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-plgv5 (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  mypvc:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  nfs-pv-provisioning-demo
    ReadOnly:   false
  default-token-plgv5:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-plgv5
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.alpha.kubernetes.io/notReady:NoExecute for 300s
                 node.alpha.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age                From               Message
  ----     ------            ----               ----               -------
  Warning  FailedScheduling  39s (x22 over 5m)  default-scheduler  PersistentVolumeClaim is not bound: "nfs-pv-provisioning-demo"

Could you share the output of 'kubectl get pv'? As far as I know, you need to have a PersistentVolume or a StorageClass resource. If you don't have a PV, the PVC cannot be bound. - Suresh Vishnoi
$ kubectl get pv returns "No resources found." After running the first command, $ kubectl create -f examples/volumes/nfs/provisioner/nfs-server-gce-pv.yaml, I see the claim stuck in Pending. - Makin
You just need to create a persistent volume. - Suresh Vishnoi
1 Answer

Every Persistent Volume Claim (PVC) needs a Persistent Volume (PV) that it can bind to. In your example, you have only created the PVC, but not the volume itself.
A PV can either be created manually, or automatically by using a volume class with a provisioner. Have a look at the documentation on static and dynamic provisioning for more information:

There are two ways PVs may be provisioned: statically or dynamically.

Static

A cluster administrator creates a number of PVs. They carry the details of the real storage which is available for use by cluster users. [...]

Dynamic

When none of the static PVs the administrator created matches a user's PersistentVolumeClaim, the cluster may try to dynamically provision a volume specially for the PVC. This provisioning is based on StorageClasses: the PVC must request a class and the administrator must have created and configured that class in order for dynamic provisioning to occur.
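
Purely for illustration, the dynamic path would look roughly like the sketch below. The class name fast, the GCE provisioner and the claim name are just example values, and this only provisions anything if the cluster actually runs a matching provisioner (which a bare single-node lab normally does not):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast                        # example class name
provisioner: kubernetes.io/gce-pd   # cloud-specific in-tree provisioner; a no-op on bare metal
parameters:
  type: pd-ssd
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: claim-using-class
spec:
  storageClassName: fast            # request the class defined above
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi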

In your example, you created a storage class provisioner (defined in examples/volumes/nfs/provisioner/nfs-server-gce-pv.yaml) that appears to be tailored for use on Google Cloud (and it will most likely not be able to actually create PVs in your lab setup).
You can create a Persistent Volume manually. Once the PV exists, the PVC should bind to it automatically and your pod should start. Below is an example of a Persistent Volume that uses the node's local file system as its backing storage (which is probably fine for a single-node test setup):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: some-volume   # object names must be lowercase DNS-1123 names
spec:
  capacity:
    storage: 200Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /path/on/host
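
If you save the definition above to a file (the file name below is just an example), creating it should let the claim move from Pending to Bound:

$ kubectl create -f my-pv.yaml
$ kubectl get pv,pvc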

For production use you would probably want to choose a volume type other than hostPath, although which volume types are available to you will vary greatly depending on your environment (cloud or self-hosted/bare-metal).
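
Since this thread is about NFS, here is a rough sketch of what a PV backed by an actual NFS export could look like instead of hostPath; the server address and export path below are placeholders for your own NFS server (for example, the nfs-server service created later in the example):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany        # NFS supports multiple writers
  nfs:
    server: 10.0.0.10      # placeholder: IP of your NFS server or service
    path: /exports         # placeholder: exported directory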


Following your guidance, I manually created a persistent volume myself, and it works:

$ cat nfs-server-local-pv01.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv01
  labels:
    type: local
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/tmp/data01"

$ cat nfs-server-local-pvc01.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pv-provisioning-demo
  labels:
    demo: nfs-pv-provisioning
spec:
  accessModes: [ "ReadWriteOnce" ]
  resources:
    requests:
      storage: 5Gi
- Makin
Thanks, helmbert. I have posted a second question while trying to get the same NFS lab working, at the following link: https://dev59.com/IOk5XIcBkEYKwwoY9-hg - Makin
