Kubectl error: the object has been modified; please apply your changes to the latest version and try again

69

I am getting the following error while trying to apply a patch:

core@dgoutam22-1-coreos-5760 ~ $ kubectl apply -f ads-central-configuration.yaml
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
Error from server (Conflict): error when applying patch:
{"data":{"default":"{\"dedicated_redis_cluster\": {\"nodes\": [{\"host\": \"192.168.1.94\", \"port\": 6379}]}}"},"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"data\":{\"default\":\"{\\\"dedicated_redis_cluster\\\": {\\\"nodes\\\": [{\\\"host\\\": \\\"192.168.1.94\\\", \\\"port\\\": 6379}]}}\"},\"kind\":\"ConfigMap\",\"metadata\":{\"annotations\":{},\"creationTimestamp\":\"2018-06-27T07:19:13Z\",\"labels\":{\"acp-app\":\"acp-discovery-service\",\"version\":\"1\"},\"name\":\"ads-central-configuration\",\"namespace\":\"acp-system\",\"resourceVersion\":\"1109832\",\"selfLink\":\"/api/v1/namespaces/acp-system/configmaps/ads-central-configuration\",\"uid\":\"64901676-79da-11e8-bd65-fa163eaa7a28\"}}\n"},"creationTimestamp":"2018-06-27T07:19:13Z","resourceVersion":"1109832","uid":"64901676-79da-11e8-bd65-fa163eaa7a28"}}
to:
&{0xc4200bb380 0xc420356230 acp-system ads-central-configuration ads-central-configuration.yaml 0xc42000c970 4434 false}
**for: "ads-central-configuration.yaml": Operation cannot be fulfilled on configmaps "ads-central-configuration": the object has been modified; please apply your changes to the latest version and try again**
core@dgoutam22-1-coreos-5760 ~ $ 

3
Please share the content of the .yaml file. - aurelius
7 Answers

93

Most likely your yaml configuration was copy-pasted from generated content, and therefore contains fields such as creationTimestamp (as well as resourceVersion, selfLink, and uid) that do not belong in a declarative configuration file.

Go through your yaml carefully and clean it up. Remove anything that is specific to one instance of the object. Your final yaml should be simple enough that you can easily understand it.


54
Remove these lines from the file:

  creationTimestamp:
  resourceVersion:
  selfLink:
  uid:

Then try applying it again.
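The cleanup above can be sketched as a small shell session. The manifest below is reconstructed from the question's error output, and the `sed` invocation assumes GNU sed (on macOS/BSD use `sed -i ''`):

```shell
# Stand-in for the output of:
#   kubectl get configmap ads-central-configuration -n acp-system -o yaml
cat > ads-central-configuration.yaml <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: ads-central-configuration
  namespace: acp-system
  creationTimestamp: "2018-06-27T07:19:13Z"
  resourceVersion: "1109832"
  selfLink: /api/v1/namespaces/acp-system/configmaps/ads-central-configuration
  uid: 64901676-79da-11e8-bd65-fa163eaa7a28
data:
  default: '{"dedicated_redis_cluster": {"nodes": [{"host": "192.168.1.94", "port": 6379}]}}'
EOF

# Delete the server-managed lines; what remains can then be applied cleanly
# with: kubectl apply -f ads-central-configuration.yaml
sed -i '/^  creationTimestamp:/d; /^  resourceVersion:/d; /^  selfLink:/d; /^  uid:/d' \
  ads-central-configuration.yaml
```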


6
You have probably edited the same exported deployment file..
1 - Try to re-export it with:
kubectl get deployment <DEPLOYMENT-NAME> -o yaml > deployment-file.yaml

2 - Make the necessary changes in "deployment-file.yaml".

3 - Apply the changes with:

kubectl apply -f deployment-file.yaml

Or:

You may want to edit the deployment directly.. Use:

kubectl edit deployment <DEPLOYMENT-NAME> -o yaml

If you are not familiar with the VI editor, change the default editor: export EDITOR=nano


6

Note: adding the latest resourceVersion to your update makes it work:

kubectl get deployment <DEPLOYMENT-NAME> -o yaml | grep resourceVersion
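Concretely, you can splice the live object's resourceVersion into your manifest before applying. A minimal sketch, with local files standing in for the `kubectl get` output (the version numbers 219951/218884 and the file names are hypothetical; the `sed -i` form assumes GNU sed):

```shell
# live.yaml stands in for: kubectl get deployment <DEPLOYMENT-NAME> -o yaml
printf 'metadata:\n  resourceVersion: "219951"\n' > live.yaml
# mine.yaml is the manifest you want to apply, carrying a stale version
printf 'metadata:\n  resourceVersion: "218884"\n' > mine.yaml

# Grab the current resourceVersion from the live object...
rv=$(grep resourceVersion live.yaml | awk '{print $2}' | tr -d '"')
# ...and copy it into the manifest before running: kubectl apply -f mine.yaml
sed -i "s/resourceVersion: \".*\"/resourceVersion: \"$rv\"/" mine.yaml
```

Note that simply deleting the resourceVersion line (as the other answers suggest) is usually the cleaner fix; pinning it opts you in to Kubernetes' optimistic-concurrency check, so the apply will fail again if anything else updates the object first.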

2

This error comes because there is a resourceVersion entry in the deployment.yaml. Remove it, since it is not needed, and you will be able to apply the new configuration.


2

I was able to reproduce the issue in my test environment. Steps to reproduce:

  1. Go to Kubernetes Engine > Workloads > Deploy to create a deployment.
  2. Enter your application name, namespace, and labels.
  3. Select the cluster or create a new one.

You can view the YAML file there; here is the example:

---
apiVersion: "apps/v1"
kind: "Deployment"
metadata:
  name: "nginx-1"
  namespace: "default"
  labels:
    app: "nginx-1"
spec:
  replicas: 3
  selector:
    matchLabels:
      app: "nginx-1"
  template:
    metadata:
      labels:
        app: "nginx-1"
    spec:
      containers:
      - name: "nginx"
        image: "nginx:latest"
---
apiVersion: "autoscaling/v2beta1"
kind: "HorizontalPodAutoscaler"
metadata:
  name: "nginx-1-hpa"
  namespace: "default"
  labels:
    app: "nginx-1"
spec:
  scaleTargetRef:
    kind: "Deployment"
    name: "nginx-1"
    apiVersion: "apps/v1"
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: "Resource"
    resource:
      name: "cpu"
      targetAverageUtilization: 80

Once deployed, if you go to Kubernetes Engine > Workloads > nginx-1 (click on it):

a.) You will get the deployment details (Overview, Details, Revision history, Events, YAML)
b.) Click on YAML and copy the content from the YAML tab
c.) Create a new YAML file, paste the content, and save the file
d.) Now if you run the command $ kubectl apply -f newyamlfile.yaml, it will show the following error:

Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
Error from server (Conflict): error when applying patch:
{"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{\"deployment.kubernetes.io/revision\":\"1\"},\"creationTimestamp\":\"2019-09-17T21:34:39Z\",\"generation\":1,\"labels\":{\"app\":\"nginx-1\"},\"name\":\"nginx-1\",\"namespace\":\"default\",\"resourceVersion\":\"218884\",\"selfLink\":\"/apis/apps/v1/namespaces/default/deployments/nginx-1\",\"uid\":\"f41c5b6f-d992-11e9-9adc-42010a80023b\"},\"spec\":{\"progressDeadlineSeconds\":600,\"replicas\":3,\"revisionHistoryLimit\":10,\"selector\":{\"matchLabels\":{\"app\":\"nginx-1\"}},\"strategy\":{\"rollingUpdate\":{\"maxSurge\":\"25%\",\"maxUnavailable\":\"25%\"},\"type\":\"RollingUpdate\"},\"template\":{\"metadata\":{\"creationTimestamp\":null,\"labels\":{\"app\":\"nginx-1\"}},\"spec\":{\"containers\":[{\"image\":\"nginx:latest\",\"imagePullPolicy\":\"Always\",\"name\":\"nginx\",\"resources\":{},\"terminationMessagePath\":\"/dev/termination-log\",\"terminationMessagePolicy\":\"File\"}],\"dnsPolicy\":\"ClusterFirst\",\"restartPolicy\":\"Always\",\"schedulerName\":\"default-scheduler\",\"securityContext\":{},\"terminationGracePeriodSeconds\":30}}},\"status\":{\"availableReplicas\":3,\"conditions\":[{\"lastTransitionTime\":\"2019-09-17T21:34:47Z\",\"lastUpdateTime\":\"2019-09-17T21:34:47Z\",\"message\":\"Deployment has minimum availability.\",\"reason\":\"MinimumReplicasAvailable\",\"status\":\"True\",\"type\":\"Available\"},{\"lastTransitionTime\":\"2019-09-17T21:34:39Z\",\"lastUpdateTime\":\"2019-09-17T21:34:47Z\",\"message\":\"ReplicaSet \\\"nginx-1-7b4bb7fbf8\\\" has successfully 
progressed.\",\"reason\":\"NewReplicaSetAvailable\",\"status\":\"True\",\"type\":\"Progressing\"}],\"observedGeneration\":1,\"readyReplicas\":3,\"replicas\":3,\"updatedReplicas\":3}}\n"},"generation":1,"resourceVersion":"218884"},"spec":{"replicas":3},"status":{"availableReplicas":3,"observedGeneration":1,"readyReplicas":3,"replicas":3,"updatedReplicas":3}}
to:
Resource: "apps/v1, Resource=deployments", GroupVersionKind: "apps/v1, Kind=Deployment"
Name: "nginx-1", Namespace: "default"
Object: &{map["apiVersion":"apps/v1" "metadata":map["name":"nginx-1" "namespace":"default" "selfLink":"/apis/apps/v1/namespaces/default/deployments/nginx-1" "uid":"f41c5b6f-d992-11e9-9adc-42010a80023b" "generation":'\x02' "labels":map["app":"nginx-1"] "annotations":map["deployment.kubernetes.io/revision":"1"] "resourceVersion":"219951" "creationTimestamp":"2019-09-17T21:34:39Z"] "spec":map["replicas":'\x01' "selector":map["matchLabels":map["app":"nginx-1"]] "template":map["metadata":map["labels":map["app":"nginx-1"] "creationTimestamp":<nil>] "spec":map["containers":[map["imagePullPolicy":"Always" "name":"nginx" "image":"nginx:latest" "resources":map[] "terminationMessagePath":"/dev/termination-log" "terminationMessagePolicy":"File"]] "restartPolicy":"Always" "terminationGracePeriodSeconds":'\x1e' "dnsPolicy":"ClusterFirst" "securityContext":map[] "schedulerName":"default-scheduler"]] "strategy":map["type":"RollingUpdate" "rollingUpdate":map["maxUnavailable":"25%" "maxSurge":"25%"]] "revisionHistoryLimit":'\n' "progressDeadlineSeconds":'\u0258'] "status":map["observedGeneration":'\x02' "replicas":'\x01' "updatedReplicas":'\x01' "readyReplicas":'\x01' "availableReplicas":'\x01' "conditions":[map["message":"Deployment has minimum availability." "type":"Available" "status":"True" "lastUpdateTime":"2019-09-17T21:34:47Z" "lastTransitionTime":"2019-09-17T21:34:47Z" "reason":"MinimumReplicasAvailable"] map["lastTransitionTime":"2019-09-17T21:34:39Z" "reason":"NewReplicaSetAvailable" "message":"ReplicaSet \"nginx-1-7b4bb7fbf8\" has successfully progressed." "type":"Progressing" "status":"True" "lastUpdateTime":"2019-09-17T21:34:47Z"]]] "kind":"Deployment"]}
for: "test.yaml": Operation cannot be fulfilled on deployments.apps "nginx-1": the object has been modified; please apply your changes to the latest version and try again

To solve this issue, you need to find the exact yaml file, edit it as per your requirement, and then you can run $ kubectl apply -f nginx-1.yaml
Hope this information helps you.

1
I ran into this error while trying to apply a new Kubernetes secret to an old Kubernetes cluster.
When I applied the new secret, I got the warning below:
Warning: resource secrets/myapp-tls is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically. secret/myapp-tls configured
And when I checked the Kubernetes cluster, I saw the following error:
Failed to update endpoint: Operation cannot be fulfilled because the object has been modified; please apply your changes to the latest version and try again
Here's how I solved it:
Apparently, the issue was that the secret file contained fields such as creationTimestamp (as well as resourceVersion, selfLink, and uid) that do not belong in a declarative configuration file, as stated in Roman's answer above.
For a quick fix, I simply deleted the existing/old Kubernetes secret and re-applied the new one. This time it worked fine.
