How do I apply a Kubernetes network policy to restrict access to a namespace from another namespace?


I am new to Kubernetes. I have a multi-tenant scenario.

1) I have three namespaces, as below:

 default,
 tenant1-namespace,
 tenant2-namespace

2) The default namespace has two database pods:

tenant1-db - listening on port 5432
tenant2-db - listening on port 5432

The tenant1-ns namespace has one application pod:

tenant1-app - listening on port 8085

The tenant2-ns namespace has one application pod:

tenant2-app - listening on port 8085

3) I applied 3 network policies in the default namespace:

a) Deny all access to the db pods from other namespaces

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all
  namespace: default
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
b) Allow only tenant1-app from tenant1-ns to access the tenant1-db pod
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-from-other-namespaces-except-specific-pod-1
  namespace: default
spec:
  podSelector:
    matchLabels:
      k8s-app: tenant1-db
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: tenant1-development
    - podSelector:
        matchLabels:
          app: tenant1-app

c) Allow only tenant2-app from tenant2-ns to access the tenant2-db pod

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-from-other-namespaces-except-specific-pod-2
  namespace: default
spec:
  podSelector:
    matchLabels:
      k8s-app: tenant2-db
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: tenant2-development
    - podSelector:
        matchLabels:
          app: tenant2-app

I want access to tenant1-db to be restricted to tenant1-app, and access to tenant2-db to be restricted to tenant2-app. But right now it looks like tenant2-app can access tenant1-db, which should not happen.

Below is the db-config.js file of tenant2-app.

module.exports = {
  HOST: "tenant1-db",
  USER: "postgres",
  PASSWORD: "postgres",
  DB: "tenant1db",
  dialect: "postgres",
  pool: {
    max: 5,
    min: 0,
    acquire: 30000,
    idle: 10000
  }
};

As you can see, I am pointing tenant2-app at tenant1-db, and I want tenant1-db to be restricted to tenant1-app only. What changes are needed in the network policies?

Update:

tenant1 app deployment and service YAMLs:
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2 
kind: Deployment 
metadata: 
  name: tenant1-app-deployment
  namespace: tenant1-namespace 
spec: 
  selector: 
    matchLabels: 
      app: tenant1-app 
  replicas: 1 # tells deployment to run 1 pods matching the template 
  template: 
    metadata: 
      labels: 
        app: tenant1-app 
    spec: 
      containers: 
      - name: tenant1-app-container 
        image: tenant1-app-dock-img:v1 
        ports: 
        - containerPort: 8085 
--- 
# https://kubernetes.io/docs/concepts/services-networking/service/#defining-a-service  
kind: Service 
apiVersion: v1 
metadata: 
  name: tenant1-app-service
  namespace: tenant1-namespace  
spec: 
  selector: 
    app: tenant1-app 
  ports: 
  - protocol: TCP 
    port: 8085 
    targetPort: 8085 
    nodePort: 31005 
  type: LoadBalancer 

tenant2 app deployment and service YAMLs:

apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2 
kind: Deployment 
metadata: 
  name: tenant2-app-deployment
  namespace: tenant2-namespace 
spec: 
  selector: 
    matchLabels: 
      app: tenant2-app 
  replicas: 1 # tells deployment to run 1 pods matching the template 
  template: 
    metadata: 
      labels: 
        app: tenant2-app 
    spec: 
      containers: 
      - name: tenant2-app-container 
        image: tenant2-app-dock-img:v1 
        ports: 
        - containerPort: 8085 
--- 
# https://kubernetes.io/docs/concepts/services-networking/service/#defining-a-service  
kind: Service 
apiVersion: v1 
metadata: 
  name: tenant2-app-service
  namespace: tenant2-namespace  
spec: 
  selector: 
    app: tenant2-app 
  ports: 
  - protocol: TCP 
    port: 8085 
    targetPort: 8085 
    nodePort: 31006 
  type: LoadBalancer 

Update 2:

db-pod1.yaml

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      annotations:
        deployment.kubernetes.io/revision: "1"
      creationTimestamp: null
      generation: 1
      labels:
        k8s-app: tenant1-db
      name: tenant1-db
    spec:
      progressDeadlineSeconds: 600
      replicas: 1
      revisionHistoryLimit: 10
      selector:
        matchLabels:
          k8s-app: tenant1-db
      strategy:
        rollingUpdate:
          maxSurge: 25%
          maxUnavailable: 25%
        type: RollingUpdate
      template:
        metadata:
          creationTimestamp: null
          labels:
            k8s-app: tenant1-db
          name: tenant1-db
        spec:
          volumes:
          - name: tenant1-pv-storage
            persistentVolumeClaim:
              claimName: tenant1-pv-claim
          containers:
          - env:
            - name: POSTGRES_USER
              value: postgres
            - name: POSTGRES_PASSWORD
              value: postgres
            - name: POSTGRES_DB
              value: tenant1db
            - name: PGDATA
              value: /var/lib/postgresql/data/pgdata
            image: postgres:11.5-alpine
            imagePullPolicy: IfNotPresent
            name: tenant1-db
            volumeMounts:
            - mountPath: "/var/lib/postgresql/data/pgdata"
              name: tenant1-pv-storage
            resources: {}
            securityContext:
              privileged: false
            terminationMessagePath: /dev/termination-log
            terminationMessagePolicy: File
          dnsPolicy: ClusterFirst
          restartPolicy: Always
          schedulerName: default-scheduler
          securityContext: {}
          terminationGracePeriodSeconds: 30
    status: {}

db-pod2.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: null
  generation: 1
  labels:
    k8s-app: tenant2-db
  name: tenant2-db
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: tenant2-db
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        k8s-app: tenant2-db
      name: tenant2-db
    spec:
      volumes:
      - name: tenant2-pv-storage
        persistentVolumeClaim:
          claimName: tenant2-pv-claim
      containers:
      - env:
        - name: POSTGRES_USER
          value: postgres
        - name: POSTGRES_PASSWORD
          value: postgres
        - name: POSTGRES_DB
          value: tenant2db
        - name: PGDATA
          value: /var/lib/postgresql/data/pgdata
        image: postgres:11.5-alpine
        imagePullPolicy: IfNotPresent
        name: tenant2-db
        volumeMounts:
        - mountPath: "/var/lib/postgresql/data/pgdata"
          name: tenant2-pv-storage
        resources: {}
        securityContext:
          privileged: false
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status: {}

Update 3:

kubectl get svc -n default
NAME         TYPE           CLUSTER-IP       EXTERNAL-IP      PORT(S)          AGE
kubernetes   ClusterIP      10.96.0.1        <none>           443/TCP          5d2h
nginx        ClusterIP      10.100.24.46     <none>           80/TCP           5d1h
tenant1-db   LoadBalancer   10.111.165.169   10.111.165.169   5432:30810/TCP   4d22h
tenant2-db   LoadBalancer   10.101.75.77     10.101.75.77     5432:30811/TCP   2d22h

kubectl get svc -n tenant1-namespace
NAME                  TYPE           CLUSTER-IP      EXTERNAL-IP                            PORT(S)          AGE
tenant1-app-service   LoadBalancer   10.111.200.49   10.111.200.49                          8085:31005/TCP   3d
tenant1-db            ExternalName   <none>          tenant1-db.default.svc.cluster.local   5432/TCP         2d23h

kubectl get svc -n tenant2-namespace
NAME                  TYPE           CLUSTER-IP     EXTERNAL-IP                            PORT(S)          AGE
tenant1-db            ExternalName   <none>         tenant1-db.default.svc.cluster.local   5432/TCP         2d23h
tenant2-app-service   LoadBalancer   10.99.139.18   10.99.139.18                           8085:31006/TCP   2d23h

So is the connection from tenant2-app to tenant1-db currently working? Please share the YAMLs of all 4 pods. - Arghya Sadhu
@Arghya I have updated the question. - Developer Desk
@DeveloperDesk which CNI are you using? Not all of them support NetworkPolicy. - Anton Kostenko
I am using the Cilium CNI for Kubernetes network policies, since I am trying this out on my local Ubuntu machine with Minikube. - Developer Desk
1 Answer


Quoting from the docs, let's understand the below policy which you have for tenant2.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-from-other-namespaces-except-specific-pod-2
  namespace: default
spec:
  podSelector:
    matchLabels:
      k8s-app: tenant2-db
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: development
    - podSelector:
        matchLabels:
          app: tenant2-app

The network policy you defined above contains two elements in the from array, which means it allows connections from pods in the local (default) namespace with the label app=tenant2-app, OR from any pod in any namespace with the label name=development.
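The distinction comes down to YAML list structure. A sketch of just the ingress stanza (note where the leading dash sits relative to podSelector):

```yaml
# Two list items under "from" — a peer matching EITHER selector is allowed (OR):
ingress:
- from:
  - namespaceSelector:
      matchLabels:
        name: tenant2-development
  - podSelector:          # separate list item
      matchLabels:
        app: tenant2-app

# One list item combining both selectors — only peers matching BOTH are allowed (AND):
ingress:
- from:
  - namespaceSelector:
      matchLabels:
        name: tenant2-development
    podSelector:          # same list item as the namespaceSelector
      matchLabels:
        app: tenant2-app
```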

If you combine them into a single rule as below, it should solve the issue.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-from-other-namespaces-except-specific-pod-2
  namespace: default
spec:
  podSelector:
    matchLabels:
      k8s-app: tenant2-db
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: tenant2-development
      podSelector:
        matchLabels:
          app: tenant2-app

The above network policy allows connections from pods with the label app=tenant2-app in namespaces with the label name=tenant2-development.

Please add a name=tenant2-development label to the tenant2-ns namespace.
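You can attach the label imperatively with `kubectl label namespace tenant2-namespace name=tenant2-development`, or declaratively. A minimal sketch, assuming your tenant2 namespace is literally named `tenant2-namespace` as in the deployment YAMLs above:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: tenant2-namespace
  labels:
    name: tenant2-development   # must match the policy's namespaceSelector
```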

Do the same exercise for tenant1, as below:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-from-other-namespaces-except-specific-pod-1
  namespace: default
spec:
  podSelector:
    matchLabels:
      k8s-app: tenant1-db
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: tenant1-development
      podSelector:
        matchLabels:
          app: tenant1-app

Add the label name=tenant1-development to the tenant1-ns namespace.
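As with tenant2, a declarative sketch, assuming the namespace is named `tenant1-namespace`:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: tenant1-namespace
  labels:
    name: tenant1-development   # must match the policy's namespaceSelector
```

You can confirm the labels took effect with `kubectl get namespaces --show-labels`.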

I tried the above, but tenant2-app can still access tenant1-db. - Developer Desk
Can you share the YAMLs of the two db pods in the question? - Arghya Sadhu
Looks okay... did you add the labels to the tenant namespaces? - Arghya Sadhu
Yes, I added the labels. - Developer Desk
I have created external services of type ExternalName to access the db pods from the default namespace. Could that be causing the problem? I have updated the question. - Developer Desk
