How to force Pods/Deployments to Master nodes?

北恋 2020-12-28 09:05

I've set up a Kubernetes 1.5 cluster with the three master nodes tainted dedicated=master:NoSchedule. Now I want to deploy the Nginx Ingress Controller on the master nodes.

3 Answers
  •   tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
    
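    On its own this toleration only allows the pod onto a tainted master; it does not steer it there. Below is a minimal sketch of a complete pod that is both allowed and attracted to a master, assuming your masters carry the node-role.kubernetes.io/master label (kubeadm adds it by default on newer clusters; adjust the key if your nodes are labelled differently):

    apiVersion: v1
    kind: Pod
    metadata:
      name: on-master            # hypothetical name, for illustration only
    spec:
      # steer the pod to nodes carrying the (assumed) master label
      nodeSelector:
        node-role.kubernetes.io/master: ""
      # allow it to land despite the NoSchedule taint
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: nginx
        image: nginx
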
  • 2020-12-28 09:48

    A toleration does not mean that the pod must be scheduled on a node with such taints. It means that the pod tolerates such a taint. If you want your pod to be "attracted" to specific nodes, you will need to attach a label to your dedicated=master tainted nodes and set a nodeSelector in the pod that looks for that label.

    Attach the label to each of your special use nodes:

    kubectl label nodes name_of_your_node dedicated=master
    
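    To double-check that the label landed on the intended nodes (just a sanity check; the label key/value are the ones used above):

    # list only the nodes that now carry the label
    kubectl get nodes -l dedicated=master

    # or inspect all labels per node
    kubectl get nodes --show-labels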

    Kubernetes 1.6 and above syntax

    Add the nodeSelector to your pod:

    apiVersion: apps/v1beta1
    kind: Deployment
    metadata:
      name: nginx-ingress-controller
      namespace: kube-system
      labels:
        kubernetes.io/cluster-service: "true"
    spec:
      replicas: 3
      template:
        metadata:
          labels:
            k8s-app: nginx-ingress-lb
            name: nginx-ingress-lb
        spec:
          nodeSelector:
            dedicated: master
          tolerations:
          - key: dedicated
            operator: Equal
            value: master
            effect: NoSchedule
        […]
    
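    Once the Deployment is applied you can check where the replicas actually landed (the selector below reuses the k8s-app: nginx-ingress-lb label from the manifest above):

    # -o wide adds a NODE column showing which node each pod was scheduled on
    kubectl get pods -n kube-system -l k8s-app=nginx-ingress-lb -o wide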

    If you don't fancy nodeSelector you can add affinity: under spec: instead:

    affinity:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: dedicated
              operator: In
              values: ["master"]
    
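    If a hard requirement turns out to be too strict, the same attraction can be expressed as a soft preference instead (a sketch; the weight is arbitrary and the scheduler may still pick another node):

    affinity:
      nodeAffinity:
        preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 100
          preference:
            matchExpressions:
            - key: dedicated
              operator: In
              values: ["master"]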

    Pre-1.6 syntax

    Add the nodeSelector to your pod:

    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      name: nginx-ingress-controller
      namespace: kube-system
      labels:
        kubernetes.io/cluster-service: "true"
    spec:
      replicas: 3
      template:
        metadata:
          labels:
            k8s-app: nginx-ingress-lb
            name: nginx-ingress-lb
          annotations:
            scheduler.alpha.kubernetes.io/tolerations: |
              [
                {
                  "key": "dedicated",
                  "operator": "Equal",
                  "value": "master",
                  "effect": "NoSchedule"
                }
              ]
        spec:
          nodeSelector:
            dedicated: master
        […]
    

    If you don't fancy nodeSelector you can also add an annotation like this:

    scheduler.alpha.kubernetes.io/affinity: >
      {
        "nodeAffinity": {
          "requiredDuringSchedulingIgnoredDuringExecution": {
            "nodeSelectorTerms": [
              {
                "matchExpressions": [
                  {
                    "key": "dedicated",
                    "operator": "Equal",
                    "values": ["master"]
                  }
                ]
              }
            ]
          }
        }
      }
    

    Keep in mind that NoSchedule will not evict pods that are already scheduled.
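    If you also need to move pods that are already running on the masters, newer Kubernetes releases support the NoExecute taint effect, which evicts running pods that lack a matching toleration:

    # evicts already-running pods that do not tolerate the taint
    kubectl taint nodes name_of_your_node dedicated=master:NoExecute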

    The information above is from https://kubernetes.io/docs/user-guide/node-selection/ and there are more details there.

  • 2020-12-28 09:48

    You might want to dive into the Assigning Pods to Nodes documentation. Basically, you should add some labels to your nodes with something like this:

    kubectl label nodes <node-name> <label-key>=<label-value>
    

    and then reference that within your Pod specification like this:

    apiVersion: v1
    kind: Pod
    metadata:
      name: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
      nodeSelector:
        <label-key>: <label-value>
    

    But I'm not sure whether this works for non-critical add-ons when the specific node is tainted. More details can be found in the Kubernetes documentation.
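    For what it's worth, a nodeSelector alone will not get a pod past a NoSchedule taint; combining it with a matching toleration should work, roughly like this on Kubernetes 1.6+ (the label key/value are assumed to mirror the dedicated=master taint from the question; pre-1.6 clusters need the annotation form shown in the previous answer):

    apiVersion: v1
    kind: Pod
    metadata:
      name: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
      # steer the pod to the labelled masters
      nodeSelector:
        dedicated: master
      # tolerate the dedicated=master:NoSchedule taint
      tolerations:
      - key: dedicated
        operator: Equal
        value: master
        effect: NoSchedule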
