
Horizontal Pod Autoscaler replicas based on the number of nodes in the cluster


Have you tried using a nodeSelector spec in the DaemonSet YAML? If you have a nodeSelector set in the YAML and, just before draining, you remove the matching label from the node, the DaemonSet should scale down gracefully. The same applies in reverse: when you add a new node to the cluster, label it with the custom value and the DaemonSet will scale up.

This works for me, so you can try it and confirm whether it works with kops.

First: label all your nodes with a custom label that you will always have in your cluster.

Example:

kubectl label nodes k8s-master-1 mylabel=allow_demon_set
kubectl label nodes k8s-node-1 mylabel=allow_demon_set
kubectl label nodes k8s-node-2 mylabel=allow_demon_set
kubectl label nodes k8s-node-3 mylabel=allow_demon_set
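
If you would rather not label each node one by one, kubectl can also apply the label to every node at once. A minimal sketch, assuming you really do want the label on all current nodes (--overwrite just makes the command safe to re-run):

kubectl label nodes --all mylabel=allow_demon_set --overwrite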

Then add a nodeSelector to your DaemonSet YAML.

Example.yaml is used as below; note the added nodeSelector field:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-elasticsearch
  labels:
    k8s-app: fluentd-logging
spec:
  selector:
    matchLabels:
      name: fluentd-elasticsearch
  template:
    metadata:
      labels:
        name: fluentd-elasticsearch
    spec:
      nodeSelector:
        mylabel: allow_demon_set
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: fluentd-elasticsearch
        image: quay.io/fluentd_elasticsearch/fluentd:v2.5.2
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers

The nodes are now labeled as below:

$ kubectl get nodes --show-labels
NAME           STATUS   ROLES    AGE   VERSION   LABELS
k8s-master-1   Ready    master   9d    v1.17.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-master-1,kubernetes.io/os=linux,mylabel=allow_demon_set,node-role.kubernetes.io/master=
k8s-node-1     Ready    <none>   9d    v1.17.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node-1,kubernetes.io/os=linux,mylabel=allow_demon_set
k8s-node-2     Ready    <none>   9d    v1.17.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node-2,kubernetes.io/os=linux,mylabel=allow_demon_set
k8s-node-3     Ready    <none>   9d    v1.17.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node-3,kubernetes.io/os=linux,mylabel=allow_demon_set
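
As a quick sanity check (not strictly required), you can list only the nodes that match the selector; these are exactly the nodes the DaemonSet will schedule onto:

$ kubectl get nodes -l mylabel=allow_demon_set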

Once you have the correct YAML, create the DaemonSet from it:

$ kubectl create -f Example.yaml
$ kubectl get all -o wide
NAME                              READY   STATUS    RESTARTS   AGE   IP            NODE           NOMINATED NODE   READINESS GATES
pod/fluentd-elasticsearch-jrgl6   1/1     Running   0          20s   10.244.3.19   k8s-node-3     <none>           <none>
pod/fluentd-elasticsearch-rgcm2   1/1     Running   0          20s   10.244.0.6    k8s-master-1   <none>           <none>
pod/fluentd-elasticsearch-wccr9   1/1     Running   0          20s   10.244.1.14   k8s-node-1     <none>           <none>
pod/fluentd-elasticsearch-wxq5v   1/1     Running   0          20s   10.244.2.33   k8s-node-2     <none>           <none>

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE   SELECTOR
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   9d    <none>

NAME                                   DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR             AGE   CONTAINERS              IMAGES                                         SELECTOR
daemonset.apps/fluentd-elasticsearch   4         4         4       4            4           mylabel=allow_demon_set   20s   fluentd-elasticsearch   quay.io/fluentd_elasticsearch/fluentd:v2.5.2   name=fluentd-elasticsearch

Then, before draining a node, we can simply remove the custom label from that node; the pod should scale down gracefully, and then we drain the node.

$ kubectl label nodes k8s-node-3 mylabel-

Check the DaemonSet; it should have scaled down:

ubuntu@k8s-kube-client:~$ kubectl get all -o wide
NAME                              READY   STATUS        RESTARTS   AGE     IP            NODE           NOMINATED NODE   READINESS GATES
pod/fluentd-elasticsearch-jrgl6   0/1     Terminating   0          2m36s   10.244.3.19   k8s-node-3     <none>           <none>
pod/fluentd-elasticsearch-rgcm2   1/1     Running       0          2m36s   10.244.0.6    k8s-master-1   <none>           <none>
pod/fluentd-elasticsearch-wccr9   1/1     Running       0          2m36s   10.244.1.14   k8s-node-1     <none>           <none>
pod/fluentd-elasticsearch-wxq5v   1/1     Running       0          2m36s   10.244.2.33   k8s-node-2     <none>           <none>

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE   SELECTOR
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   9d    <none>

NAME                                   DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR             AGE     CONTAINERS              IMAGES                                         SELECTOR
daemonset.apps/fluentd-elasticsearch   3         3         3       3            3           mylabel=allow_demon_set   2m36s   fluentd-elasticsearch   quay.io/fluentd_elasticsearch/fluentd:v2.5.2   name=fluentd-elasticsearch
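
With the DaemonSet pod terminating on k8s-node-3, the node can now be drained. A hedged sketch of the drain step (flag names vary slightly between kubectl versions, e.g. older releases use --delete-local-data instead of --delete-emptydir-data):

$ kubectl drain k8s-node-3 --ignore-daemonsets --delete-emptydir-data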

When a new node is added to the cluster, add the same custom label to it again and the DaemonSet will scale up:

$ kubectl label nodes k8s-node-3 mylabel=allow_demon_set

ubuntu@k8s-kube-client:~$ kubectl get all -o wide
NAME                              READY   STATUS    RESTARTS   AGE     IP            NODE           NOMINATED NODE   READINESS GATES
pod/fluentd-elasticsearch-22rsj   1/1     Running   0          2s      10.244.3.20   k8s-node-3     <none>           <none>
pod/fluentd-elasticsearch-rgcm2   1/1     Running   0          5m28s   10.244.0.6    k8s-master-1   <none>           <none>
pod/fluentd-elasticsearch-wccr9   1/1     Running   0          5m28s   10.244.1.14   k8s-node-1     <none>           <none>
pod/fluentd-elasticsearch-wxq5v   1/1     Running   0          5m28s   10.244.2.33   k8s-node-2     <none>           <none>

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE   SELECTOR
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   9d    <none>

NAME                                   DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR             AGE     CONTAINERS              IMAGES                                         SELECTOR
daemonset.apps/fluentd-elasticsearch   4         4         4       4            4           mylabel=allow_demon_set   5m28s   fluentd-elasticsearch   quay.io/fluentd_elasticsearch/fluentd:v2.5.2   name=fluentd-elasticsearch
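
Since you mention kops: instead of labeling new nodes by hand, you could let kops apply the label automatically to every node it brings up by setting nodeLabels on the instance group. A minimal sketch, assuming an instance group named nodes (adjust the name and label to your cluster), edited via kops edit ig nodes and then applied with kops update cluster --yes:

spec:
  nodeLabels:
    mylabel: allow_demon_set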

Kindly confirm whether this is what you want to do and whether it works with kops.