Is it possible to move the running pods from ReplicationController to a Deployment?

Is it possible to move the running pods from ReplicationController to a Deployment?


As @matthew-l-daniel answered, the answer is yes. In fact, I am more than 80% certain about it, because I have tested it.

Now, what is the process we need to follow?

Let's say I have a ReplicationController.

apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    app: nginx
  template:
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80

Question: can we move these running Pods under a Deployment?

Let's follow these steps to see if we can.

Step 1: Delete this RC with --cascade=false. This will leave the Pods running, now unmanaged.
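
For example, assuming the RC from the manifest above (a sketch; on newer kubectl versions the flag is spelled --cascade=orphan):

# Delete only the ReplicationController object; its Pods are left running, orphaned
kubectl delete rc nginx --cascade=false

# The Pods should still be there, now with no owner
kubectl get pods -l app=nginx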

Step 2: Create a ReplicaSet first, with the same labels as the ReplicationController.

apiVersion: apps/v1beta2
kind: ReplicaSet
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      # same Pod spec (containers) as in the ReplicationController above

So now these Pods are under the ReplicaSet.
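
To confirm the adoption, a quick check (resource names assumed from the manifests above; replace <pod-name> with one of the Pods listed by kubectl get pods):

# The ReplicaSet should report the 3 existing Pods as ready without creating new ones
kubectl get rs nginx

# Each Pod should now name the ReplicaSet as its owner
kubectl get pod <pod-name> -o jsonpath='{.metadata.ownerReferences[0].kind}'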

Step 3: Now create a Deployment with the same labels.

apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      # same Pod spec (containers) as in the ReplicationController above

And the Deployment will find that a matching ReplicaSet already exists and adopt it; our job is done.
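
A sketch of how to verify that the Deployment adopted the existing ReplicaSet rather than creating a new one (names as above):

# Only the single pre-existing ReplicaSet should be listed
kubectl get rs -l app=nginx

# The Deployment should report 3 ready replicas, with the original Pods untouched
kubectl get deployment nginx
kubectl get pods -l app=nginx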

Now we can try increasing the replicas to see if it works.
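
For instance (a sketch, using the Deployment name from above):

# Scale up and watch the Deployment create the extra Pods
kubectl scale deployment nginx --replicas=5
kubectl get pods -l app=nginx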

And it works.

Which way it doesn't work

After deleting the ReplicationController, do not create the Deployment directly. This will not work, because the Deployment will find no ReplicaSet and will create a new one with an additional pod-template-hash label, which will not match your existing Pods.
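
You can observe the mismatch yourself; a sketch, assuming a Deployment was created directly after deleting the RC:

# Pods created by the Deployment's new ReplicaSet carry a pod-template-hash label;
# the orphaned Pods lack it, so they are never adopted
kubectl get pods --show-labels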


I'm about 80% certain the answer is yes, since they both use Pod selectors to determine whether new instances should be created. The key trick is to use --cascade=false (the default is true) with kubectl delete, whose help text even speaks to your very question:

--cascade=true: If true, cascade the deletion of the resources managed by this resource (e.g. Pods created by a ReplicationController). Default true.

If you delete the ReplicationController but not its subordinate Pods, they will continue to just hang out (although be careful: if a reboot or other hazard kills one or all of them, no one is there to rescue them). Creating the Deployment with the same selector criteria and a replicas count equal to the number of currently running Pods should cause a "no action" situation.

I regret that I don't have my cluster in front of me to test it, but I would think a small nginx RC with replicas=3 should be a simple enough test to prove that it behaves as you wish.
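
A sketch of that test sequence, using the nginx RC from the question (note: the answer above found that, in practice, an intermediate ReplicaSet is needed before the Deployment will adopt the orphaned Pods; the filenames here are hypothetical):

# Create the RC, orphan its Pods, then create a Deployment with the same
# selector and a replicas count equal to the number of running Pods
kubectl create -f nginx-rc.yaml          # hypothetical filename for the RC manifest
kubectl delete rc nginx --cascade=false
kubectl create -f nginx-deployment.yaml  # hypothetical filename for the Deployment manifest
kubectl get pods -l app=nginx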