Why don't two Kubernetes ReplicaSets with the same selector conflict with each other?


Labels are key/value pairs attached to objects such as Pods and Deployments. They are used to identify and group Kubernetes resources.

According to the official Kubernetes documentation:

Unlike names and UIDs, labels do not provide uniqueness. In general, we expect many objects to carry the same label(s).

Labels are not meant to be unique; they identify a group of objects that are related in some way, so that you can list or watch those objects together.

Let's take the example from your question: two ReplicaSets with 3 replicas each. All six pods carry the label app: nginx, and each set additionally carries either version: 1.7.9 or version: 1.7.1.
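For concreteness, the two ReplicaSets might look like the following sketch (the names and image tags are hypothetical; only the labels matter for this discussion):

```yaml
# Hypothetical manifest — first ReplicaSet, pods labelled app: nginx, version: 1.7.9
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: nginx-1-7-9           # hypothetical name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
      version: "1.7.9"
  template:
    metadata:
      labels:
        app: nginx
        version: "1.7.9"
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
---
# Second ReplicaSet — identical except for its name and version label
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: nginx-1-7-1           # hypothetical name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
      version: "1.7.1"
  template:
    metadata:
      labels:
        app: nginx
        version: "1.7.1"
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.1
```

Both pod templates share app: nginx, so label queries on that key alone will match pods from both sets.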

Now, if you want to list all the pods carrying the label app=nginx, you can run the following command:

kubectl get pods -l app=nginx

It will show all 6 pods, three from each ReplicaSet.

If you want only the pods that have app=nginx and a specific version of nginx, add the version label to the selector:

kubectl get pods -l app=nginx,version=1.7.1

Now it will show only the three pods that carry both labels.

For more information, read the official docs on labels here.


That's because the two ReplicaSets have different .metadata.name values, so each manages its own isolated set of pods. The same behavior applies to Deployments: as long as you give them different names, two Deployments will also spin up separate pods that carry the same labels.
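One detail worth knowing: the isolation is not enforced by the labels themselves but by ownership. When a ReplicaSet creates a pod, it records itself as the pod's controller in .metadata.ownerReferences, and controllers do not adopt pods that are already owned by another controller. A trimmed, hypothetical pod created by the first ReplicaSet might look like:

```yaml
# Hypothetical pod metadata (trimmed) — ownership ties the pod to one ReplicaSet
apiVersion: v1
kind: Pod
metadata:
  name: nginx-1-7-9-abc12     # hypothetical generated name
  labels:
    app: nginx
    version: "1.7.9"
  ownerReferences:
  - apiVersion: apps/v1
    kind: ReplicaSet
    name: nginx-1-7-9         # the owning ReplicaSet's .metadata.name
    controller: true          # marks this ReplicaSet as the managing controller
```

So even though both ReplicaSets' pods share the app: nginx label, each controller only counts and manages the pods it owns.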