IP addressing of pods in Kubernetes


A pod is part of a cluster (a group of nodes), and the Kubernetes cluster networking documentation states:

In reality, Kubernetes assigns IP addresses at the Pod scope: containers within a Pod share their network namespace, including their IP address.

This means that containers within a Pod can all reach each other’s ports on localhost.
This does imply that containers within a Pod must coordinate port usage, but this is no different than processes in a VM.
This is called the “IP-per-pod” model.
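As a sketch of the IP-per-pod model, here is a hypothetical two-container Pod manifest (the names, images, and command are illustrative, not from the source):

```yaml
# Hypothetical two-container Pod: both containers share one network
# namespace, so nginx (port 80) is reachable from the sidecar on localhost.
apiVersion: v1
kind: Pod
metadata:
  name: shared-netns-demo
spec:
  containers:
  - name: web
    image: nginx
    ports:
    - containerPort: 80
  - name: sidecar
    image: busybox
    # The sidecar reaches the web container without knowing the Pod IP:
    command: ["sh", "-c", "while true; do wget -qO- http://localhost:80 >/dev/null; sleep 5; done"]
```

Both containers also see the same Pod IP, which is why they must coordinate port usage, just like processes on a VM.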

The constraints are:

  • all containers can communicate with all other containers without NAT
  • all nodes can communicate with all containers (and vice-versa) without NAT
  • the IP that a container sees itself as is the same IP that others see it as

For more, see "Networking with Kubernetes" by Alok Kumar Singh:

https://cdn-images-1.medium.com/max/1000/1*lAfpMbHRf266utcd4xmLjQ.gif

Here:

We have a machine, called a node in Kubernetes.
It has the IP address 172.31.102.105, which belongs to a subnet with CIDR 172.31.102.0/24.

(CIDR: Classless Inter-Domain Routing, a method for allocating IP addresses and IP routing)

The node has a network interface eth0 attached, which belongs to the node's root network namespace.
To isolate pods, each pod is created in its own network namespace: these are pod1 n/w ns and pod2 n/w ns in the diagram.
The pods are assigned the IP addresses 100.96.243.7 and 100.96.243.8 from the CIDR range 100.96.0.0/11.
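These membership claims can be checked with Python's standard `ipaddress` module, using the addresses from the diagram:

```python
import ipaddress

# Node subnet from the diagram: 172.31.102.0/24
node_subnet = ipaddress.ip_network("172.31.102.0/24")
print(ipaddress.ip_address("172.31.102.105") in node_subnet)  # True

# Pod CIDR from the diagram: 100.96.0.0/11
pod_cidr = ipaddress.ip_network("100.96.0.0/11")
for pod_ip in ("100.96.243.7", "100.96.243.8"):
    print(ipaddress.ip_address(pod_ip) in pod_cidr)  # True, True

# A /11 leaves 32 - 11 = 21 host bits, i.e. 2**21 addresses for pods
print(pod_cidr.num_addresses)  # 2097152
```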

For more on this, see "Kubernetes Networking" from CloudNativelabs:

Kubernetes does not set up the network itself; it offloads the job to CNI (Container Network Interface) plug-ins. Please refer to the CNI spec for further details.

Below are possible network implementation options through CNI plugins which permit pod-to-pod communication while honoring the Kubernetes requirements:

  • layer 2 (switching) solution
  • layer 3 (routing) solution
  • overlay solutions

layer 2 (switching)

https://cloudnativelabs.github.io/img/l2-network.jpg

You can see the pods' IPs assigned from a container subnet address range.

layer 3 (routing)

https://cloudnativelabs.github.io/img/l3-gateway-routing.jpg

This is about populating the default gateway router with routes for the pod subnets, as shown in the diagram.
Routes to 10.1.1.0/24 and 10.1.2.0/24 are configured to be through node1 and node2 respectively.
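A minimal sketch of what the gateway does here: longest-prefix matching over a route table. The route table below mirrors the diagram's subnets, but the structure and function names are illustrative:

```python
import ipaddress

# Hypothetical route table mirroring the diagram: each node's pod subnet
# is reachable via that node (next hops named after the diagram's nodes).
routes = {
    ipaddress.ip_network("10.1.1.0/24"): "node1",
    ipaddress.ip_network("10.1.2.0/24"): "node2",
}

def next_hop(dest: str) -> str:
    """Pick the most specific route containing dest (longest-prefix match)."""
    addr = ipaddress.ip_address(dest)
    matches = [net for net in routes if addr in net]
    if not matches:
        return "default gateway"
    return routes[max(matches, key=lambda net: net.prefixlen)]

print(next_hop("10.1.1.42"))  # node1
print(next_hop("10.1.2.7"))   # node2
```

So a packet for a pod in 10.1.2.0/24 is forwarded to node2, which then delivers it into the right pod network namespace.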

overlay solutions

Generally not used.

Note: See also (Oct. 2018): "Google Kubernetes Engine networking".


Kubernetes creates a network within your network for the containers. In GKE, for example, it is a /14 by default, but a user can override it with a range between /11 and /19.
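For a sense of scale, Python's `ipaddress` module shows how many addresses each of those prefix lengths provides (the 10.0.0.0 base address is just a placeholder):

```python
import ipaddress

# How many pod addresses each prefix length yields (base address is arbitrary)
for prefix in (11, 14, 19):
    net = ipaddress.ip_network(f"10.0.0.0/{prefix}")
    print(f"/{prefix}: {net.num_addresses} addresses")
# /11: 2097152 addresses
# /14: 262144 addresses
# /19: 8192 addresses
```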

When Kubernetes creates a pod, it assigns an IP address from this range. As a result, you can't have another VM in your network, outside your cluster, with the same IP address as a pod.

Why? Imagine you have a VPN tunnel that needs to deliver a packet to an address that both the pod and the VM are using. Which one should it deliver to?

So, to answer your question: no, it is not a virtual IP; it is a real IP address from your network.