
Kubernetes CNI vs Kube-proxy


OVERLAY NETWORK

Kubernetes assumes that every pod has an IP address and that you can communicate with services inside that pod by using that IP address. When I say “overlay network” this is what I mean (“the system that lets you refer to a pod by its IP address”).

All other Kubernetes networking stuff relies on the overlay networking working correctly.

There are a lot of overlay network backends (Calico, Flannel, Weave) and the landscape is pretty confusing. But as far as I'm concerned, an overlay network has two responsibilities:

  1. Make sure your pods can send network requests outside your cluster
  2. Keep a stable mapping of nodes to subnets and keep every node in your cluster updated with that mapping. Do the right thing when nodes are added & removed.
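Responsibility 2 boils down to a table that every node keeps in sync: which node owns which pod subnet. Here is a minimal Python sketch of that mapping (the node names and subnets are made up for illustration):

```python
import ipaddress

# Hypothetical node -> pod-subnet mapping that an overlay backend keeps in
# sync across the cluster. Names and CIDRs are invented for this example.
node_subnets = {
    "node-a": ipaddress.ip_network("10.244.0.0/24"),
    "node-b": ipaddress.ip_network("10.244.1.0/24"),
    "node-c": ipaddress.ip_network("10.244.2.0/24"),
}

def node_for_pod(pod_ip: str) -> str:
    """Find which node hosts a pod, given only the pod's IP."""
    addr = ipaddress.ip_address(pod_ip)
    for node, subnet in node_subnets.items():
        if addr in subnet:
            return node
    raise LookupError(f"no node owns {pod_ip}")

print(node_for_pod("10.244.1.141"))  # -> node-b
```

When a node is added or removed, the backend's job is to update this table everywhere, so every node always knows where to forward traffic for any pod IP.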

KUBE-PROXY

To understand kube-proxy, here's how Kubernetes services work. A service is a collection of pods, each of which has its own IP address (like 10.1.0.3, 10.2.3.5, 10.3.5.6).

  1. Every Kubernetes service gets an IP address (like 10.23.1.2)
  2. kube-dns resolves Kubernetes service DNS names to IP addresses (so my-svc.my-namespace.svc.cluster.local might map to 10.23.1.2)
  3. kube-proxy sets up iptables rules in order to do random load balancing between those pods.

So when you make a request to my-svc.my-namespace.svc.cluster.local, it resolves to 10.23.1.2, and then iptables rules on your local host (generated by kube-proxy) redirect it to one of 10.1.0.3 or 10.2.3.5 or 10.3.5.6 at random.
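The "random" part works because each iptables DNAT rule matches with a probability chosen so that every backend is equally likely. A rough Python sketch of that selection logic (the service IP and backends are the example addresses from above, not real rules):

```python
import random

# Example service from the text: one Cluster IP fronting three pod IPs.
SERVICE_IP = "10.23.1.2"
BACKENDS = ["10.1.0.3", "10.2.3.5", "10.3.5.6"]

def pick_backend(backends):
    """Mimic how iptables 'statistic --mode random' rules are chained:
    rule i matches with probability 1/(n - i), so with n backends each
    one ends up selected with probability 1/n overall."""
    for i, backend in enumerate(backends):
        remaining = len(backends) - i
        if random.random() < 1.0 / remaining:
            return backend
    return backends[-1]  # last rule matches unconditionally

print(pick_backend(BACKENDS))  # one of the three pod IPs
```

This is only a model of the iptables behavior; the real rules live in the `KUBE-SERVICES` and `KUBE-SVC-*` chains that kube-proxy maintains.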

In short, the overlay network provides the underlying network that the various components of Kubernetes use to communicate, while kube-proxy generates the iptables magic that lets you connect to any pod (via a service), no matter which node that pod is on.

Parts of this answer were taken from this blog:

https://jvns.ca/blog/2017/10/10/operating-a-kubernetes-network/

Hope this gives you a brief idea of Kubernetes networking.


There are two kinds of IP in kubernetes: ClusterIP and Pod IP.

CNI

CNI cares about Pod IP.

The CNI plugin focuses on building up the overlay network, without which pods can't communicate with each other. Its task is to assign a Pod IP to each pod when it's scheduled, build a virtual device for that IP, and make the IP accessible from every node in the cluster.

In Calico, this is implemented with N host routes (one per cali veth device) and M routes on the tunl0 device (one per other node in the K8s cluster):

$ route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.130.29.1     0.0.0.0         UG    100    0        0 ens32
10.130.29.0     0.0.0.0         255.255.255.0   U     100    0        0 ens32
10.244.0.0      0.0.0.0         255.255.255.0   U     0      0        0 *
10.244.0.137    0.0.0.0         255.255.255.255 UH    0      0        0 calid3c6b0469a6
10.244.0.138    0.0.0.0         255.255.255.255 UH    0      0        0 calidbc2311f514
10.244.0.140    0.0.0.0         255.255.255.255 UH    0      0        0 califb4eac25ec6
10.244.1.0      10.130.29.81    255.255.255.0   UG    0      0        0 tunl0
10.244.2.0      10.130.29.82    255.255.255.0   UG    0      0        0 tunl0

In this case, 10.244.0.0/16 is the Pod IP CIDR, and 10.130.29.81 is a node in the cluster. You can imagine that a TCP request to 10.244.1.141 will be sent to 10.130.29.81 following the 7th route (10.244.1.0/24 via tunl0). And on 10.130.29.81, there will be a route rule like this:

10.244.1.141    0.0.0.0         255.255.255.255 UH    0      0        0 cali4eac25ec62b

This will finally send the request to the correct Pod.
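The kernel picks the route by longest-prefix match: the /32 host routes beat the /24 subnet routes, which beat the default route. A small Python sketch of that lookup against the table above:

```python
import ipaddress

# The Calico routing table from above, as (destination, gateway, iface).
# A gateway of None means the destination is directly reachable.
ROUTES = [
    ("0.0.0.0/0",       "10.130.29.1",  "ens32"),
    ("10.130.29.0/24",  None,           "ens32"),
    ("10.244.0.0/24",   None,           "*"),
    ("10.244.0.137/32", None,           "calid3c6b0469a6"),
    ("10.244.0.138/32", None,           "calidbc2311f514"),
    ("10.244.0.140/32", None,           "califb4eac25ec6"),
    ("10.244.1.0/24",   "10.130.29.81", "tunl0"),
    ("10.244.2.0/24",   "10.130.29.82", "tunl0"),
]

def lookup(dst: str):
    """Longest-prefix match, as the kernel's routing lookup does."""
    addr = ipaddress.ip_address(dst)
    best = None
    for dest, gw, iface in ROUTES:
        net = ipaddress.ip_network(dest)
        if addr in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, gw, iface)
    return best[1], best[2]

print(lookup("10.244.1.141"))  # -> ('10.130.29.81', 'tunl0')
print(lookup("10.244.0.137"))  # -> (None, 'calid3c6b0469a6')
```

So traffic for 10.244.1.141 is tunneled to node 10.130.29.81, and traffic for a local pod like 10.244.0.137 goes straight out its cali veth device.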

I'm not sure why a daemon is necessary; I guess the daemon is there to prevent the route rules it created from being deleted manually.

kube-proxy

kube-proxy's job is rather simple: it just redirects requests from a Cluster IP to a Pod IP.

kube-proxy has two modes, IPVS and iptables. If your kube-proxy is working in IPVS mode, you can see the redirect rules created by kube-proxy by running the following command on any node in the cluster:

$ ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.96.0.1:443 rr
  -> 10.130.29.80:6443            Masq    1      6          0
  -> 10.130.29.81:6443            Masq    1      1          0
  -> 10.130.29.82:6443            Masq    1      0          0
TCP  10.96.0.10:53 rr
  -> 10.244.0.137:53              Masq    1      0          0
  -> 10.244.0.138:53              Masq    1      0          0
...

In this case, you can see the default Cluster IP of CoreDNS, 10.96.0.10, and behind it are two real servers with Pod IPs: 10.244.0.137 and 10.244.0.138.
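The `rr` in the output is the round-robin scheduler: each new connection goes to the next real server in turn. A tiny Python sketch of that behavior, using the two CoreDNS pod IPs from the output:

```python
from itertools import cycle

# The two CoreDNS real servers behind Cluster IP 10.96.0.10 (from ipvsadm).
backends = cycle(["10.244.0.137", "10.244.0.138"])

# 'rr' scheduling: successive connections alternate between the backends.
picks = [next(backends) for _ in range(4)]
print(picks)
# -> ['10.244.0.137', '10.244.0.138', '10.244.0.137', '10.244.0.138']
```

IPVS supports other schedulers too (weighted round-robin, least-connections, and so on); `rr` is just what kube-proxy configured here.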

These rules are exactly what kube-proxy creates.

P.S. iptables mode is almost the same, but the iptables rules look ugly, so I won't paste them here. :p


My 2 cents, correct me if not accurate:

Kube-proxy handles K8s service-level network communication, and that network is built on top of a CNI plugin.

A CNI plugin implements the CNI (Container Network Interface) specification.

The CNI plugin sets up the overlay network that simplifies pod-to-pod communication.