netstat showing foreign ports as kubernetes:port. What does this mean?


That happens because of the way netstat renders output. It has nothing to do with actual Kubernetes.

I have Docker Desktop for Windows and it adds this to the hosts file:

# Added by Docker Desktop
192.168.43.196 host.docker.internal
192.168.43.196 gateway.docker.internal
# To allow the same kube context to work on the host and the container:
127.0.0.1 kubernetes.docker.internal
# End of section

There is a record that maps 127.0.0.1 to kubernetes.docker.internal. When netstat renders its output, it reverse-resolves each foreign address; that lookup consults the hosts file, finds this record, and returns kubernetes.docker.internal, of which netstat displays only the first label: kubernetes. That is what you see in the console. You can try changing the record to

127.0.0.1 tomato.docker.internal

With this, netstat will print:

  Proto  Local Address          Foreign Address        State
  TCP    127.0.0.1:6940         tomato:6941            ESTABLISHED
  TCP    127.0.0.1:6941         tomato:6940            ESTABLISHED
  TCP    127.0.0.1:8080         tomato:40347           ESTABLISHED
  TCP    127.0.0.1:8080         tomato:40348           ESTABLISHED
  TCP    127.0.0.1:8080         tomato:40349           ESTABLISHED

So what actually happens is that these are connections from localhost to localhost (netstat -b will show the applications that create them). They have nothing to do with Kubernetes.
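The name column comes from an ordinary reverse lookup, which consults the hosts file before DNS. A minimal sketch of how to confirm this on a Unix-like system (getent is assumed to be available; the -n flag of netstat behaves the same on Windows):

```shell
# getent performs the same resolver lookup netstat uses for display;
# on a default setup 127.0.0.1 maps back to "localhost":
getent hosts 127.0.0.1

# If the hosts file contained "127.0.0.1 kubernetes.docker.internal",
# the same lookup would return that name instead.

# To bypass name resolution and print raw numeric addresses, pass -n:
# netstat -an
```

Running netstat -an is the quickest way to confirm that the "kubernetes" peers are really just 127.0.0.1.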


It seems that Docker Desktop for Windows changed your hosts file. So, if you want to get rid of these names in the netstat output, just comment out the corresponding lines in the hosts file.

The hosts file on Windows 10 is located in C:\Windows\System32\drivers\etc, and the records may look something like 127.0.0.1 kubernetes.docker.internal. I am pretty sure commenting them out will disrupt your Docker service on Windows (though I am not an expert), so don't forget to uncomment these lines whenever you need to get the Docker service back.
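Commenting the section out rather than deleting it keeps it easy to restore later; assuming the default entries Docker Desktop writes (shown in the first answer), the edited block might look like:

```
# Added by Docker Desktop
# 192.168.43.196 host.docker.internal
# 192.168.43.196 gateway.docker.internal
# To allow the same kube context to work on the host and the container:
# 127.0.0.1 kubernetes.docker.internal
# End of section
```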


OK, it looks like your minikube instance is definitely deleted. Keep in mind that on Linux and other *nix-based systems it is totally normal for many processes to use network sockets to communicate with each other, e.g. you will see many established connections with both the local and foreign addresses set to localhost:

tcp        0      0 localhost:45402         localhost:2379          ESTABLISHED
tcp        0      0 localhost:45324         localhost:2379          ESTABLISHED
tcp        0      0 localhost:2379          localhost:45300         ESTABLISHED
tcp        0      0 localhost:45414         localhost:2379          ESTABLISHED
tcp        0      0 localhost:2379          localhost:45388         ESTABLISHED
tcp        0      0 localhost:40600         localhost:8443          ESTABLISHED

kubernetes in your case is nothing more than the hostname of one of your machines/VMs/instances. Maybe you named the machine on which you ran minikube kubernetes, and that's why this hostname currently appears in your active network connections. Basically, it has nothing to do with a running Kubernetes cluster.

To make it clearer, you may cat the content of your /etc/hosts file and look for the entry kubernetes. Then you can compare it with your network interface addresses (run ip -4 a). Most probably the kubernetes entry in /etc/hosts maps to one of them.
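A minimal sketch of that check, assuming a typical Linux box with iproute2 installed:

```shell
# Look for a "kubernetes" alias in the hosts file; the "|| echo" keeps
# the command from failing when no such entry exists:
grep -i kubernetes /etc/hosts || echo "no kubernetes entry in /etc/hosts"

# List local IPv4 addresses to compare against any entry found above:
ip -4 a
```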

Let me know if this clarifies your doubts.


EDIT:

I've reproduced it with Minikube on my Linux instance and noticed exactly the same behaviour, but it looks like the ESTABLISHED connections show up only after a successful minikube stop. After minikube delete they're gone. It looks like those connections indeed belong to various components of Kubernetes, but for some reason they are not terminated. Closing established network connections is basically the responsibility of the application that creates them, and for some reason minikube is not terminating them.

If you run:

sudo netstat -ntp ### important: it must be run as superuser

it additionally shows a PID/Program name column, in which you can see which program established each specific connection. You will see a lot of ESTABLISHED network connections belonging to etcd and kube-apiserver.
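On distributions where net-tools (and hence netstat) is not installed, ss from iproute2 gives an equivalent view; a sketch, assuming ss is available:

```shell
# -t TCP sockets, -n numeric addresses, -p show the owning process;
# run as root (e.g. via sudo) to also see processes owned by other users:
ss -tnp

# Use -r instead of -n to resolve addresses through the resolver (and
# hence /etc/hosts), reproducing the "kubernetes:port" style display:
# ss -trp
```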

First I tried rebooting the whole instance, which obviously closed all the connections. But then I verified a few times that a successfully performed minikube delete also closes all of them.

Additionally you may want to check available docker containers by running:

docker ps

or:

docker container ls

After stopping the minikube instance it still shows those containers, which looks like the reason why a lot of connections to certain Kubernetes components are still shown by the netstat command.

However, after minikube delete neither the containers nor the ESTABLISHED connections to Kubernetes cluster components are there anymore.