Publishing a message to Kafka running inside docker


You will have to bind your Docker container's ports to the local machine. This can be done with docker run as follows:

docker run --rm -p 127.0.0.1:2181:2181 -p 127.0.0.1:9092:9092 -p 127.0.0.1:8081:8081 ....

Alternatively, you can bind the ports to all network interfaces (0.0.0.0):

docker run --rm -p 0.0.0.0:2181:2181 -p 0.0.0.0:9092:9092 -p 0.0.0.0:8081:8081 .....

If you want to make the Docker container reachable from other machines on your network, bind to the host's private IP:

docker run --rm -p <private-IP>:2181:2181 -p <private-IP>:9092:9092 -p <private-IP>:8081:8081 ....

Or, finally, you can skip containerising the network interface altogether by using host networking:

docker run --rm -p 2181:2181 -p 9092:9092 -p 8081:8081 --net host ....
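Once the broker's port is published to the host (any of the options above), a client on the host can reach it via the mapped address. As a quick check, here is a minimal sketch using the console producer shipped with the Kafka distribution, assuming the Kafka CLI tools are available on the host and a topic named test already exists:

kafka-console-producer.sh --broker-list localhost:9092 --topic test

If this still hangs or times out even though the port mapping is in place, the likely cause is the advertised hostname issue described in the next answer.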


While I am facing a similar problem myself, I can try to explain this behavior.

The Kafka producer looks up the partition leader in Zookeeper before publishing a record to the topic. Zookeeper holds the leader's host entry as registered by the Kafka broker, which is running inside a Docker container.

Because of this, the IP registered by the broker is the Docker-internal IP rather than the host IP, which of course is not resolvable from the client machine, so the producer times out.
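To see what the client actually receives, you can inspect the broker registration in Zookeeper. A minimal sketch, assuming the Kafka CLI tools are available on the host and the broker registered itself with id 0:

zookeeper-shell.sh localhost:2181 get /brokers/ids/0

The returned JSON contains the host and endpoints the broker advertised; with the setup described above it shows the Docker-internal address, which is exactly what the producer then fails to reach.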

A probable solution is to set advertised.host.name to the host IP of the Docker machine.
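As a sketch, this is the relevant line in the broker's server.properties (the property name applies to the older broker versions this answer refers to; <docker-host-ip> is a placeholder for your Docker host's IP):

advertised.host.name=<docker-host-ip>

Many Kafka Docker images also let you pass this as an environment variable instead of editing the file; for example, images such as wurstmeister/kafka typically map KAFKA_ADVERTISED_HOST_NAME to this property.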

However, this introduces another problem (as I found): broker metadata fetches from inside the container will now start failing, because the Zookeeper entry holds the host IP, which is not resolvable from inside the container. As a consequence, any consumer application will start getting LEADER_NOT_AVAILABLE warnings.

This is a deadlock situation, and the solution depends mainly on the host-resolution strategy employed. I would like to know how people suggest going about this.

Edit: we finally used host networking (--net=host) together with the node's static IP to get around the problem.
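A rough sketch of what that looks like, assuming an image that maps KAFKA_ADVERTISED_HOST_NAME to advertised.host.name (e.g. wurstmeister/kafka), with <node-static-ip> as a placeholder for the node's static IP; note that with --net=host the -p port mappings are no longer needed, since the broker binds directly to the host's network stack:

docker run --rm --net=host -e KAFKA_ADVERTISED_HOST_NAME=<node-static-ip> ....

This way clients on the network and the broker itself resolve the same routable address, which breaks the deadlock described above.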