My kafka docker container cannot connect to my zookeeper docker container
There are multiple ways to do this, but before we look into them, there are two problems in your approach that you need to understand:
- The `zookeeper` host is not reachable when you use `docker run`, as each of the containers runs in its own isolated network.
- `kafka` may start and try to connect to `zookeeper` before `zookeeper` is ready.
Solving the network issue
There are several ways to fix this:
use `--net=host` to run both containers on the host network
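For example, a minimal sketch using the `confluent` images from the compose examples below (ports and env vars may need adjusting for your setup):

```shell
# Both containers share the host's network stack, so kafka can reach
# zookeeper at localhost:2181. Note: host networking does not work on
# Docker for Mac/Windows.
docker run -d --net=host -e ZOOKEEPER_CLIENT_PORT=2181 --name zookeeper confluent/zookeeper
docker run -d --net=host -e KAFKA_ZOOKEEPER_CONNECT=localhost:2181 --name kafka confluent/kafka
```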
use `docker network create <name>` and then use `--net=<name>` when launching both containers
Or you can run your `kafka` container on the `zookeeper` container's network: use `--net=container:zookeeper` when launching the `kafka` container. This makes sure the `zookeeper` host is accessible. This is not generally recommended unless you have a strong reason to do so, because as soon as the `zookeeper` container goes down, so does the network of your `kafka` container. But for the sake of understanding, I have included this option here.
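A minimal sketch of this option (image names taken from the compose examples below):

```shell
# kafka joins zookeeper's network namespace, so zookeeper is reachable
# on localhost from inside the kafka container; you cannot publish
# separate ports for kafka in this mode.
docker run -d -e ZOOKEEPER_CLIENT_PORT=2181 --name zookeeper confluent/zookeeper
docker run -d --net=container:zookeeper -e KAFKA_ZOOKEEPER_CONNECT=localhost:2181 --name kafka confluent/kafka
```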
Solving the startup race issue
Either you can keep a gap between starting `zookeeper` and `kafka`, to make sure that when `kafka` starts, `zookeeper` is up and running.
Another option is to use the `--restart=on-failure` flag with `docker run`. This makes sure the container is restarted on failure and tries to reconnect to `zookeeper`, and hopefully by that time `zookeeper` will be up.
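As a sketch, assuming a shared user-defined network `foo` as in the `docker network create` option above:

```shell
# If kafka exits because zookeeper was not ready yet, Docker restarts
# it automatically until the connection succeeds.
docker run -d --restart=on-failure --net=foo \
  -e KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181 \
  --name kafka confluent/kafka
```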
Instead of using `docker run`, I would always prefer `docker-compose` to run such linked containers. You can do that by creating a simple `docker-compose.yml` file and then running it with `docker-compose up`:
```yaml
version: "3.4"
services:
  zookeeper:
    image: confluent/zookeeper
    environment:
      - ZOOKEEPER_CLIENT_PORT=2181
  kafka:
    image: confluent/kafka
    environment:
      - KAFKA_ADVERTISED_HOST_NAME=kafka
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181
      - KAFKA_CREATE_TOPICS=testtopic:1:1
    depends_on:
      - zookeeper
    restart: on-failure
```
I'm running on a Mac, and this works fine. Since 'host' networking does not work on macOS, I just create a network called `kafka_net` and put the containers there.
```yaml
version: "3.4"
services:
  zookeeper:
    image: confluent/zookeeper
    environment:
      - ZOOKEEPER_CLIENT_PORT=2181
    networks:
      - kafka_net
  kafka:
    image: confluent/kafka
    environment:
      - KAFKA_ADVERTISED_HOST_NAME=kafka
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181
    depends_on:
      - zookeeper
    networks:
      - kafka_net
    restart: on-failure
networks:
  kafka_net:
    driver: "bridge"
```
To make sure everything is working:
Log into the zookeeper container, then:

```shell
zookeeper-shell localhost:2181
```

You should see something like 'Welcome to ZooKeeper!' after a big chunk of text.
Log into the kafka container, then:

```shell
kafka-topics --zookeeper zookeeper:2181 --list   # empty list
kafka-topics --zookeeper zookeeper:2181 --create --topic first_topic --replication-factor 1 --partitions 1
kafka-topics --zookeeper zookeeper:2181 --list   # you will see first_topic
kafka-console-producer --broker-list localhost:9092 --topic first_topic   # type some text, then Ctrl+C
kafka-console-consumer --bootstrap-server localhost:9092 --topic first_topic --from-beginning   # you will see the text you typed
```
If it still gives you problems, have a look at the official examples: https://github.com/confluentinc/cp-docker-images/tree/5.2.2-post/examples. If the issues persist, post them and I will give a hand.
Docker starts containers in an isolated network, called the default bridge, unless you specify a network explicitly.
You can succeed in different ways; here are the two easiest:
Put containers into same user-defined bridge network
```shell
# create net
docker network create foo
docker run --network=foo -e ZOOKEEPER_CLIENT_PORT=2181 --name zookeeper confluent/zookeeper
docker run --network=foo --name kafka -e KAFKA_ADVERTISED_HOST_NAME=kafka -e KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181 -e KAFKA_CREATE_TOPICS=testtopic:1:1 confluent/kafka
```
Expose ports and connect through localhost
```shell
docker run -p 2181:2181 -e ZOOKEEPER_CLIENT_PORT=2181 --name zookeeper confluent/zookeeper
docker run --name kafka -e KAFKA_ADVERTISED_HOST_NAME=kafka -e KAFKA_ZOOKEEPER_CONNECT=host.docker.internal:2181 -e KAFKA_CREATE_TOPICS=testtopic:1:1 confluent/kafka
```
Note: in the second approach you should use `host.docker.internal` as the host name and expose (publish) port `2181` for the first container to make it available on localhost.