
Launching multiple Kafka brokers fails


As per this: Launching multiple Kafka brokers fails, it's an issue with log.dirs in your server.properties: it cannot be the same directory for all your brokers, and it cannot be shared between them.
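To make the constraint concrete, here is a sketch of starting two brokers from the same installation using kafka-server-start.sh's --override flag, giving each one its own broker.id, log.dirs, and listener port (the ids, paths, and ports are illustrative):

~/kafka_2.11-2.1.0/bin$ ./kafka-server-start.sh ../config/server.properties \
    --override broker.id=0 --override log.dirs=/tmp/kafka-logs-0 \
    --override listeners=PLAINTEXT://:9092
~/kafka_2.11-2.1.0/bin$ ./kafka-server-start.sh ../config/server.properties \
    --override broker.id=1 --override log.dirs=/tmp/kafka-logs-1 \
    --override listeners=PLAINTEXT://:9093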

You can probably use the ${HOSTNAME##*-} bash parameter expansion in your container entrypoint script to modify your server.properties before startup, but the downside is that you will have to rebuild your Docker image.
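A minimal entrypoint sketch, assuming the image keeps its configuration at /opt/kafka/config/server.properties (the paths and data directory names are assumptions):

#!/bin/bash
# Hypothetical entrypoint: derive the broker ordinal from the pod
# hostname, e.g. "kafka-2" -> "2", via bash parameter expansion.
ORDINAL="${HOSTNAME##*-}"

# Rewrite the per-broker settings before starting the server.
sed -i "s/^broker\.id=.*/broker.id=${ORDINAL}/" /opt/kafka/config/server.properties
sed -i "s|^log\.dirs=.*|log.dirs=/var/lib/kafka/data-${ORDINAL}|" /opt/kafka/config/server.properties

exec /opt/kafka/bin/kafka-server-start.sh /opt/kafka/config/server.properties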

Another strategy using StatefulSets is described here: How to pass args to pods based on Ordinal Index in StatefulSets?, but you will also have to change how the Kafka entrypoint is called.
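Following that pattern, the StatefulSet's container command can compute the ordinal inline instead of baking it into the image. A sketch of such a command, again with assumed paths:

/bin/bash -c 'exec /opt/kafka/bin/kafka-server-start.sh /opt/kafka/config/server.properties \
    --override broker.id="${HOSTNAME##*-}" \
    --override log.dirs="/var/lib/kafka/data-${HOSTNAME##*-}"'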

You could also try using completely different volumes for each of your Kafka broker pods.
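With a StatefulSet, volumeClaimTemplates do this automatically: each pod gets its own PersistentVolumeClaim (kafka-data-kafka-0, kafka-data-kafka-1, and so on). A minimal sketch, with illustrative names and sizes:

volumeClaimTemplates:
- metadata:
    name: kafka-data
  spec:
    accessModes: ["ReadWriteOnce"]
    resources:
      requests:
        storage: 10Gi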


First, check the server configuration in the server.properties file:

~/kafka_2.11-2.1.0/bin$ egrep -v '^#|^$' ../config/server.properties
broker.id=0
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/tmp/kafka-logs
...

Here you can see the log.dirs attribute with the directory /tmp/kafka-logs as its value. Make sure that directory has the right permissions for the user you are using to start the Kafka process:

~/kafka_2.11-2.1.0/bin$ ls -lrtd /tmp/kafka-logs
drwxr-xr-x 2 kafkauser kafkauser 4096 mar  1 08:26 /tmp/kafka-logs
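If the ownership is wrong, reassign it to the user that runs the Kafka process (kafkauser here, matching the listing above):

~/kafka_2.11-2.1.0/bin$ sudo chown -R kafkauser:kafkauser /tmp/kafka-logs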

Remove all files under /tmp/kafka-logs:

~/kafka_2.11-2.1.0/bin$ rm -fr /tmp/kafka-logs/*

And finally, try again. This will probably solve your problem.
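For example, from the same directory:

~/kafka_2.11-2.1.0/bin$ ./kafka-server-start.sh ../config/server.properties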