Kafka inaccessible once inside Kubernetes/Minikube

The problem is due to a bug in recent versions of minikube (see https://github.com/kubernetes/minikube/issues/1690).

The solution is simply:

minikube ssh
sudo ip link set docker0 promisc on
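To confirm the flag actually took effect, you can inspect the interface from inside the minikube VM (a quick sketch; `docker0` only exists inside the VM, so this has to run over `minikube ssh`):

```shell
# Inside the minikube VM: the flags list in the first line of output
# should now include PROMISC, e.g. <...,PROMISC,UP,...>
minikube ssh
ip link show docker0
```

Note that the setting does not survive a VM restart, so it may need to be reapplied after `minikube stop`/`minikube start`.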


I know that this is an old post, but I faced a similar "Leader not available" issue with an almost identical Kafka + ZooKeeper deployment and service within minikube. The crux of the issue is that ZooKeeper is not able to talk to Kafka within minikube.

The easiest fix here would be to revert to an earlier compatible version of minikube: v0.17.1 (which uses kubernetes v1.5.3). If that is not an option, the following workarounds worked for me on minikube versions v0.19.0, v0.21.0 and v0.23.0:

  1. Force minikube to start with a compatible kubernetes server version. Version v1.5.3 worked for me; I was hitting this issue on minikube v0.21.0 and v0.23.0 (both of which use kubernetes v1.7.0). To start minikube with a specific kubernetes version, use the command: minikube start --kubernetes-version v1.5.3
  2. Secondly, I was using KAFKA_ZOOKEEPER_CONNECT: zookeeper-service:2181 like you. While this setting worked fine when minikube ran kubernetes client and server v1.5.3 (i.e. minikube v0.17.1), it failed against kubernetes server v1.7.0 (i.e. minikube v0.21.0 and v0.23.0). The workaround was to expose the zookeeper service using type NodePort like so:

    apiVersion: v1
    kind: Service
    metadata:
      labels:
        app: zookeeper-service
      name: zookeeper-service
    spec:
      type: NodePort
      ports:
      - name: zookeeper-port
        port: 2181
        nodePort: 30181
        targetPort: 2181
      selector:
        app: zookeeper

and updating the kafka-deployment.yml like so:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: kafka
  name: kafka
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: kafka
    spec:
      containers:
      - env:
        - name: KAFKA_ADVERTISED_HOST_NAME
          value: "192.168.99.100"
        - name: KAFKA_ADVERTISED_PORT
          value: "30123"
        - name: KAFKA_BROKER_ID
          value: "1"
        - name: KAFKA_ZOOKEEPER_CONNECT
          value: 192.168.99.100:30181
        - name: KAFKA_CREATE_TOPICS
          value: "demo:1:1"
        image: wurstmeister/kafka
        imagePullPolicy: Always
        name: kafka
        ports:
        - containerPort: 9092
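The hardcoded 192.168.99.100 above is the default minikube VM IP. A small shell sketch (the variable names are my own; in a real setup MINIKUBE_IP would come from `minikube ip` rather than being stubbed) shows how the advertised host and the ZooKeeper connect string relate to the NodePorts chosen above:

```shell
#!/usr/bin/env bash
# Sketch: derive the Kafka env values from the minikube IP instead of
# hardcoding them in the manifest. MINIKUBE_IP is stubbed with the
# default minikube address here; normally use MINIKUBE_IP="$(minikube ip)".
MINIKUBE_IP="192.168.99.100"
ZK_NODEPORT=30181      # must match nodePort in the zookeeper service above
KAFKA_NODEPORT=30123   # must match KAFKA_ADVERTISED_PORT

KAFKA_ADVERTISED_HOST_NAME="$MINIKUBE_IP"
KAFKA_ZOOKEEPER_CONNECT="$MINIKUBE_IP:$ZK_NODEPORT"
KAFKA_BOOTSTRAP="$MINIKUBE_IP:$KAFKA_NODEPORT"

echo "KAFKA_ZOOKEEPER_CONNECT=$KAFKA_ZOOKEEPER_CONNECT"
echo "bootstrap broker for kafkacat: $KAFKA_BOOTSTRAP"
```

Keeping the NodePort values in one place like this makes it harder for the service manifest and the Kafka environment to drift out of sync.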

echo "Am I receiving this message?" | kafkacat -P -b 192.168.99.100:30123 -t demo

kafkacat -C -b 192.168.99.100:30123 -t demo

% Reached end of topic demo [0] at offset 0

Am I receiving this message?
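If the produce/consume round trip still fails, it can help to list the cluster metadata and check which listener the broker is advertising back to clients (a sketch using kafkacat's metadata listing mode):

```shell
# List broker/topic metadata via the NodePort. The broker address in the
# output should be the advertised host/port (192.168.99.100:30123), not an
# internal cluster address -- otherwise clients will connect for the
# initial bootstrap but fail on all subsequent requests.
kafkacat -L -b 192.168.99.100:30123
```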

Unfortunately, the latest minikube version v0.25.0 (which uses kubernetes v1.9.0) does not support downgrading the kubernetes version, so this workaround does not apply if you are on the latest minikube.

If anyone else finds a better solution on this issue, please do update this thread!

Thanks.