Readiness probe failed: MongoDB shell version v4.0.10

Readiness probe failed: MongoDB shell version v4.0.10


I was observing the same error. Once I increased the initialDelaySeconds value in the readiness probe spec of the deployment, the issue went away and the mongodb Pod was spawned without errors. It takes some time for Docker to pull the docker.io/bitnami/mongodb image and for mongod to start listening on its socket, so the readiness probe reports a failure while the container process is not yet ready to accept network connections.

    readinessProbe:
      exec:
        command:
        - mongo
        - --eval
        - db.adminCommand('ping')
      failureThreshold: 6
      initialDelaySeconds: 360
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 5
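If your cluster runs Kubernetes 1.16 or newer, a startupProbe may be a cleaner alternative to a large initialDelaySeconds, since the Pod becomes ready as soon as mongod responds instead of always waiting the full delay. A minimal sketch (the probe command mirrors the readiness probe above; failureThreshold × periodSeconds gives the same 6-minute startup budget):

    startupProbe:
      exec:
        command:
        - mongo
        - --eval
        - db.adminCommand('ping')
      # allow up to 36 * 10s = 360s for mongod to come up
      failureThreshold: 36
      periodSeconds: 10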

Meanwhile, you can check the mongodb Pod logs for inbound connection status or any relevant events:

kubectl logs <mongodb-Pod-name>
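You can also inspect the Pod's events directly; failed probe attempts show up there as Warning events (a general troubleshooting step, not specific to this chart):

kubectl describe pod <mongodb-Pod-name>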

I used the stable/mongodb Helm chart to deploy MongoDB and hit an error similar to yours:

helm install --name mongodb stable/mongodb

Warning Unhealthy 38m kubelet, gke-helm-test-default-pool-efed557c-52tf Readiness probe failed: MongoDB shell version v4.0.9 connecting to: mongodb://127.0.0.1:27017/?gssapiServiceName=mongodb 2019-06-10T12:46:46.054+0000 E QUERY [js] Error: couldn't connect to server 127.0.0.1:27017, connection attempt failed: SocketException: Error connecting to 127.0.0.1:27017 :: caused by :: Connection refused : connect@src/mongo/shell/mongo.js:343:13 @(connect):2:6 exception: connect failed

After I raised readinessProbe.initialDelaySeconds from 5 seconds to 360 seconds, the mongodb container came up without any failures:

helm install --name mongodb stable/mongodb --set readinessProbe.initialDelaySeconds=360
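Equivalently, you can keep the override in a values file instead of passing --set; a sketch, assuming the same readinessProbe.initialDelaySeconds key the chart already exposes:

    # values.yaml
    readinessProbe:
      initialDelaySeconds: 360

helm install --name mongodb stable/mongodb -f values.yaml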