Authentication mongo deployed on kubernetes


The reason the environment variables don't work is that the MONGO_INITDB environment variables are consumed by the docker-entrypoint.sh script inside the image ( https://github.com/docker-library/mongo/tree/master/4.0 ). However, when you define a 'command:' in your Kubernetes manifest you override that entrypoint, so the script never runs (see the notes at https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/ ).
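If all you need is to pass extra flags to mongod while keeping the MONGO_INITDB_* processing, one alternative (a sketch only, not what the full example below does) is to set 'args:' instead of 'command:'. 'args' overrides only the image's CMD and leaves the ENTRYPOINT (docker-entrypoint.sh) in place:

```yaml
# Sketch: 'args' replaces the image CMD but keeps the ENTRYPOINT,
# so docker-entrypoint.sh still runs and consumes MONGO_INITDB_*.
containers:
- name: mongo
  image: mongo:4.0
  args: ["--replSet", "rs0", "--bind_ip", "0.0.0.0"]
  env:
  - name: MONGO_INITDB_ROOT_USERNAME
    value: mongoadmin
  - name: MONGO_INITDB_ROOT_PASSWORD
    value: adminpassword
```

Note this alone doesn't handle the keyfile or the two-phase startup that replica-set authentication needs, which is why the full example overrides the entrypoint and creates the admin user itself.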

The YML below is adapted from a few of the examples I found online. Note these learning points:

  1. cvallance/mongo-k8s-sidecar looks for ANY mongo instance matching the POD labels REGARDLESS of namespace, so it will try to hook up with any old instance in the cluster. This caused me a few hours of head-scratching, as I had removed the environment= labels from the example because we use namespaces to segregate our environments. Silly and obvious in retrospect, but extremely confusing at the time: the mongo logs were throwing all sorts of authentication and service-down errors because of the cross-talk.

  2. I was new to ClusterRoleBindings, and it took me a while to realise they are cluster-level objects (despite kubectl accepting a namespace on them), which I know seems obvious. Mine were getting overwritten between namespaces, so make sure you create unique ClusterRoleBinding names per environment; otherwise a deployment in one namespace will clobber another, as the binding is overwritten if the names aren't unique within the cluster.

  3. MONGODB_DATABASE needs to be set to 'admin' for authentication to work.

  4. I was following this example to configure authentication, which depended on a sleep 5 in the hope that the daemon was up and running before attempting to create the adminUser. I found this wasn't long enough, so I initially increased it, since failure to create the adminUser obviously led to connection-refused issues. I later replaced the sleep with a while loop that pings mongod until it responds, which is more foolproof.

  5. If you run mongod in a container (e.g. lxc, cgroups, Docker, etc.) that does not have access to all of the RAM available in a system, you must set --wiredTigerCacheSizeGB to a value less than the amount of RAM available in the container. The exact amount depends on the other processes running in the container.

  6. You need at least 3 nodes in a Mongo cluster!
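The while loop from point 4 generalises to a simple retry pattern. Here's a minimal, runnable sketch (wait_for is a hypothetical helper name; inside the pod the command being retried would be mongo --eval "db.adminCommand('ping')", demonstrated here with true so the sketch runs anywhere):

```shell
# Retry a command until it succeeds, up to MAX_TRIES attempts.
wait_for() {
  max_tries=${MAX_TRIES:-30}
  i=0
  until "$@" >/dev/null 2>&1; do
    i=$((i + 1))
    if [ "$i" -ge "$max_tries" ]; then
      echo "gave up after $i attempts" >&2
      return 1
    fi
    sleep "${RETRY_DELAY:-1}"
  done
  echo "ready after $i retries"
}

# In the pod: wait_for mongo --eval "db.adminCommand('ping')"
# Here we use true so the sketch is runnable without a mongod:
wait_for true
```

The advantage over a fixed sleep is that it waits exactly as long as mongod needs, and fails loudly if the daemon never comes up.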
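On point 5, a rough way to pick --wiredTigerCacheSizeGB from the container's memory limit (this loosely mirrors mongod's own default of about half of available RAM; all names here are illustrative, and the hard-coded LIMIT_BYTES stands in for reading the cgroup memory limit inside the container):

```shell
# Stand-in for: cat /sys/fs/cgroup/memory/memory.limit_in_bytes (cgroup v1)
LIMIT_BYTES=1073741824   # e.g. a 1Gi container limit

# Work in hundredths of a GB to avoid floating point in POSIX sh.
LIMIT_GB_X100=$((LIMIT_BYTES * 100 / 1024 / 1024 / 1024))
CACHE_GB_X100=$((LIMIT_GB_X100 / 2))            # roughly half the limit
[ "$CACHE_GB_X100" -lt 25 ] && CACHE_GB_X100=25  # floor of 0.25 GB

printf 'mongod --wiredTigerCacheSizeGB %d.%02d ...\n' \
  $((CACHE_GB_X100 / 100)) $((CACHE_GB_X100 % 100))
```

With a 1Gi limit this yields 0.50, which is why the StatefulSet below passes --wiredTigerCacheSizeGB 0.5 against its 1Gi memory limit.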

The YML below should spin up and configure a mongo replica set in Kubernetes with persistent storage and authentication enabled. If you connect into the pod...

kubectl exec -ti mongo-db-0 --namespace somenamespace -- /bin/bash

The mongo shell is installed in the image, so you should be able to connect to the replica set with...

mongo "mongodb://mongoadmin:adminpassword@mongo-db/admin?replicaSet=rs0"

And see that you get either rs0:PRIMARY> or rs0:SECONDARY> as the prompt, indicating the pods have formed a mongo replica set. Use rs.conf() from the PRIMARY to verify.

#Create a Secret to hold the MONGO_INITDB_ROOT_USERNAME/PASSWORD
#so we can enable authentication
apiVersion: v1
data:
  #echo -n "mongoadmin" | base64
  init.userid: bW9uZ29hZG1pbg==
  #echo -n "adminpassword" | base64
  init.password: YWRtaW5wYXNzd29yZA==
kind: Secret
metadata:
  name: mongo-init-credentials
  namespace: somenamespace
type: Opaque
---
# Create a secret to hold a keyfile used to authenticate between replicaset members
# this seems to need to be base64 encoded twice (might not be the case if this
# was an actual file reference as per the examples, but we're using a simple key
# here)
apiVersion: v1
data:
  #echo -n "CHANGEMECHANGEMECHANGEME" | base64 | base64
  mongodb-keyfile: UTBoQlRrZEZUVVZEU0VGT1IwVk5SVU5JUVU1SFJVMUYK
kind: Secret
metadata:
  name: mongo-key
  namespace: somenamespace
type: Opaque
---
# Create a service account for Mongo and give it the Pod List role
# note this is a ClusterRoleBinding - the Mongo Pod will be able
# to list all pods present in the cluster regardless of namespace
# (and this is exactly what it does...see below)
apiVersion: v1
kind: ServiceAccount
metadata:
  name: mongo-serviceaccount
  namespace: somenamespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: mongo-somenamespace-serviceaccount-view
  namespace: somenamespace
subjects:
- kind: ServiceAccount
  name: mongo-serviceaccount
  namespace: somenamespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: pod-viewer
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: pod-viewer
  namespace: somenamespace
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["list"]
---
#Create a Storage Class for Google Container Engine
#Note fstype: xfs isn't supported by GCE yet and the
#Pod startup will hang if you try to specify it.
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  namespace: somenamespace
  name: mongodb-ssd-storage
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
allowVolumeExpansion: true
---
#Headless Service for the StatefulSet
apiVersion: v1
kind: Service
metadata:
  namespace: somenamespace
  name: mongo-db
  labels:
    name: mongo-db
spec:
  ports:
  - port: 27017
    targetPort: 27017
  clusterIP: None
  selector:
    app: mongo
---
# Now the fun part
#
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  namespace: somenamespace
  name: mongo-db
spec:
  serviceName: mongo-db
  replicas: 3
  template:
    metadata:
      labels:
        # Labels MUST match MONGO_SIDECAR_POD_LABELS
        # and MUST differentiate between other mongo
        # instances in the CLUSTER not just the namespace
        # as the sidecar will search the entire cluster
        # for something to configure
        app: mongo
        environment: somenamespace
    spec:
      #Run the Pod using the service account
      serviceAccountName: mongo-serviceaccount
      terminationGracePeriodSeconds: 10
      #Prevent a Mongo Replica running on the same node as another (avoid single point of failure)
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - mongo
            topologyKey: "kubernetes.io/hostname"
      containers:
        - name: mongo
          image: mongo:4.0.12
          command:
            #Authentication adapted from https://gist.github.com/thilinapiy/0c5abc2c0c28efe1bbe2165b0d8dc115
            #in order to pass the new admin user id and password in
          - /bin/sh
          - -c
          - >
            if [ -f /data/db/admin-user.lock ]; then
              echo "KUBERNETES LOG $HOSTNAME - Starting Mongo Daemon with runtime settings (clusterAuthMode)"
              #ensure wiredTigerCacheSize is set within the size of the containers memory limit
              mongod --wiredTigerCacheSizeGB 0.5 --replSet rs0 --bind_ip 0.0.0.0 --smallfiles --noprealloc --clusterAuthMode keyFile --keyFile /etc/secrets-volume/mongodb-keyfile --setParameter authenticationMechanisms=SCRAM-SHA-1;
            else
              echo "KUBERNETES LOG $HOSTNAME - Starting Mongo Daemon with setup setting (authMode)"
              mongod --auth;
            fi;
          lifecycle:
            postStart:
              exec:
                command:
                - /bin/sh
                - -c
                - >
                  if [ ! -f /data/db/admin-user.lock ]; then
                    echo "KUBERNETES LOG $HOSTNAME - no admin-user.lock file found yet"
                    #replaced simple sleep with a ping-and-test loop
                    while (! mongo --eval "db.adminCommand('ping')"); do sleep 10; echo "KUBERNETES LOG $HOSTNAME - waiting another 10 seconds for mongo to start" >> /data/db/configlog.txt; done;
                    touch /data/db/admin-user.lock
                    if [ "$HOSTNAME" = "mongo-db-0" ]; then
                      echo "KUBERNETES LOG $HOSTNAME - creating admin user ${MONGODB_USERNAME}"
                      mongo --eval "db = db.getSiblingDB('admin'); db.createUser({ user: '${MONGODB_USERNAME}', pwd: '${MONGODB_PASSWORD}', roles: [{ role: 'root', db: 'admin' }]});" >> /data/db/config.log
                    fi;
                    echo "KUBERNETES LOG $HOSTNAME - shutting mongod down for final restart"
                    mongod --shutdown;
                  fi;
          env:
            - name: MONGODB_USERNAME
              valueFrom:
                secretKeyRef:
                  name: mongo-init-credentials
                  key: init.userid
            - name: MONGODB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mongo-init-credentials
                  key: init.password
          ports:
            - containerPort: 27017
          livenessProbe:
            exec:
              command:
              - mongo
              - --eval
              - "db.adminCommand('ping')"
            initialDelaySeconds: 5
            periodSeconds: 60
            timeoutSeconds: 10
          readinessProbe:
            exec:
              command:
              - mongo
              - --eval
              - "db.adminCommand('ping')"
            initialDelaySeconds: 5
            periodSeconds: 60
            timeoutSeconds: 10
          resources:
            requests:
              memory: "350Mi"
              cpu: 0.05
            limits:
              memory: "1Gi"
              cpu: 0.1
          volumeMounts:
            - name: mongo-key
              mountPath: "/etc/secrets-volume"
              readOnly: true
            - name: mongo-persistent-storage
              mountPath: /data/db
        - name: mongo-sidecar
          image: cvallance/mongo-k8s-sidecar
          env:
            # Sidecar searches for any POD in the CLUSTER with these labels
            # not just the namespace..so we need to ensure the POD is labelled
            # to differentiate it from other PODS in different namespaces
            - name: MONGO_SIDECAR_POD_LABELS
              value: "app=mongo,environment=somenamespace"
            - name: MONGODB_USERNAME
              valueFrom:
                secretKeyRef:
                  name: mongo-init-credentials
                  key: init.userid
            - name: MONGODB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mongo-init-credentials
                  key: init.password
            #don't be fooled by this..it's not your DB that
            #needs specifying, it's the admin DB as that
            #is what you authenticate against with mongo.
            - name: MONGODB_DATABASE
              value: admin
      volumes:
      - name: mongo-key
        secret:
          defaultMode: 0400
          secretName: mongo-key
  volumeClaimTemplates:
  - metadata:
      name: mongo-persistent-storage
      annotations:
        volume.beta.kubernetes.io/storage-class: "mongodb-ssd-storage"
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi


Supposing you created a secret:

apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
data:
  username: YWRtaW4=
  password: MWYyZDFlMmU2N2Rm
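Those data values are just base64-encoded strings (these are the standard example values from the Kubernetes docs); you can decode them to confirm what the secret holds:

```shell
# Decode the secret's data fields (base64 -d on GNU coreutils;
# BSD/macOS may need --decode instead)
u=$(echo "YWRtaW4=" | base64 -d)
p=$(echo "MWYyZDFlMmU2N2Rm" | base64 -d)
echo "username=$u password=$p"
# prints: username=admin password=1f2d1e2e67df
```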

Here is a snippet showing how to get a value from a secret in a Kubernetes YAML file:

env:
  - name: MONGO_INITDB_ROOT_PASSWORD
    valueFrom:
      secretKeyRef:
        name: mysecret
        key: password


I found this issue is related to a bug in docker-entrypoint.sh that occurs when numactl is detected on the node.

Try this simplified code (which moves numactl out of the way):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongo-deployment
  labels:
    app: mongo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongo
  template:
    metadata:
      labels:
        app: mongo
    spec:
      containers:
      - name: mongo
        image: mongo:4.0.0
        command:
        - /bin/bash
        - -c
        # mv is not needed for later versions e.g. 3.4.19 and 4.1.7
        - mv /usr/bin/numactl /usr/bin/numactl1 && source docker-entrypoint.sh mongod
        env:
        - name: MONGO_INITDB_ROOT_USERNAME
          value: "xxxxx"
        - name: MONGO_INITDB_ROOT_PASSWORD
          value: "xxxxx"
        ports:
        - containerPort: 27017

I raised an issue at: https://github.com/docker-library/mongo/issues/330

Hopefully it will be fixed at some point so no need for the hack :o)