
Can't connect to MariaDB by hostname within a Kubernetes cluster


Since you are using a Kubernetes Deployment, the names of your pods are generated dynamically from the name you set in the spec file; in your example, the pods will be created with names like db-xxxxxxxxxx-xxxxx.
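You can list the generated names with a label selector (a quick sketch, assuming your pod template carries the label name: db, the same label the Service selector below uses):

# pods created by the Deployment get a random suffix appended to the name
$ kubectl get pods -l name=db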

To get a 'fixed' hostname, you need to create a Service to reach your pods, for example:

apiVersion: v1
kind: Service
metadata:
  name: db
spec:
  selector:
    name: db
  ports:
    - protocol: TCP
      port: 3306
      targetPort: 3306
  type: ClusterIP

And to check that it was successfully deployed:

$ kubectl get svc db
NAME   TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
db     ClusterIP   10.96.218.18   <none>        3306/TCP   89s

The full name of your service will be <name>.<namespace>.svc.cluster.local; in this case, using the default namespace, that is db.default.svc.cluster.local, pointing to the IP 10.96.218.18 as shown in the example above.
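From inside the cluster this name resolves without any extra configuration. A quick way to verify is to run a throwaway pod (a sketch, the image and pod name are just placeholders):

# resolve the service name from within the cluster using a temporary pod
$ kubectl run dns-test --rm -it --image=busybox --restart=Never -- nslookup db.default.svc.cluster.local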

To reach the service from the host node, you need to add this information to your /etc/hosts:

echo -e "10.96.218.18\tdb.default.svc.cluster.local db db.default" >> /etc/hosts

After that you will be able to reach the service by its DNS name:

$ dig +short db
10.96.218.18

$ mysql -h db -uroot -p
Enter password: 
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 10
Server version: 5.5.5-10.4.12-MariaDB-1:10.4.12+maria~bionic mariadb.org binary distribution

Copyright (c) 2000, 2020, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql>

Just so you know, you could also use a Helm chart to set up MariaDB with replication. See this article.
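As a rough sketch (the repository and values below assume the Bitnami MariaDB chart; value names can differ between chart versions):

# add the chart repository and install MariaDB in replication mode
$ helm repo add bitnami https://charts.bitnami.com/bitnami
$ helm install my-mariadb bitnami/mariadb --set architecture=replication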

References:

https://kubernetes.io/docs/concepts/services-networking/service/

https://kubernetes.io/docs/concepts/workloads/controllers/deployment/


To be able to access the service from the host node, you need to define a Service object in Kubernetes.

So the complete Kubernetes objects should look like the snippets below.

PersistentVolumeClaim

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    io.kompose.service: db-data
  name: db-data
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi

Service

apiVersion: v1
kind: Service
metadata:
  labels:
    app: mysql
  name: mysql
spec:
  ports:
  - port: 3306
    targetPort: 3306
  selector:
    app: mysql
  type: ClusterIP

Deployment

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: mysql
  name: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql:5.7
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: dummy
        - name: MYSQL_DATABASE
          value: community_db
        resources: {}
        volumeMounts:
        - mountPath: /var/lib/mysql
          name: db-data
        ports:
        - containerPort: 3306
      volumes:
      - name: db-data
        persistentVolumeClaim:
          claimName: db-data
      restartPolicy: Always
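Once these are applied, other pods can reach the database through the Service name mysql. A quick way to test, assuming you saved the manifests as pvc.yaml, service.yaml and deployment.yaml (the file and client pod names are just placeholders):

# create the objects and check the Service got a cluster IP
$ kubectl apply -f pvc.yaml -f service.yaml -f deployment.yaml
$ kubectl get svc mysql

# connect from a throwaway client pod using the Service name as hostname
$ kubectl run -it --rm mysql-client --image=mysql:5.7 --restart=Never -- mysql -h mysql -uroot -pdummy community_db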