
Configure kubectl command to access a remote Kubernetes cluster on Azure


Found a way to access a remote Kubernetes cluster without SSHing into one of the nodes in the cluster. You need to edit the ~/.kube/config file as below:

apiVersion: v1
clusters:
- cluster:
    server: http://<master-ip>:<port>
  name: test
contexts:
- context:
    cluster: test
    user: test
  name: test

Then set context by executing:

kubectl config use-context test

After this you should be able to interact with the cluster.
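To verify that the context works, you can run a couple of standard kubectl commands:

kubectl cluster-info
kubectl get nodes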

Note: To add certificates and keys, see the kubeconfig documentation: http://kubernetes.io/docs/user-guide/kubeconfig-file/
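As a rough sketch, the certificate-based entries in ~/.kube/config look like the following; the file paths are placeholders you would replace with your own CA certificate, client certificate, and client key:

apiVersion: v1
clusters:
- cluster:
    server: https://<master-ip>:<port>
    certificate-authority: /path/to/ca.crt
  name: test
contexts:
- context:
    cluster: test
    user: test
  name: test
users:
- name: test
  user:
    client-certificate: /path/to/client.crt
    client-key: /path/to/client.key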

Alternatively, you can try the following commands:

kubectl config set-cluster test-cluster --server=http://<master-ip>:<port> --api-version=v1
kubectl config use-context test-cluster
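Note that set-cluster only defines the cluster entry, so use-context will fail unless a context named test-cluster also exists; you can create one with set-context first (also, newer kubectl releases have dropped the --api-version flag, so you may need to omit it):

kubectl config set-context test-cluster --cluster=test-cluster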


You can also specify the path to the kubeconfig file by passing the --kubeconfig parameter.

For example, copy the ~/.kube/config of the remote Kubernetes host to your local project's ~/myproject/.kube/config. From ~/myproject you can then list the pods of the remote Kubernetes server by running kubectl get pods --kubeconfig ./.kube/config.
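Put together, the steps look roughly like this (the remote user and host are placeholders):

mkdir -p ~/myproject/.kube
scp <user>@<master-ip>:~/.kube/config ~/myproject/.kube/config
cd ~/myproject
kubectl get pods --kubeconfig ./.kube/config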

Note that when copying the values from the remote Kubernetes server, a plain kubectl config view won't be sufficient, as it redacts the secrets in the config file. Instead, use something like cat ~/.kube/config, or use scp to fetch the full file contents.
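If your kubectl version supports it, kubectl config view --raw prints the config with the secrets included, which avoids the cat/scp step:

kubectl config view --raw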

See: https://kubernetes.io/docs/tasks/administer-cluster/share-configuration/


For anyone landing on this question, the Azure CLI (az) solves the problem.

az aks get-credentials --name MyManagedCluster --resource-group MyResourceGroup

This merges the AKS context into your local .kube\config (if you already have a connection set up; mine was C:\Users\[user]\.kube\config) and switches to the Azure Kubernetes Service connection.
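To confirm the merge worked, list the contexts and check which one is active:

kubectl config get-contexts
kubectl config current-context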
