
SSH into kubernetes nodes created through KOPS


You can't recover the private key, but you should be able to install a new public key by following this procedure:

kops delete secret --name <clustername> sshpublickey admin
kops create secret --name <clustername> sshpublickey admin -i ~/.ssh/newkey.pub
kops update cluster --yes (to reconfigure the auto-scaling groups)
kops rolling-update cluster --name <clustername> --yes (to immediately roll all the machines so they have the new key; optional)

Taken from this document:

https://github.com/kubernetes/kops/blob/master/docs/security.md#ssh-access
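Put together, a minimal end-to-end sketch of that rotation might look like the following (the cluster name, the key path, and the ssh-keygen step are placeholders/assumptions, not part of the linked document):

# Generate a replacement key pair locally (example path)
ssh-keygen -t rsa -f ~/.ssh/newkey

# Swap the admin SSH public key stored by kOps
kops delete secret --name <clustername> sshpublickey admin
kops create secret --name <clustername> sshpublickey admin -i ~/.ssh/newkey.pub

# Apply the change to the auto-scaling groups
kops update cluster --name <clustername> --yes

# Optionally roll all machines now so they pick up the new key
kops rolling-update cluster --name <clustername> --yes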


This gives me all the secrets. Can I use these to get SSH access to the nodes, and if so, how?

Not really. Those are the secrets used to access the kube-apiserver in the cluster, for example so that you can run kubectl commands.

I see the cluster state is stored in S3. Does it store the secret key as well?

The cluster state is stored in S3, but not the SSH keys used to access the servers. Those are stored in AWS under 'Key Pairs' (in the EC2 console).

[Screenshot: AWS EC2 console, 'Key Pairs' page]

Unfortunately, you can only download the private key once, at the time you create the key pair. So if you no longer have the private key, you are out of luck there. If you have access to the AWS console, you could snapshot the root volumes of your instances and recreate your nodes (or control plane) one by one with a different AWS key pair whose private key you do have.
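If you want to check which key pair your instances were actually launched with before going that route, an AWS CLI query along these lines should work (a sketch; the KubernetesCluster tag is an assumption based on how kOps commonly tags its instances, and <clustername> is a placeholder):

# Show instance ID and attached key pair name for the cluster's instances
aws ec2 describe-instances \
  --filters "Name=tag:KubernetesCluster,Values=<clustername>" \
  --query "Reservations[].Instances[].{Id:InstanceId,Key:KeyName}" \
  --output table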


In my case, when I installed the cluster with kOps, I first ran ssh-keygen as below, which created the id_rsa public/private key pair. That lets me simply ssh into the nodes afterwards:

ssh-keygen
kops create secret --name ${KOPS_CLUSTER_NAME} sshpublickey admin -i ~/.ssh/id_rsa.pub

and then created the cluster and logged in with

kops update cluster --name ${KOPS_CLUSTER_NAME} --yes
ssh admin@ec2-13-59-4-99.us-east-2.compute.amazonaws.com
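If the key is not in the default location, or you want to be explicit about which key ssh uses, you can pass it with -i (the hostname is the example node from above, and the admin login user assumes a Debian-based kOps image; other AMIs may use a different user):

ssh -i ~/.ssh/id_rsa admin@ec2-13-59-4-99.us-east-2.compute.amazonaws.com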