kubectl logs -f gets "Authorization error"


You would need to create a ClusterRoleBinding that grants a suitable ClusterRole to the user kube-apiserver-kubelet-client:

kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubelet-api-admin
subjects:
- kind: User
  name: kube-apiserver-kubelet-client
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:kubelet-api-admin
  apiGroup: rbac.authorization.k8s.io

system:kubelet-api-admin is a built-in ClusterRole that already carries the necessary kubelet API permissions, but you can substitute a more appropriate role for your setup.
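
As a minimal sketch, assuming you save the manifest above as kubelet-api-admin-binding.yaml (the filename is just a placeholder), you can apply it and check that the user can now read node logs:

# Apply the ClusterRoleBinding (filename is an assumption)
kubectl apply -f kubelet-api-admin-binding.yaml

# Verify the user can reach the kubelet logs subresource
kubectl auth can-i get nodes/log --as=kube-apiserver-kubelet-client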


This may also happen if your aws-auth ConfigMap is broken or empty, which can occur, for example, when you run multiple eksctl operations in parallel.
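
To check whether that is the case, inspect the ConfigMap directly (the kube-system namespace and the aws-auth name are standard on EKS). If the node role mapping is missing, one way to restore it is with eksctl; the cluster name, account ID, and role name below are placeholders you would replace with your own:

# Inspect the current aws-auth ConfigMap
kubectl -n kube-system get configmap aws-auth -o yaml

# Re-add the node instance role mapping if it is missing (values are placeholders)
eksctl create iamidentitymapping \
  --cluster <cluster-name> \
  --arn arn:aws:iam::<account-id>:role/<node-instance-role> \
  --username system:node:{{EC2PrivateDNSName}} \
  --group system:bootstrappers \
  --group system:nodes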


On an on-prem cluster, I had this issue after I changed the DNS address of the master. You need to update the API server address in /etc/kubernetes/kubelet.conf on each node and then run sudo systemctl restart kubelet.service.
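
As a rough sketch of the per-node fix (the old and new hostnames below are placeholders, not values from the original answer):

# Point the kubelet at the new API server address, then restart it
sudo sed -i 's|https://old-master.example.com:6443|https://new-master.example.com:6443|' /etc/kubernetes/kubelet.conf
sudo systemctl restart kubelet.service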