
pods is forbidden: User "system:serviceaccount:kubernetes-dashboard:admin-user" cannot list resource "pods" in API group "" in the namespace "default"


You probably need to bind the dashboard service account to the cluster admin role:

kubectl create clusterrolebinding dashboard-admin-sa --clusterrole=cluster-admin --serviceaccount=default:dashboard-admin-sa

Adjust the --serviceaccount value to the namespace:name of your dashboard service account; for the error above that would be kubernetes-dashboard:admin-user.

Otherwise, the dashboard's service account doesn't have access to the data that would populate the dashboard.
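To verify whether a given service account actually has the needed access, kubectl can impersonate it. A quick check, using the account name from the error message above:

# Ask the API server whether the dashboard service account may list pods in
# "default"; prints "yes" once a suitable binding exists, "no" otherwise.
$ kubectl auth can-i list pods -n default --as=system:serviceaccount:kubernetes-dashboard:admin-user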


I am answering this based on my experience with dashboard v2.1.0 on K8s v1.20. When kubernetes-dashboard is installed, it creates a service account plus a Role and a ClusterRole, all named "kubernetes-dashboard"; the Role is bound within the dashboard namespace and the ClusterRole cluster-wide (but it is not cluster-admin). So, unfortunately, the permissions are not sufficient to manage the entire cluster, as can be seen here:

(screenshot: default account unable to see cluster data)

Log from installation:

$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.1.0/aio/deploy/recommended.yaml
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created

Looking at the permissions you see:

$ kubectl describe clusterrole kubernetes-dashboard
Name:         kubernetes-dashboard
Labels:       k8s-app=kubernetes-dashboard
Annotations:  <none>
PolicyRule:
  Resources             Non-Resource URLs  Resource Names  Verbs
  ---------             -----------------  --------------  -----
  nodes.metrics.k8s.io  []                 []              [get list watch]
  pods.metrics.k8s.io   []                 []              [get list watch]

$ kubectl describe role kubernetes-dashboard -n kubernetes-dashboard
Name:         kubernetes-dashboard
Labels:       k8s-app=kubernetes-dashboard
Annotations:  <none>
PolicyRule:
  Resources       Non-Resource URLs  Resource Names                     Verbs
  ---------       -----------------  --------------                     -----
  secrets         []                 [kubernetes-dashboard-certs]       [get update delete]
  secrets         []                 [kubernetes-dashboard-csrf]        [get update delete]
  secrets         []                 [kubernetes-dashboard-key-holder]  [get update delete]
  configmaps      []                 [kubernetes-dashboard-settings]    [get update]
  services/proxy  []                 [dashboard-metrics-scraper]        [get]
  services/proxy  []                 [heapster]                         [get]
  services/proxy  []                 [http:dashboard-metrics-scraper]   [get]
  services/proxy  []                 [http:heapster:]                   [get]
  services/proxy  []                 [https:heapster:]                  [get]
  services        []                 [dashboard-metrics-scraper]        [proxy]
  services        []                 [heapster]                         [proxy]
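You can also confirm that the cluster-wide binding points at this limited role rather than cluster-admin by inspecting the binding itself:

# For the stock manifest, the roleRef here is the limited
# "kubernetes-dashboard" ClusterRole, not cluster-admin.
$ kubectl describe clusterrolebinding kubernetes-dashboard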

Rather than making the kubernetes-dashboard service account a cluster-admin (that account is used for the dashboard's own data collection), a better approach is to create a new service account that exists only to hold a login token. That way the account can easily be revoked later, instead of having to change permissions on a pre-created account.
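Revoking access later then comes down to deleting the binding and the account (using the dashboard-admin names created in the steps below):

# Any tokens issued for the account stop working once it is deleted.
$ kubectl delete clusterrolebinding dashboard-admin
$ kubectl -n kubernetes-dashboard delete serviceaccount dashboard-admin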

To create a new service account called "dashboard-admin" and apply it declaratively:

$ nano dashboard-svcacct.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dashboard-admin
  namespace: kubernetes-dashboard

$ kubectl apply -f dashboard-svcacct.yaml
serviceaccount/dashboard-admin created
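For reference, the same account can also be created imperatively in one line:

# Imperative equivalent of the ServiceAccount manifest above.
$ kubectl -n kubernetes-dashboard create serviceaccount dashboard-admin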

To bind that new service account to the cluster-admin ClusterRole:

$ nano dashboard-binding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: dashboard-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: dashboard-admin
  namespace: kubernetes-dashboard

$ kubectl apply -f dashboard-binding.yaml
clusterrolebinding.rbac.authorization.k8s.io/dashboard-admin created
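To double-check the binding before moving on, the same impersonation trick from the first answer works here too; this should now come back positive for the new account:

# Impersonate the new account; "yes" confirms the cluster-admin binding is active.
$ kubectl auth can-i list pods -n default --as=system:serviceaccount:kubernetes-dashboard:dashboard-admin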

To extract the token for this service account, which can be used to log in:

$ kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep dashboard-admin | awk '{print $1}')
Name:         dashboard-admin-token-4fxtt
Namespace:    kubernetes-dashboard
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: dashboard-admin
              kubernetes.io/service-account.uid: 9cd5bb80-7901-413b-9eac-7b72c353d4b9
Type:  kubernetes.io/service-account-token
Data
====
ca.crt:     1066 bytes
namespace:  20 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6Ikp3ZERpQTFPOV<REDACTED>
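As a side note: on clusters newer than the versions this answer targets (v1.24 and later), token secrets are no longer created automatically for service accounts, so the grep against kubectl get secret will come up empty. There, a token can be requested explicitly:

# Kubernetes v1.24+: request a short-lived token for the account;
# the token is printed to stdout.
$ kubectl -n kubernetes-dashboard create token dashboard-admin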

The entire token, which starts with "eyJ", can now be used on the dashboard's token sign-in screen:

(screenshot: dashboard token login)

But cutting and pasting the token to log in can become a pain in the rear, especially given the default session timeout. I prefer a config file. For this option you will need the cluster's certificate-authority data. The clusters section of this config file is the same as in the file under ~/.kube/config. The config file does not need to be loaded onto the Kubernetes master; it is only needed on the workstation whose browser accesses the dashboard. I named it dashboard-config and used VS Code to create it (any editor works; just make sure to unwrap the text so there are no spaces or line breaks inside the base64 values). There is no need to keep any of the admin client certificate and key data under users: if copying from the existing config file.

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <CLUSTER CA DATA HERE>
    server: https://<IP ADDR OF CLUSTER>:6443
  name: kubernetes # name of cluster
contexts:
- context:
    cluster: kubernetes
    user: dashboard-admin
  name: dashboard-admin@kubernetes
current-context: dashboard-admin@kubernetes
kind: Config
preferences: {}
users:
- name: dashboard-admin
  user:
    token: <TOKEN from the command above, starts with eyJ>
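Before loading the file in the dashboard's kubeconfig sign-in, it is worth a quick sanity check from the workstation (dashboard-config being the file name chosen above):

# Uses the dashboard-admin token from the file rather than your normal
# credentials; a pod listing confirms the config is well-formed and the token valid.
$ kubectl --kubeconfig=dashboard-config get pods -A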

And it works now.