
Kubernetes 1.9 can't initialize SparkContext


Try switching the pod network to a CNI plugin other than Calico, and check whether kube-dns is working properly.
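For example, you can check the DNS pods and test name resolution from inside the cluster (a quick sanity check, assuming a standard kube-dns/CoreDNS setup; the busybox pod name is arbitrary):

$ kubectl get pods -n kube-system -l k8s-app=kube-dns
$ kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- nslookup kubernetes.default

If the lookup fails or times out, the driver pod will not be able to resolve the Kubernetes API server and SparkContext initialization will hang.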

To create a custom service account, a user can use the kubectl create serviceaccount command. For example, the following command creates a service account named spark:

$ kubectl create serviceaccount spark

To grant a service account a Role or ClusterRole, a RoleBinding or ClusterRoleBinding is needed. To create one, a user can use the kubectl create rolebinding (or clusterrolebinding for a ClusterRoleBinding) command. For example, the following command creates a ClusterRoleBinding named spark-role that grants the edit ClusterRole to the spark service account created above in the default namespace:

$ kubectl create clusterrolebinding spark-role --clusterrole=edit --serviceaccount=default:spark --namespace=default
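If you prefer declarative manifests, the two commands above are roughly equivalent to the following YAML (a sketch; save it to a file and apply it with kubectl apply -f):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: spark
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: spark-role
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: edit
subjects:
- kind: ServiceAccount
  name: spark
  namespace: default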

Depending on the version and setup of Kubernetes deployed, this default service account may or may not have the role that allows driver pods to create pods and services under the default Kubernetes RBAC policies. Sometimes users may need to specify a custom service account that has the right role granted. Spark on Kubernetes supports specifying a custom service account for the driver pod through the configuration property spark.kubernetes.authenticate.driver.serviceAccountName. For example, to make the driver pod use the spark service account, a user simply adds the following option to the spark-submit command:

spark-submit \
  --master k8s://https://192.168.1.5:6443 \
  --deploy-mode cluster \
  --name spark-pi \
  --class org.apache.spark.examples.SparkPi \
  --conf spark.executor.instances=5 \
  --conf spark.kubernetes.authenticate.driver.serviceAccountName=spark \
  --conf spark.kubernetes.container.image=leeivan/spark:latest \
  local:///opt/spark/examples/jars/spark-examples_2.11-2.3.0.jar


I faced the same issue. If you're using minikube, try recreating the cluster with minikube delete followed by minikube start, then create the serviceaccount and clusterrolebinding again.


To add to openbrace's answer, and based on Ivan Lee's answer too: if you are using minikube, running the following command was enough for me:

kubectl create clusterrolebinding default --clusterrole=edit --serviceaccount=default:default --namespace=default

That way, I didn't have to change spark.kubernetes.authenticate.driver.serviceAccountName when using spark-submit.
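To confirm the binding took effect, you can ask the API server whether the default service account is now allowed to create pods (assuming kubectl is pointed at the minikube cluster; the command should print "yes"):

$ kubectl auth can-i create pods --as=system:serviceaccount:default:default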