How to run Spark standalone on Kubernetes?


For standalone Spark on Kubernetes, the two canonical samples are:

  1. https://github.com/kubernetes/charts/tree/master/stable/spark
  2. https://github.com/kubernetes/examples/tree/master/staging/spark

These are currently running outdated versions of Spark, and require updating to 2.1 and soon 2.2. (PRs are welcome :)).
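For reference, the first sample is packaged as a Helm chart, so deploying it is roughly a one-liner. A minimal sketch, assuming Helm 2 syntax and that the chart is still published under the stable repository (neither verified here):

# install the chart and check that master/worker pods come up
helm install --name my-spark stable/spark
kubectl get pods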

The https://github.com/apache-spark-on-k8s/spark fork is not for standalone mode; it aims to let Spark launch directly on Kubernetes clusters, and it will eventually be merged into upstream Spark. Documentation, if you wish to make use of it, is here.

As of now, if you want to use Spark 2.1, the options are either to build your own image or to package your application with the Spark distribution from apache-spark-on-k8s.
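To illustrate the second option, a submission against the apache-spark-on-k8s distribution looks roughly like the sketch below. The conf keys and the kubespark image tags are recalled from that fork's documentation and may have changed, so treat them as assumptions and check the docs linked above:

# sketch only: run SparkPi from the fork's distribution directly on a Kubernetes cluster
bin/spark-submit \
  --deploy-mode cluster \
  --master k8s://https://<k8s-apiserver-host>:<k8s-apiserver-port> \
  --class org.apache.spark.examples.SparkPi \
  --conf spark.executor.instances=2 \
  --conf spark.kubernetes.driver.docker.image=kubespark/spark-driver:v2.1.0-kubernetes-0.3.1 \
  --conf spark.kubernetes.executor.docker.image=kubespark/spark-executor:v2.1.0-kubernetes-0.3.1 \
  local:///opt/spark/examples/jars/spark-examples_2.11-2.1.0.jar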


I first tried the simplest idea, Approach 3:

Build my own Docker image containing my application and the Spark binary: http://blog.madhukaraphatak.com/scaling-spark-with-kubernetes-part-5/
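Conceptually the image is just a JRE, the Spark distribution, and the application jar. A minimal sketch (the base image, Spark download URL, and jar name are illustrative and not taken from the linked repository):

# write an illustrative Dockerfile and build the image
cat > Dockerfile <<'EOF'
FROM openjdk:8-jre
# unpack the official Spark 2.1.0 distribution under /opt/spark
ADD https://archive.apache.org/dist/spark/spark-2.1.0/spark-2.1.0-bin-hadoop2.7.tgz /opt/
RUN tar -xzf /opt/spark-2.1.0-bin-hadoop2.7.tgz -C /opt \
    && ln -s /opt/spark-2.1.0-bin-hadoop2.7 /opt/spark
# hypothetical application jar built alongside the Dockerfile
COPY target/my-app.jar /opt/my-app.jar
EOF
docker build -t my-spark-app:2.1.0 .

The same image can then run the standalone master, the workers, and spark-submit, wired together with Kubernetes services as described in the post.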

Code example: https://github.com/phatak-dev/kubernetes-spark

It worked well.


Check out my https://github.com/radanalyticsio/spark-operator

It deploys standalone Spark on Kubernetes and OpenShift, and it also supports the spark-on-k8s native scheduler. The default Spark version is 2.4.0.

You can find a very quick start in the project's readme file; however, here is one way to deploy a Spark cluster using the operator:

# create operator
kubectl apply -f https://raw.githubusercontent.com/radanalyticsio/spark-operator/master/manifest/operator.yaml

# create cluster
cat <<EOF | kubectl apply -f -
apiVersion: radanalytics.io/v1
kind: SparkCluster
metadata:
  name: my-cluster
spec:
  worker:
    instances: "2"
EOF
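Once that is applied, the operator should bring up a master pod and two worker pods for my-cluster. A plain pod listing is the simplest sanity check (label selectors vary between operator versions, so none is assumed here):

# wait for the master and both workers to reach Running
kubectl get pods -w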