
Install Spark on an existing Hadoop cluster


If you already have Hadoop installed on your cluster and want to run Spark on YARN, it's very easy:

Step 1: Find the YARN master node (i.e. the node that runs the Resource Manager). The following steps are to be performed on the master node only.
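If you're not sure which node that is, one quick way to check (assuming a standard layout with $HADOOP_HOME set) is to read the Resource Manager hostname out of yarn-site.xml:

# Print the Resource Manager hostname from the YARN configuration
grep -A1 'yarn.resourcemanager.hostname' $HADOOP_HOME/etc/hadoop/yarn-site.xml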

Step 2: Download the Spark tgz package and extract it somewhere.
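For example, for the Spark 1.5.1 build used later in this post (the download URL and target directory are only placeholders, pick the package matching your Hadoop version and your own layout):

# Download a pre-built Spark package and extract it (URL and paths are examples)
wget https://archive.apache.org/dist/spark/spark-1.5.1/spark-1.5.1-bin-hadoop2.6.tgz
tar -xzf spark-1.5.1-bin-hadoop2.6.tgz -C /opt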

Step 3: Define these environment variables, for example in your .bashrc:

# Spark variables
export YARN_CONF_DIR=$HADOOP_HOME/etc/hadoop
export SPARK_HOME=<extracted_spark_package>
export PATH=$PATH:$SPARK_HOME/bin
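Then reload your shell configuration and check that spark-submit is on your PATH:

# Reload .bashrc and verify the Spark command is found
source ~/.bashrc
spark-submit --version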

Step 4: Run your Spark job, using the --master option set to yarn-client or yarn-cluster:

spark-submit \
  --master yarn-client \
  --class org.apache.spark.examples.JavaSparkPi \
  $SPARK_HOME/lib/spark-examples-1.5.1-hadoop2.6.0.jar \
  100

This particular example uses a pre-compiled example job which comes with the Spark installation.
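If you would rather have the driver itself run inside the cluster, the same job can be submitted in yarn-cluster mode; here is the equivalent sketch using the same example jar:

# Same Pi example, but the driver runs in a YARN container instead of locally
spark-submit \
  --master yarn-cluster \
  --class org.apache.spark.examples.JavaSparkPi \
  $SPARK_HOME/lib/spark-examples-1.5.1-hadoop2.6.0.jar \
  100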

You can read this blog post I wrote for more details on Hadoop and Spark installation on a cluster.

You can read the post which follows to see how to compile and run your own Spark job in Java. If you want to code jobs in Python or Scala, it's convenient to use a notebook like IPython or Zeppelin. Read more about how to use those with your Hadoop-Spark cluster here.