Spark 1.6.1 SASL
You will need to set spark.authenticate=true in the YARN configuration as well.
Excerpted from YarnShuffleService.java in the Spark code base:
 * The service also optionally supports authentication. This ensures that executors from one
 * application cannot read the shuffle files written by those from another. This feature can be
 * enabled by setting `spark.authenticate` in the Yarn configuration before starting the NM.
 * Note that the Spark application must also set `spark.authenticate` manually and, unlike in
 * the case of the service port, will not inherit this setting from the Yarn configuration. This
 * is because an application running on the same Yarn cluster may choose to not use the external
 * shuffle service, in which case its setting of `spark.authenticate` should be independent of
 * the service's.
You can do this by adding the following to core-site.xml in your Hadoop configuration:
<property>
  <name>spark.authenticate</name>
  <value>true</value>
</property>
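As the excerpt above notes, the Spark application itself must also set spark.authenticate; it will not inherit the setting from the YARN configuration. A minimal sketch of the application-side setting, either cluster-wide via spark-defaults.conf or per job at submit time (on YARN the shared secret is generated automatically, so only the flag is shown):

```
# In conf/spark-defaults.conf (applies to every application):
spark.authenticate  true

# Or per application when submitting:
spark-submit --conf spark.authenticate=true ...
```

Both the NodeManager-side setting in core-site.xml and the application-side setting must be true, or the executors and the external shuffle service will fail to authenticate with each other.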