DataStax Enterprise on Docker: fails to start due to /hadoop/conf directory not being writable
Adding 'chown -RHh cassandra:cassandra /opt/dse' to the entrypoint script solved my problem of not being able to write to /opt/dse/resources/hadoop/conf.
Re. ERROR 04:15:04,789 SPARK-WORKER Logging.scala:74 - Failed to create work directory /var/lib/spark/worker
Check your spark-env.sh and review your directory mappings. In my case, I have mounted two external volumes, /data and /logs. Both of these directories are owned by cassandra:cassandra.
    # This is a base directory for Spark Worker work files.
    if [ "x$SPARK_WORKER_DIR" = "x" ]; then
        export SPARK_WORKER_DIR="/data/spark/worker"
    fi

    if [ "x$SPARK_LOCAL_DIRS" = "x" ]; then
        export SPARK_LOCAL_DIRS="/data/spark/rdd"
    fi

    # This is a base directory for Spark Worker logs.
    if [ "x$SPARK_WORKER_LOG_DIR" = "x" ]; then
        export SPARK_WORKER_LOG_DIR="/logs/spark/worker"
    fi

    # This is a base directory for Spark Master logs.
    if [ "x$SPARK_MASTER_LOG_DIR" = "x" ]; then
        export SPARK_MASTER_LOG_DIR="/logs/spark/master"
    fi
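If the mounted volumes do not already belong to the cassandra user, a one-time preparation step along these lines would set them up. This is only a sketch: the ROOT variable is my addition so the script can be dry-run outside the container; inside the container it would be empty, i.e. the real /data and /logs mounts.

```shell
#!/bin/sh
set -e
# Sketch: pre-create the directories referenced by the spark-env.sh
# settings above and hand them to the cassandra user. ROOT is an
# assumption of this sketch so it can be dry-run anywhere; inside
# the container it would simply be empty.
ROOT=${ROOT:-$(mktemp -d)}
mkdir -p "$ROOT/data/spark/worker" "$ROOT/data/spark/rdd" \
         "$ROOT/logs/spark/worker" "$ROOT/logs/spark/master"
# Inside the container, as root, the ownership hand-off would be:
#   chown -R cassandra:cassandra "$ROOT/data" "$ROOT/logs"
echo "prepared Spark directories under $ROOT"
```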
This video shows fully functional DSE Enterprise running on docker: https://vimeo.com/181393134
I added chown -RHh cassandra:cassandra /opt/dse in the setup_node() portion of the DSE startup script (called by the Docker container on startup) and it fixed the issue. See chown --help for more info on those options.
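For reference, the flags matter because the /opt/dse tree contains symlinks: -R recurses, -H dereferences symlinks named on the command line, and -h retags symlinks themselves rather than their targets. A tiny illustration of the symlink distinction, using a scratch directory and plain readlink (the chown itself is left as a comment, since it needs root and a cassandra user that only exists inside the image):

```shell
#!/bin/sh
set -e
# -R  recurse through the whole tree
# -H  if a command-line argument is a symlink to a directory, traverse it
# -h  change the ownership of symlinks themselves, not their targets
# The demo only builds a symlink layout to show what -h would act on.
tmp=$(mktemp -d)
mkdir "$tmp/real"
ln -s "$tmp/real" "$tmp/link"
target=$(readlink "$tmp/link")
echo "link resolves to: $target"
# chown -RHh cassandra:cassandra "$tmp"  # would retag real/, its contents, and the link itself
```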
NOTE: I'm now getting an
ERROR 04:15:04,789 SPARK-WORKER Logging.scala:74 - Failed to create work directory /var/lib/spark/worker
later on, but at least this fix will get you past your initial issue.
    setup_node() {
        printf "* Setting up node...\n"
        printf "  + Setting up node...\n"
        create_dirs
        tweak_cassandra_config "$DSE_HOME/resources/cassandra/conf"
        tweak_dse_in_sh "$DSE_HOME/bin"
        tweak_spark_config "$DSE_HOME/resources/spark/conf"
        tweak_agent_config
        tweak_dse_config "$DSE_HOME/resources/dse/conf"
        chown -R cassandra:cassandra /data /logs /conf
        chown -RHh cassandra:cassandra /opt/dse
        # mark that we tweaked configs
        touch "$DSE_HOME/tweaked_configs"
        printf "Done.\n"
    }
Doing this:
I added chown -RHh cassandra:cassandra /opt/dse in the setup_node() portion of DSE Startup Script (Called by Docker container on startup)
as answered by Max worked for me, but instead of his error I got
    Unable to activate plugin com.datastax.bdp.plugin.DseFsPlugin
    (...)
    java.io.IOException: Failed to create work directory: /var/lib/dsefs
So I had to change my setup_node() to this:
    setup_node() {
        printf "* Setting up node...\n"
        printf "  + Setting up node...\n"
        create_dirs
        tweak_cassandra_config "$DSE_HOME/resources/cassandra/conf"
        tweak_dse_in_sh "$DSE_HOME/bin"
        tweak_spark_config "$DSE_HOME/resources/spark/conf"
        tweak_agent_config
        chown -R cassandra:cassandra /data /logs /conf
        mkdir /var/lib/dsefs
        chown -RHh cassandra:cassandra /opt/dse /var/lib/dsefs
        # mark that we tweaked configs
        touch "$DSE_HOME/tweaked_configs"
        printf "Done.\n"
    }