Hadoop DataNode is not starting
First delete all contents from the HDFS folder, i.e. the directory given as the value of <name>hadoop.tmp.dir</name> in core-site.xml:
rm -rf /usr/local/hadoop_store
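If you are not sure which folder that is, hadoop.tmp.dir is configured in core-site.xml. A typical entry, assuming the /usr/local/hadoop_store path used above, looks like:

```xml
<property>
  <name>hadoop.tmp.dir</name>
  <value>/usr/local/hadoop_store</value>
  <description>Base directory under which HDFS keeps its data.</description>
</property>
```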
Make sure /usr/local/hadoop_store has the right owner and permissions:
hduser@localhost$ sudo chown -R hduser:hadoop /usr/local/hadoop_store
hduser@localhost$ sudo chmod -R 777 /usr/local/hadoop_store
Format the namenode:
hduser@localhost$hadoop namenode -format
Start all processes again (e.g. with start-dfs.sh and start-yarn.sh) and verify with jps.
I also ran into the same problem, and I fixed it by changing the owner of those working directories. Even though you have 777 permissions on these two directories, the framework will not be able to use them unless you change the owner to hduser.
$ sudo chown -R hduser:hadoop /usr/local/hadoop/yarn_data/hdfs/namenode
$ sudo chown -R hduser:hadoop /usr/local/hadoop/yarn_data/hdfs/datanode
After this, start your cluster again and you should see the DataNode running.
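To confirm the DataNode actually came up, you can scan the output of jps for a DataNode entry. A minimal sketch; the sample input is hard-coded here so it runs without a cluster, but on a real node you would pipe `jps` into the function instead:

```shell
# Return a status line depending on whether jps-style input lists a DataNode.
check_datanode() {
  if grep -q 'DataNode'; then
    echo "DataNode is running"
  else
    echo "DataNode NOT running"
  fi
}

# Sample jps-style output (hypothetical PIDs); replace with: jps | check_datanode
printf '1234 NameNode\n5678 DataNode\n9012 SecondaryNameNode\n' | check_datanode
# prints "DataNode is running"
```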
- First stop all the entities like the NameNode, DataNode, etc. (you will have a script or command to do that, e.g. stop-all.sh)
- Clear the tmp directory: go to /var/cache/hadoop-hdfs/hdfs/dfs/ and delete all the contents of the directory manually
- Now format your namenode again
- Start all the entities, then use the jps command to confirm that the DataNode has started
- Now run whichever application you like
Hope this helps.
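The clean-and-recreate part of the answers above can be sketched as a short script. This version runs against a temporary directory so it is safe to try anywhere; on a real cluster the path would be your actual storage directory (e.g. /usr/local/hadoop_store) and you would also chown it to hduser:hadoop, which needs root and is therefore left out here:

```shell
#!/bin/sh
# Sketch: wipe and recreate the HDFS storage layout with sane permissions.
# STORE points at a throwaway temp dir; substitute your real hadoop.tmp.dir.
STORE="$(mktemp -d)/hadoop_store"

# remove any stale data, then recreate the namenode/datanode directories
rm -rf "$STORE"
mkdir -p "$STORE/hdfs/namenode" "$STORE/hdfs/datanode"

# give the directories usable permissions (on a real node also:
#   sudo chown -R hduser:hadoop "$STORE")
chmod -R 755 "$STORE"

ls "$STORE/hdfs"
# prints "datanode" and "namenode"
```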