Error in namenode starting (Hadoop)



"Stop it first".

  • First call stop-all.sh

  • Type jps

  • Call start-all.sh (or start-dfs.sh and start-mapred.sh)

  • Type jps (if the namenode doesn't appear, type "hadoop namenode" and check the error)
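The steps above can be sketched as a shell session (Hadoop 1.x script names; the grep check on the jps output is illustrative):

```shell
# Stop all daemons first -- "Stop it first"
stop-all.sh

# Confirm no Hadoop daemons remain
jps

# Start everything again (or start-dfs.sh and start-mapred.sh separately)
start-all.sh

# If NameNode is missing from the jps listing, run the daemon in the
# foreground so the real error is printed to the terminal:
if jps | grep -q NameNode; then
    echo "NameNode is up"
else
    hadoop namenode
fi
```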


Note that on newer versions of Hadoop, "stop-all.sh" is deprecated. You should instead use:

stop-dfs.sh

and

stop-yarn.sh
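On a 2.x/YARN install, the restart cycle therefore becomes (a sketch; it assumes Hadoop's sbin directory is on your PATH):

```shell
# Stop the daemons (replaces the deprecated stop-all.sh)
stop-yarn.sh
stop-dfs.sh

# ...then bring them back up (replaces start-all.sh)
start-dfs.sh
start-yarn.sh
```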


Today, while executing pig scripts I got the same error mentioned in the question:

starting namenode, logging to /usr/local/hadoop/libexec/../logs/hadoop-training-namenode-localhost.localdomain.out
localhost: /home/training/.bashrc: line 10: /jdk1.7.0_10/bin: No such file or directory
localhost: Warning: $HADOOP_HOME is deprecated.
localhost:
localhost: starting datanode, logging to /usr/local/hadoop/libexec/../logs/hadoop-training-datanode-localhost.localdomain.out
localhost: /home/training/.bashrc: line 10: /jdk1.7.0_10/bin: No such file or directory
localhost: Warning: $HADOOP_HOME is deprecated.
localhost:
localhost: starting secondarynamenode, logging to /usr/local/hadoop/libexec/../logs/hadoop-training-secondarynamenode-localhost.localdomain.out
starting jobtracker, logging to /usr/local/hadoop/libexec/../logs/hadoop-training-jobtracker-localhost.localdomain.out
localhost: /home/training/.bashrc: line 10: /jdk1.7.0_10/bin: No such file or directory
localhost: Warning: $HADOOP_HOME is deprecated.
localhost:
localhost: starting tasktracker, logging to /usr/local/hadoop/libexec/../logs/hadoop-training-tasktracker-localhost.localdomain.out
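Incidentally, note the line repeated in that output: "/home/training/.bashrc: line 10: /jdk1.7.0_10/bin: No such file or directory". That usually means PATH was built from an empty or wrong JAVA_HOME. A sketch of the kind of ~/.bashrc fix involved (the actual JDK location on your machine is an assumption):

```shell
# ~/.bashrc
# The log's "/jdk1.7.0_10/bin: No such file or directory" suggests
# $JAVA_HOME expanded to nothing when PATH was assembled.
# Point it at the real JDK install -- this path is a guess; adjust it
# to wherever your JDK actually lives:
export JAVA_HOME=/usr/java/jdk1.7.0_10
export PATH=$PATH:$JAVA_HOME/bin
```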

So, the answer is:

[training@localhost bin]$ stop-all.sh

and then type:

[training@localhost bin]$ start-all.sh

The issue will be resolved. Now you can run the pig script with mapreduce!
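For reference, a Pig script can then be launched in MapReduce mode like this (the script name is a placeholder):

```shell
# Run a Pig script against the cluster; MapReduce is Pig's default
# execution mode, so "-x mapreduce" just makes it explicit.
pig -x mapreduce myscript.pig
```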