
hadoop namenode port in use


Found the issue. It came from a quirk in this server's history: the IP address had changed, but the new address was appended to /etc/hosts rather than replacing the old entry. This appears to have confused the Hadoop startup, which tried to open 50070 on a non-existent interface. The error being "port in use" made this a little confusing to diagnose.
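For illustration, a stale /etc/hosts in that state might look something like this (the hostname and addresses below are made up for the example):

192.168.1.15   hadoop-node1    # old address, never removed
192.168.1.42   hadoop-node1    # new address, appended later

A quick way to spot the duplicate is grep $(hostname) /etc/hosts; only one line should come back.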


Download osquery from https://code.facebook.com/projects/658950180885092 and install it.

Then start the interactive shell:

osqueryi

When the prompt appears, use this SQL query to list all running Java processes and their PIDs:

SELECT name, path, pid FROM processes WHERE name = 'java';

On a Mac, you will get something that looks like this:

+------+---------------------------------------------------------------------------+-------+
| name | path                                                                      | pid   |
+------+---------------------------------------------------------------------------+-------+
| java | /Library/Java/JavaVirtualMachines/jdk1.8.0_45.jdk/Contents/Home/bin/java | 59446 |
| java | /Library/Java/JavaVirtualMachines/jdk1.8.0_45.jdk/Contents/Home/bin/java | 59584 |
| java | /Library/Java/JavaVirtualMachines/jdk1.8.0_45.jdk/Contents/Home/bin/java | 59676 |
| java | /Library/Java/JavaVirtualMachines/jdk1.8.0_45.jdk/Contents/Home/bin/java | 59790 |
+------+---------------------------------------------------------------------------+-------+

Issue sudo kill PID for each of those processes to free up the port in use, 0.0.0.0:50070, as shown below.
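Using the sample PIDs from the output above as an illustration (your PIDs will differ), that would be:

sudo kill 59446 59584 59676 59790

kill accepts several PIDs at once; start without -9 so the JVMs can shut down cleanly, and escalate to -9 only if they refuse to exit.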

Then, after all this, retry sbin/start-dfs.sh and the NameNode should now start.
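One way to confirm it came up and owns the port (jps ships with the JDK, and lsof is available on most systems):

jps | grep NameNode
sudo lsof -i :50070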


Use the following command to check which running processes are using Java:

ps aux | grep java

After that, kill all the Hadoop-related processes, using the PIDs from the above command, as follows:

sudo kill -9 PID
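If you would rather do it in one step, here is a rough sketch that should work on most Linux and macOS systems; pkill -f matches against the full command line, so preview the matches with pgrep first before sending -9:

pgrep -fl hadoop
sudo pkill -9 -f hadoop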