
Hadoop pseudo-distributed mode - DataNode and TaskTracker not starting


Amend your /etc/hosts to include a hostname loopback mapping:

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
127.0.1.1   is-joshbloom-hadoop
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

Your problem is that your machine doesn't know how to resolve the hostname is-joshbloom-hadoop to a specific IP address. Resolution typically happens in one of two places: a DNS server or the local hosts file (the hosts file takes precedence).

The above amendment to your hosts file allows your machine to resolve the machine name is-joshbloom-hadoop to the IP address 127.0.1.1. The OS reserves the entire 127.0.0.0/8 range for loopback, so you could map the name to any address in that range. My Ubuntu laptop uses 127.0.1.1, and I'm sure this varies between OSes, but my guess is that by not using 127.0.0.1 you don't have to edit the localhost line if you change your machine name in the future.
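If you want to confirm the fix, here is a minimal sketch of the same lookup Hadoop performs (the class name CheckHostname is just for illustration): if this throws UnknownHostException for your machine name, the daemons will keep failing until /etc/hosts (or DNS) can resolve it.

import java.net.InetAddress;

public class CheckHostname {
    public static void main(String[] args) throws Exception {
        String host = args.length > 0 ? args[0] : "is-joshbloom-hadoop";
        // Consults the local hosts file first, then DNS
        InetAddress addr = InetAddress.getByName(host);
        System.out.println(host + " resolves to " + addr.getHostAddress());
    }
}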


Check your core-site.xml in HADOOP_HOME/conf. It will have the fs.default.name property, and the hostname it specifies must be resolvable via your /etc/hosts. The hostname "is-joshbloom-hadoop" is not in your /etc/hosts, so use localhost instead:

<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:54310</value>
</property>
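To verify the setting took effect, a minimal sketch like the following (assuming the Hadoop 1.x jars and conf directory are on the classpath; the class name CheckNameNode is just for illustration) prints the configured URI and then tries to connect to it:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class CheckNameNode {
    public static void main(String[] args) throws Exception {
        // Picks up core-site.xml from the classpath
        Configuration conf = new Configuration();
        System.out.println("fs.default.name = " + conf.get("fs.default.name"));
        // Fails with an unknown-host error if the URI's hostname can't be resolved
        FileSystem fs = FileSystem.get(conf);
        System.out.println("connected to " + fs.getUri());
    }
}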


The problem seems to be that you have nothing in the slaves file under conf/slaves.

Check your slaves file in conf/slaves: remove everything and add localhost to that file. Then remove the name and data directories referenced by the dfs.name.dir and dfs.data.dir properties in hdfs-site.xml (see the sketch below for what those entries look like).
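For reference, the hdfs-site.xml entries in question look roughly like this (the paths shown are placeholders, not your actual values):

<property>
  <name>dfs.name.dir</name>
  <value>/path/to/name/dir</value>
</property>
<property>
  <name>dfs.data.dir</name>
  <value>/path/to/data/dir</value>
</property>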

Format your HDFS filesystem and then start your daemons again.
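Assuming a Hadoop 1.x layout where you run the scripts from HADOOP_HOME, that would be something like the following (note that formatting destroys any data already in HDFS):

bin/hadoop namenode -format
bin/start-all.sh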