
Hadoop: binding multiple IP addresses to a cluster NameNode


The asker later edited this answer into the question:

In hdfs-site.xml, set the value of dfs.namenode.rpc-bind-host to 0.0.0.0 and Hadoop will listen on both the private and public network interfaces, allowing remote access and DataNode access.


This is documented in HDFS Support for Multihomed Networks (also covered in the Cloudera documentation under the same title) and in Parameters for Multi-Homing in the Hortonworks documentation.

<property>
  <name>dfs.namenode.rpc-bind-host</name>
  <value>0.0.0.0</value>
  <description>
    The actual address the RPC server will bind to. If this optional address is
    set, it overrides only the hostname portion of dfs.namenode.rpc-address.
    It can also be specified per name node or name service for HA/Federation.
    This is useful for making the name node listen on all interfaces by
    setting it to 0.0.0.0.
  </description>
</property>

Additionally, it is recommended to set dfs.namenode.rpc-bind-host, dfs.namenode.servicerpc-bind-host, dfs.namenode.http-bind-host, and dfs.namenode.https-bind-host.

By default, HDFS endpoints are specified as either hostnames or IP addresses. In either case, HDFS daemons will bind to a single IP address, making the daemons unreachable from other networks.

The solution is to have separate settings for server endpoints that force binding to the wildcard IP address INADDR_ANY, i.e. 0.0.0.0. Do NOT supply a port number with any of these settings.

NOTE: Prefer using hostnames over IP addresses in master/slave configuration files.

<property>
  <name>dfs.namenode.rpc-bind-host</name>
  <value>0.0.0.0</value>
  <description>
    The actual address the RPC server will bind to. If this optional address is
    set, it overrides only the hostname portion of dfs.namenode.rpc-address.
    It can also be specified per name node or name service for HA/Federation.
    This is useful for making the name node listen on all interfaces by
    setting it to 0.0.0.0.
  </description>
</property>
<property>
  <name>dfs.namenode.servicerpc-bind-host</name>
  <value>0.0.0.0</value>
  <description>
    The actual address the service RPC server will bind to. If this optional
    address is set, it overrides only the hostname portion of
    dfs.namenode.servicerpc-address. It can also be specified per name node
    or name service for HA/Federation. This is useful for making the name
    node listen on all interfaces by setting it to 0.0.0.0.
  </description>
</property>
<property>
  <name>dfs.namenode.http-bind-host</name>
  <value>0.0.0.0</value>
  <description>
    The actual address the HTTP server will bind to. If this optional address
    is set, it overrides only the hostname portion of dfs.namenode.http-address.
    It can also be specified per name node or name service for HA/Federation.
    This is useful for making the name node HTTP server listen on all
    interfaces by setting it to 0.0.0.0.
  </description>
</property>
<property>
  <name>dfs.namenode.https-bind-host</name>
  <value>0.0.0.0</value>
  <description>
    The actual address the HTTPS server will bind to. If this optional address
    is set, it overrides only the hostname portion of dfs.namenode.https-address.
    It can also be specified per name node or name service for HA/Federation.
    This is useful for making the name node HTTPS server listen on all
    interfaces by setting it to 0.0.0.0.
  </description>
</property>
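To read back the values just configured, you can use the hdfs getconf utility from the standard HDFS CLI (a quick sanity check, not part of the original answer):

    # Print the effective bind-host values from the local configuration.
    # Each should print 0.0.0.0 after the change above.
    hdfs getconf -confKey dfs.namenode.rpc-bind-host
    hdfs getconf -confKey dfs.namenode.servicerpc-bind-host
    hdfs getconf -confKey dfs.namenode.http-bind-host
    hdfs getconf -confKey dfs.namenode.https-bind-host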

Note: Before making the modification, stop the Cloudera Manager agent and server as follows:

  1. service cloudera-scm-agent stop
  2. service cloudera-scm-server stop

If your cluster is configured with primary and secondary NameNodes, this modification needs to be made on both nodes, with the server and agent stopped; see the sketch below.
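As a hedged sketch (not from the original answer), the edit can be pushed to both NameNodes from an admin host. The hostnames nn1.example.com and nn2.example.com and the path /etc/hadoop/conf/hdfs-site.xml are assumptions; substitute your own:

    #!/usr/bin/env bash
    # Sketch: stop the Cloudera Manager agent and copy the edited
    # hdfs-site.xml to both NameNodes. Hostnames and the config path are
    # assumptions; cloudera-scm-server typically runs on one host only,
    # so stop it there separately.
    set -euo pipefail

    NAMENODES=(nn1.example.com nn2.example.com)   # hypothetical hostnames
    CONF_PATH=/etc/hadoop/conf/hdfs-site.xml      # verify on your cluster

    for nn in "${NAMENODES[@]}"; do
      ssh "root@${nn}" "service cloudera-scm-agent stop"
      scp hdfs-site.xml "root@${nn}:${CONF_PATH}"
    done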

After saving the hdfs-site.xml file, start the server and agent on the NameNodes, and also the agent on the DataNodes (this won't hurt the cluster if it is done there too), using the following:

  1. service cloudera-scm-agent start
  2. service cloudera-scm-server start

The same solution can be implemented on IBM BigInsights:

    To configure HDFS to bind to all the interfaces, add the following configuration variable using Ambari under HDFS -> Configs -> Advanced -> Custom hdfs-site:

        dfs.namenode.rpc-bind-host = 0.0.0.0

    Restart HDFS to apply the configuration change. Verify that port 8020 is bound and listening to requests from all the interfaces using the following command:

        netstat -anp | grep 8020
        tcp 0 0 0.0.0.0:8020 0.0.0.0:* LISTEN 15826/java

IBM BigInsights: How to configure Hadoop client port 8020 to bind to all the network interfaces?


In Cloudera Manager's HDFS configuration there is a property called Bind NameNode to Wildcard Address; just check the box and it will bind the service to 0.0.0.0,

then restart the HDFS service:

On the Home > Status tab, click the options menu to the right of the service name and select Restart. Click Start on the next screen to confirm. When you see a Finished status, the service has restarted.
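If you prefer to script the restart, the same operation can be issued through the Cloudera Manager REST API (a sketch; the host, credentials, API version, and cluster/service names below are assumptions):

    # Restart the HDFS service via the Cloudera Manager REST API.
    # Host, credentials, API version (v19), and the cluster/service names
    # are assumptions; adjust them to your deployment.
    curl -u admin:admin -X POST \
      "http://cm-host.example.com:7180/api/v19/clusters/Cluster1/services/hdfs/commands/restart"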

Starting, Stopping, Refreshing, and Restarting a Cluster
Starting, Stopping, and Restarting Services