
Hadoop Namenode and Datanode Setup Guide

The document provides steps to configure a Hadoop distributed file system (HDFS) with one Namenode and one Datanode. The steps include changing hostnames, configuring configuration files, specifying masters and slaves, generating SSH keys, formatting the namenode, and starting HDFS and MapReduce services. Upon completion, the jps command should show the expected HDFS and MapReduce daemon processes running on each node.

Uploaded by Saurabh Kothari

Step 1: Change the hostname (both nodes)

Command: sudo gedit /etc/hostname

Namenode: change the previous hostname from Ubuntu to Namenode.
Datanode: change the previous hostname from Ubuntu to Datanode.

Restart both VMs.
Step 2: core-site.xml (identical on both nodes)

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://(ip of namenode):8020</value>
  </property>
</configuration>

Note: fs.default.name takes a full filesystem URI, so include the hdfs:// scheme before the namenode IP.
Step 3: mapred-site.xml (identical on both nodes)

<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>(ip of namenode):8021</value>
  </property>
</configuration>
Step 4: hdfs-site.xml (identical on both nodes)

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
  <property>
    <name>dfs.permissions</name>
    <value>false</value>
  </property>
</configuration>

Note: with only one Datanode, a replication factor of 1 avoids under-replicated blocks; 2 only takes effect once a second Datanode joins the cluster.
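The XML fragments in Steps 2-4 can also be generated from the shell rather than edited by hand. A minimal sketch for core-site.xml, assuming the example namenode IP used later in this guide (192.168.71.146) and a temporary directory in place of the real Hadoop conf directory:

```shell
# Sketch: generate core-site.xml for a given namenode IP.
# NN_IP uses the example address from Step 5; substitute your own.
# CONF_DIR is a scratch directory standing in for Hadoop's conf dir.
NN_IP=192.168.71.146
CONF_DIR=$(mktemp -d)

cat > "$CONF_DIR/core-site.xml" <<EOF
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://$NN_IP:8020</value>
  </property>
</configuration>
EOF

# Quick sanity check: the value must be a full hdfs:// URI.
grep -q "hdfs://$NN_IP:8020" "$CONF_DIR/core-site.xml" && echo "core-site.xml OK"
```

The same heredoc pattern applies to mapred-site.xml and hdfs-site.xml, which keeps the IP in one variable instead of three hand-edited files.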
Step 5: masters file

Namenode: IP of the namenode, e.g.
  192.168.71.146
Datanode: leave blank.

Step 6: slaves file

Namenode: IPs of both nodes, e.g.
  192.168.71.146
  192.168.71.147
Datanode: IP of the datanode, e.g.
  192.168.71.147
Step 7: sudo gedit /etc/hosts (both nodes)

Add:
  192.168.71.146 namenode
  192.168.71.147 datanode

Remove all 127.X.X.X entries.
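The two mappings added in Step 7 can be sketched as follows; this writes them to a scratch copy (so the real /etc/hosts is untouched) and checks that no loopback entries remain. The IPs are the example addresses from Steps 5 and 6.

```shell
# Sketch: the two lines Step 7 adds to /etc/hosts, written to a scratch
# file here so the format can be checked without touching the real file.
HOSTS_SCRATCH=$(mktemp)
cat > "$HOSTS_SCRATCH" <<EOF
192.168.71.146 namenode
192.168.71.147 datanode
EOF

# Each line must be "<ip> <hostname>", with no 127.X.X.X entries left.
! grep -q '^127\.' "$HOSTS_SCRATCH" && echo "hosts fragment OK"
```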


Step 8: Clear old SSH keys (both nodes)

cd ~/.ssh
rm *

Step 9: Generate and distribute SSH keys (on the Namenode)

ssh-keygen

ssh-copy-id -i <u/n of NN>@<ip of NN>
ssh-copy-id -i <u/n of NN>@<ip of DN>

e.g.
ssh-copy-id -i [email protected]
ssh-copy-id -i [email protected]
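The key-generation half of Step 9 can be sketched non-interactively; this writes a passphrase-less RSA pair into a scratch directory (via -f) so the real ~/.ssh is untouched:

```shell
# Sketch: generate a passphrase-less RSA key pair, as Step 9's ssh-keygen
# does in ~/.ssh. Here -f redirects the output to a scratch directory,
# -N "" sets an empty passphrase, and -q suppresses the prompts.
KEY_DIR=$(mktemp -d)
ssh-keygen -t rsa -N "" -f "$KEY_DIR/id_rsa" -q

ls "$KEY_DIR"   # id_rsa (private key) and id_rsa.pub (public key)
```

ssh-copy-id then appends id_rsa.pub to authorized_keys on the target host, which is what lets start-dfs.sh reach the Datanode without a password prompt.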

Step 10: Format the Namenode and start the services (on the Namenode)

hadoop namenode -format

start-dfs.sh
start-mapred.sh
jps

Step 11: Verify with jps (both nodes)

On the Namenode, jps should list:
  NameNode
  SecondaryNameNode
  JobTracker

On the Datanode, jps should list:
  DataNode
  TaskTracker