Hadoop is an open-source Apache framework written in Java that allows distributed processing of large datasets across clusters of computers using simple programming models.
The Hadoop framework works in an environment that provides distributed storage and computation across clusters of computers. Hadoop is designed to scale up from a single server to thousands of machines, each offering local computation and storage.
Prerequisites
1) Machines with the Ubuntu 14.04 LTS operating system installed (this guide uses three: HadoopMaster, HadoopSlave1 and HadoopSlave2).
2) Apache Hadoop 2.6.4 software (the hadoop-2.6.4.tar.gz release).
Fully Distributed Mode (Multi Node Cluster)
This post describes how to install and configure Hadoop clusters ranging from a few nodes to extremely large clusters. To play with Hadoop, you may first want to install it on a single machine (see Single Node Setup).

On All machines – (HadoopMaster, HadoopSlave1, HadoopSlave2)
Step 1 – Update. Open a terminal (CTRL + ALT + T) and run the following sudo command. It is advisable to run this before installing any package; it refreshes the local package lists so that the latest versions are fetched, even if you have not added or removed any software sources.
$ sudo apt-get update
Step 2 – Installing Java 7.
$ sudo apt-get install openjdk-7-jdk
Step 3 – Install the OpenSSH server. SSH is a cryptographic network protocol for operating network services securely over an unsecured network; its best-known application is remote login to computer systems.
$ sudo apt-get install openssh-server
Step 4 – Edit /etc/hosts file.
$ sudo gedit /etc/hosts
In the /etc/hosts file, add the IP address and hostname of every machine. Save and close.
192.168.2.14 HadoopMaster
192.168.2.15 HadoopSlave1
192.168.2.16 HadoopSlave2
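Optionally, you can confirm that the new entries resolve by pinging each hostname once (a quick sanity check; substitute your own hostnames if they differ):
$ ping -c 1 HadoopSlave1
$ ping -c 1 HadoopSlave2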
Step 5 – Create a group and user. We will create a group, then add a new user to it; the sudo permissions for this user are configured in the next step. Here 'hadoop' is the group name and 'hduser' is a user in that group.
$ sudo addgroup hadoop
$ sudo adduser --ingroup hadoop hduser
Step 6 – Configure the sudo permissions for ‘hduser’.
$ sudo visudo
Since Ubuntu's default terminal editor is nano, visudo opens the sudoers file in nano. Add the following permissions entry for 'hduser':

hduser ALL=(ALL) ALL

Save with CTRL + O (press Enter to confirm), or simply press CTRL + X to exit and enter Y when asked to save the file.
Step 7 – Creating hadoop directory.
$ sudo mkdir /usr/local/hadoop
Step 8 – Change the ownership and permissions of the directory /usr/local/hadoop. Here ‘hduser’ is an Ubuntu username.
$ sudo chown -R hduser /usr/local/hadoop
$ sudo chmod -R 755 /usr/local/hadoop
Step 9 – Creating /app/hadoop/tmp directory.
$ sudo mkdir -p /app/hadoop/tmp
Step 10 – Change the ownership and permissions of the directory /app/hadoop/tmp. Here ‘hduser’ is an Ubuntu username.
$ sudo chown -R hduser /app/hadoop/tmp
$ sudo chmod -R 755 /app/hadoop/tmp
Step 11 – Switch user. The su command executes commands with the privileges of another user account; switch to 'hduser'.
$ su hduser
Step 12 – Generating a new SSH public and private key pair on your local computer is the first step towards authenticating with a remote server without a password. Unless there is a good reason not to, you should always authenticate using SSH keys.
$ ssh-keygen -t rsa -P ""
Step 13 – Now you can add the public key to the authorized_keys file.
$ cat $HOME/.ssh/id_rsa.pub >> $HOME/.ssh/authorized_keys
Step 14 – Add the hostname to the list of known hosts. Connecting once by SSH to the machine's own hostname ensures it is added to the list of known hosts, so that a script execution doesn't get interrupted by a question about trusting the computer's authenticity.
$ ssh hostname
Only on HadoopMaster Machine
Step 15 – Switch user. As before, su executes commands with the privileges of another user account; make sure you are running as 'hduser' on the master.
$ su hduser
Step 16 – ssh-copy-id is a small script which copies your SSH public key to a remote host, appending it to the remote authorized_keys file. Copy the master's key to HadoopSlave1.
$ ssh-copy-id -i $HOME/.ssh/id_rsa.pub hduser@192.168.2.15
Step 17 – ssh is a program for logging into a remote machine and for executing commands on it. Check that passwordless remote login now works.
$ ssh 192.168.2.15
Step 18 – Exit from remote login.
$ exit
Repeat steps 16, 17 and 18 for the other slave machine (HadoopSlave2).
$ ssh-copy-id -i $HOME/.ssh/id_rsa.pub hduser@192.168.2.16
$ ssh 192.168.2.16
$ exit
Step 19 – Change to the directory containing the download. In my case the downloaded hadoop-2.6.4.tar.gz file is in the /home/hduser/Desktop folder; for you it might be in the Downloads folder, so check it.
$ cd /home/hduser/Desktop/
Step 20 – Untar the hadoop-2.6.4.tar.gz file.
$ tar xzf hadoop-2.6.4.tar.gz
Step 21 – Move the contents of the hadoop-2.6.4 folder to /usr/local/hadoop.
$ mv hadoop-2.6.4/* /usr/local/hadoop
Step 22 – Edit the $HOME/.bashrc file to add the Java and Hadoop paths.
$ sudo gedit $HOME/.bashrc
In the $HOME/.bashrc file, add the following lines:
# Set Hadoop-related environment variables
export HADOOP_HOME=/usr/local/hadoop
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin
export HADOOP_INSTALL=$HADOOP_HOME
export HADOOP_OPTS="$HADOOP_OPTS -Djava.library.path=/usr/local/hadoop/lib/native"

# Set JAVA_HOME (we will also configure JAVA_HOME directly for Hadoop later on)
export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64
Step 23 – Reload your changed $HOME/.bashrc settings
$ source $HOME/.bashrc
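As an optional sanity check (assuming the paths above match your installation), verify that the environment variables and the Hadoop binaries are now picked up:
$ echo $HADOOP_HOME
$ hadoop version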
Step 24 – Change the directory to /usr/local/hadoop/etc/hadoop
$ cd $HADOOP_HOME/etc/hadoop
Step 25 – Edit hadoop-env.sh file.
$ sudo gedit hadoop-env.sh
Step 26 – Add the below lines to hadoop-env.sh file. Save and Close.
# remove the comment and change JAVA_HOME
export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64
Step 27 – Edit core-site.xml file.
$ sudo gedit core-site.xml
Step 28 – Add the below lines to core-site.xml file. Save and Close.
<property>
  <name>fs.default.name</name>
  <value>hdfs://HadoopMaster:9000</value>
</property>

<property>
  <name>dfs.permissions</name>
  <value>false</value>
</property>

<property>
  <name>hadoop.tmp.dir</name>
  <value>/app/hadoop/tmp</value>
  <description>A base for other temporary directories.</description>
</property>
Step 29 – Edit hdfs-site.xml file.
$ sudo gedit hdfs-site.xml
Step 30 – Add the below lines to hdfs-site.xml file. Save and Close.
<property>
  <name>dfs.name.dir</name>
  <value>/app/hadoop/tmp/namenode</value>
</property>

<property>
  <name>dfs.data.dir</name>
  <value>/app/hadoop/tmp/datanode</value>
</property>

<property>
  <name>dfs.replication</name>
  <value>2</value>
</property>

<property>
  <name>dfs.permissions</name>
  <value>false</value>
</property>

<property>
  <name>dfs.datanode.use.datanode.hostname</name>
  <value>false</value>
</property>

<property>
  <name>dfs.namenode.datanode.registration.ip-hostname-check</name>
  <value>false</value>
</property>

<property>
  <name>dfs.namenode.http-address</name>
  <value>HadoopMaster:50070</value>
  <description>Your NameNode hostname for http access.</description>
</property>

<property>
  <name>dfs.namenode.secondary.http-address</name>
  <value>HadoopMaster:50090</value>
  <description>Your Secondary NameNode hostname for http access.</description>
</property>
Step 31 – Edit yarn-site.xml file.
$ sudo gedit yarn-site.xml
Step 32 – Add the below lines to yarn-site.xml file. Save and Close.
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
  <description>Long-running service which executes on the Node Manager(s) and provides MapReduce Sort and Shuffle functionality.</description>
</property>

<property>
  <name>yarn.log-aggregation-enable</name>
  <value>true</value>
  <description>Enable log aggregation so application logs are moved onto HDFS and are viewable via the web UI after the application has completed. The default location on HDFS is '/log' and can be changed via the yarn.nodemanager.remote-app-log-dir property.</description>
</property>

<property>
  <name>yarn.resourcemanager.scheduler.address</name>
  <value>HadoopMaster:8030</value>
</property>

<property>
  <name>yarn.resourcemanager.resource-tracker.address</name>
  <value>HadoopMaster:8031</value>
</property>

<property>
  <name>yarn.resourcemanager.address</name>
  <value>HadoopMaster:8032</value>
</property>

<property>
  <name>yarn.resourcemanager.admin.address</name>
  <value>HadoopMaster:8033</value>
</property>

<property>
  <name>yarn.resourcemanager.webapp.address</name>
  <value>HadoopMaster:8088</value>
</property>
Step 33 – Edit mapred-site.xml file (if only mapred-site.xml.template exists, copy it to mapred-site.xml first).
$ sudo gedit mapred-site.xml
Step 34 – Add the below lines to mapred-site.xml file. Save and Close.
<property>
  <name>mapred.job.tracker</name>
  <value>HadoopMaster:9001</value>
</property>

<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>
Step 35 – Edit slaves file.
$ sudo gedit slaves
Step 36 – Add the below lines to the slaves file. Save and Close.
192.168.2.15
192.168.2.16
Step 37 – Secure copy or SCP is a means of securely transferring computer files between a local host and a remote host or between two remote hosts. Here we are transferring configured hadoop files from master to slave nodes.
$ scp -r /usr/local/hadoop/* hduser@192.168.2.15:/usr/local/hadoop
$ scp -r /usr/local/hadoop/* hduser@192.168.2.16:/usr/local/hadoop
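To verify the copy completed, you can optionally list the remote directory over SSH (adjust the IP address for each slave):
$ ssh hduser@192.168.2.15 ls /usr/local/hadoop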
Step 38 – Here we are transferring the configured .bashrc file from the master to the slave nodes.
$ scp -r $HOME/.bashrc hduser@192.168.2.15:$HOME/.bashrc
$ scp -r $HOME/.bashrc hduser@192.168.2.16:$HOME/.bashrc
Step 39 – Change the directory to /usr/local/hadoop/sbin
$ cd /usr/local/hadoop/sbin
Step 40 – Format the NameNode.
$ hadoop namenode -format
Step 41 – Start NameNode daemon and DataNode daemon.
$ start-dfs.sh
Step 42 – Start yarn daemons.
$ start-yarn.sh
OR
Instead of steps 41 and 42 you can use the command below, although it is now deprecated.
$ start-all.sh
Step 43 – Check the running daemons with jps. The jps (Java Virtual Machine Process Status) tool reports information on the JVMs for which it has access permissions, so it can be used to confirm which Hadoop daemons are running.
$ jps
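If the daemons started correctly, the master's listing should include the NameNode, SecondaryNameNode and ResourceManager processes. A rough illustration (the PIDs shown are placeholders and will differ on your machine):
2xxxx NameNode
2xxxx SecondaryNameNode
2xxxx ResourceManager
2xxxx Jps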
Only on slave machines – (HadoopSlave1 and HadoopSlave2)
hduser@HadoopSlave1:~$ jps
hduser@HadoopSlave2:~$ jps
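On each slave, the listing should include the DataNode and NodeManager daemons (again, the PIDs below are placeholders):
3xxxx DataNode
3xxxx NodeManager
3xxxx Jps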
Only on HadoopMaster Machine
Once the Hadoop cluster is up and running, check the web UI of the components as described below.
NameNode – Browse the web interface for the NameNode; by default it is available at:
http://HadoopMaster:50070/
ResourceManager – Browse the web interface for the ResourceManager; by default it is available at:
http://HadoopMaster:8088/
Step 44 – Make the HDFS directories required to execute MapReduce jobs.
$ hdfs dfs -mkdir /user
$ hdfs dfs -mkdir /user/hduser
Step 45 – Copy the input files into the distributed filesystem.
$ hdfs dfs -put /usr/local/hadoop/etc/hadoop /user/hduser/input
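You can optionally confirm that the files landed in HDFS; the listing should show the copied configuration files:
$ hdfs dfs -ls /user/hduser/input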
Step 46 – Run some of the examples provided.

$ hadoop jar /usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.4.jar grep /user/hduser/input /user/hduser/output 'dfs[a-z.]+'
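Note that MapReduce will refuse to run if the output directory already exists; if you want to re-run the example, remove the previous output first (an optional clean-up step):
$ hdfs dfs -rm -r /user/hduser/output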
Step 47 – Examine the output files.
$ hdfs dfs -cat /user/hduser/output/*
Step 48 – Stop NameNode daemon and DataNode daemon.
$ stop-dfs.sh
Step 49 – Stop Yarn daemons.
$ stop-yarn.sh
OR
Instead of steps 48 and 49 you can use the command below, although it is now deprecated.
$ stop-all.sh