9. Try the first example
hduser@ubuntu:/usr/local/hadoop$ cd $HADOOP_PREFIX
hduser@ubuntu:/usr/local/hadoop$ hadoop jar hadoop-examples-1.0.4.jar pi 2 10
Number of Maps = 2
Samples per Map = 10
Wrote input for Map #0
Wrote input for Map #1
Starting Job
13/04/03 15:01:40 INFO mapred.FileInputFormat: Total input paths to process : 2
13/04/03 15:01:41 INFO mapred.JobClient: Running job: job_201304031458_0003
13/04/03 15:01:42 INFO mapred.JobClient: map 0% reduce 0%
13/04/03 15:02:00 INFO mapred.JobClient: map 100% reduce 0%
13/04/03 15:02:15 INFO mapred.JobClient: map 100% reduce 100%
13/04/03 15:02:19 INFO mapred.JobClient: Job complete: job_201304031458_0003
13/04/03 15:02:19 INFO mapred.JobClient: Counters: 30
13/04/03 15:02:19 INFO mapred.JobClient: Job Counters
…
13/04/03 15:02:19 INFO mapred.JobClient: Reduce output records=0
13/04/03 15:02:19 INFO mapred.JobClient: Virtual memory (bytes) snapshot=1118670848
13/04/03 15:02:19 INFO mapred.JobClient: Map output records=4
Job Finished in 39.148 seconds
Estimated value of Pi is 3.80000000000000000000
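The pi example estimates π by Monte Carlo sampling: it scatters points in the unit square and counts how many land inside the inscribed quarter circle. With only 2 maps × 10 samples the estimate (3.8 above) is very coarse. A minimal local sketch of the same idea, using plain pseudo-random sampling (Hadoop's PiEstimator actually uses a Halton quasi-random sequence, so the numbers will differ):

```python
import random

def estimate_pi(num_maps, samples_per_map, seed=42):
    """Monte Carlo pi estimate: the fraction of random points in the
    unit square falling inside the quarter circle, times 4.
    (Illustrative sketch only; not Hadoop's Halton-sequence sampler.)"""
    random.seed(seed)
    inside = 0
    total = num_maps * samples_per_map
    for _ in range(total):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / total

print(estimate_pi(2, 10))        # as coarse as the 2x10 run above
print(estimate_pi(100, 10000))   # more samples -> much closer to 3.14159
```

More samples tighten the estimate, which is why the job takes `maps` and `samples` as arguments.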
12. Configuring SSH
• Create SSH keys on the localhost
su - hduser
ssh-keygen -t rsa -P ""
• Append the public key id_rsa.pub to the localhost's authorized keys
touch ~/.ssh/authorized_keys && chmod 600 ~/.ssh/authorized_keys
cat $HOME/.ssh/id_rsa.pub >> $HOME/.ssh/authorized_keys
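The steps above can be mirrored in a small Python sketch, useful for scripting the same setup on several machines (the key string below is a placeholder, not a real key; paths are created under a temp directory for illustration):

```python
import os
import stat
import tempfile

def install_pubkey(home, pubkey_line):
    """Mimic the shell steps above: ensure ~/.ssh/authorized_keys
    exists with 0600 permissions, then append the public key."""
    ssh_dir = os.path.join(home, ".ssh")
    os.makedirs(ssh_dir, mode=0o700, exist_ok=True)
    auth = os.path.join(ssh_dir, "authorized_keys")
    open(auth, "a").close()        # touch ~/.ssh/authorized_keys
    os.chmod(auth, 0o600)          # chmod 600 ~/.ssh/authorized_keys
    with open(auth, "a") as f:     # cat id_rsa.pub >> authorized_keys
        f.write(pubkey_line.rstrip("\n") + "\n")
    return auth

home = tempfile.mkdtemp()
path = install_pubkey(home, "ssh-rsa AAAAB3... hduser@ubuntu")
print(oct(stat.S_IMODE(os.stat(path).st_mode)))  # 0o600
```

The 0600 permission matters: sshd refuses keys in a group- or world-writable authorized_keys file.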
13. Configuration
• Edit the configuration in /usr/local/hadoop/conf/hadoop-env.sh and add the following line:
export JAVA_HOME=/usr/local/jdk
14. Configuration (cont.)
• Create folders to store the node's data
sudo mkdir -p /hadoop_data/name
sudo mkdir -p /hadoop_data/data
sudo mkdir -p /hadoop_data/temp
sudo chown hduser:hadoop /hadoop_data/name
sudo chown hduser:hadoop /hadoop_data/data
sudo chown hduser:hadoop /hadoop_data/temp
15. conf/core-site.xml
<configuration>
<property>
<name>hadoop.tmp.dir</name>
<value>/hadoop_data/temp</value>
<description>A base for other temporary directories.</description>
</property>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:54310</value>
<description>The name of the default file system. A URI whose
scheme and authority determine the FileSystem implementation. The
uri's scheme determines the config property (fs.SCHEME.impl) naming
the FileSystem implementation class. The uri's authority is used to
determine the host, port, etc. for a filesystem.</description>
</property>
</configuration>
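Hadoop's *-site.xml files are flat lists of name/value properties. A quick sanity check is to parse them yourself; the sketch below reads the two properties from the core-site.xml shown above with Python's standard XML parser (this is an illustration, not Hadoop's own Configuration class):

```python
import xml.etree.ElementTree as ET

CORE_SITE = """<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/hadoop_data/temp</value>
  </property>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:54310</value>
  </property>
</configuration>"""

def read_conf(xml_text):
    """Parse Hadoop-style <property><name>/<value> pairs into a dict."""
    root = ET.fromstring(xml_text)
    return {p.findtext("name"): p.findtext("value")
            for p in root.findall("property")}

conf = read_conf(CORE_SITE)
print(conf["fs.default.name"])  # hdfs://localhost:54310
```

A malformed site file (a typo in a closing tag, say) will fail to parse here just as it makes the daemons fail at startup.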
17. conf/hdfs-site.xml
<configuration>
<property>
<name>dfs.name.dir</name>
<!-- Path to store namespace and transaction logs -->
<value>/hadoop_data/name</value>
</property>
<property>
<name>dfs.data.dir</name>
<!-- Path to store data blocks in datanode -->
<value>/hadoop_data/data</value>
</property>
<property>
<name>dfs.replication</name>
<value>1</value>
<description>Default block replication. The actual number of replications can
be specified when the file is created. The default is used if replication is not
specified in create time.
</description>
</property>
</configuration>
18. Format a new filesystem
notroot@ubuntu:/usr/local/hadoop/conf$ su - hduser
Password:
hduser@ubuntu:~$ /usr/local/hadoop/bin/hadoop namenode -format
13/04/03 13:41:24 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = ubuntu.localdomain/127.0.1.1
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 1.0.4
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r 1393290;
compiled by 'hortonfo' on Wed Oct 3 05:13:58 UTC 2012
************************************************************/
Re-format filesystem in /hadoop_data/name ? (Y or N) Y
13/04/03 13:41:26 INFO util.GSet: VM type = 32-bit
13/04/03 13:41:26 INFO util.GSet: 2% max memory = 19.33375 MB
13/04/03 13:41:26 INFO util.GSet: capacity = 2^22 = 4194304 entries
….
13/04/03 13:41:28 INFO common.Storage: Storage directory /hadoop_data/name has been successfully formatted.
13/04/03 13:41:28 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at ubuntu.localdomain/127.0.1.1
************************************************************/
Do not format a running Hadoop file system as you will lose all the
data currently in the cluster (in HDFS)!
19. Start Single Node Cluster
hduser@ubuntu:~$ /usr/local/hadoop/bin/start-all.sh
starting namenode, logging to /usr/local/hadoop-1.0.4/libexec/../logs/hadoop-hduser-namenode-ubuntu.out
localhost: starting datanode, logging to /usr/local/hadoop-1.0.4/libexec/../logs/hadoop-hduser-datanode-ubuntu.out
localhost: starting secondarynamenode, logging to /usr/local/hadoop-1.0.4/libexec/../logs/hadoop-hduser-secondarynamenode-ubuntu.out
starting jobtracker, logging to /usr/local/hadoop-1.0.4/libexec/../logs/hadoop-hduser-jobtracker-ubuntu.out
localhost: starting tasktracker, logging to /usr/local/hadoop-1.0.4/libexec/../logs/hadoop-hduser-tasktracker-ubuntu.out
20. How to verify Hadoop processes
• A nifty tool for checking whether the expected Hadoop processes are running is jps (part of the Sun JDK tools)
hduser@ubuntu:~$ jps
1203 NameNode
1833 Jps
1615 JobTracker
1541 SecondaryNameNode
1362 DataNode
1788 TaskTracker
• You can also check with netstat whether Hadoop is listening on the configured ports.
notroot@ubuntu:/usr/local/hadoop/conf$ sudo netstat -plten | grep java
tcp 0 0 127.0.0.1:54310 0.0.0.0:* LISTEN 1001 7167 2438/java
tcp 0 0 127.0.0.1:54311 0.0.0.0:* LISTEN 1001 7949 2874/java
tcp 0 0 0.0.0.0:50090 0.0.0.0:* LISTEN 1001 7898 2791/java
tcp 0 0 0.0.0.0:50030 0.0.0.0:* LISTEN 1001 8035 2874/java
tcp 0 0 0.0.0.0:50070 0.0.0.0:* LISTEN 1001 7202 2438/java
tcp 0 0 0.0.0.0:57143 0.0.0.0:* LISTEN 1001 7585 2791/java
tcp 0 0 0.0.0.0:41943 0.0.0.0:* LISTEN 1001 7222 2608/java
tcp 0 0 0.0.0.0:58936 0.0.0.0:* LISTEN 1001 6969 2438/java
tcp 0 0 127.0.0.1:50234 0.0.0.0:* LISTEN 1001 8158 3050/java
tcp 0 0 0.0.0.0:50010 0.0.0.0:* LISTEN 1001 7697 2608/java
tcp 0 0 0.0.0.0:50075 0.0.0.0:* LISTEN 1001 7775 2608/java
tcp 0 0 0.0.0.0:40067 0.0.0.0:* LISTEN 1001 7764 2874/java
tcp 0 0 0.0.0.0:50020 0.0.0.0:* LISTEN 1001 7939 2608/java
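Most of the listening ports in that netstat output are well-known Hadoop 1.x defaults; 54310 comes from this guide's core-site.xml, and 54311 is the JobTracker IPC port conventionally set in mapred-site.xml. A small sketch that names the daemon behind each netstat line (the port table is from standard Hadoop 1.x defaults; ports not in it are ephemeral RPC sockets):

```python
# Well-known Hadoop 1.x ports (54310/54311 from this setup's config,
# the 500xx ports are Hadoop 1.x defaults).
KNOWN_PORTS = {
    54310: "NameNode IPC (fs.default.name)",
    54311: "JobTracker IPC",
    50070: "NameNode web UI",
    50090: "SecondaryNameNode web UI",
    50030: "JobTracker web UI",
    50060: "TaskTracker web UI",
    50010: "DataNode data transfer",
    50020: "DataNode IPC",
    50075: "DataNode web UI",
}

def identify(netstat_line):
    """Extract the local port from a netstat -plten line and name it."""
    local_addr = netstat_line.split()[3]      # e.g. '0.0.0.0:50070'
    port = int(local_addr.rsplit(":", 1)[1])
    return KNOWN_PORTS.get(port, "ephemeral/unknown")

line = "tcp 0 0 0.0.0.0:50070 0.0.0.0:* LISTEN 1001 7202 2438/java"
print(identify(line))  # NameNode web UI
```

Browsing http://localhost:50070 (NameNode) and http://localhost:50030 (JobTracker) is the quickest visual check that the cluster is up.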
22. Running a MapReduce job
• We will use three ebooks from Project
Gutenberg for this example:
– The Outline of Science, Vol. 1 (of 4) by J. Arthur Thomson
– The Notebooks of Leonardo Da Vinci
– Ulysses by James Joyce
• Download each ebook as a text file in Plain Text UTF-8 encoding and store the files in /tmp/gutenberg
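The job run over these files is the classic WordCount. Its two phases can be sketched locally in Python; this is an illustration of the map/reduce logic, not Hadoop's Java WordCount (the tokenizer regex is an assumption):

```python
import re
from collections import Counter

def map_phase(text):
    """Map: emit a (word, 1) pair for every token, like the mapper."""
    for word in re.findall(r"[a-z']+", text.lower()):
        yield word, 1

def reduce_phase(pairs):
    """Reduce: sum the counts emitted for each distinct word."""
    counts = Counter()
    for word, n in pairs:
        counts[word] += n
    return counts

sample = "Ulysses by James Joyce. Ulysses is long."
counts = reduce_phase(map_phase(sample))
print(counts["ulysses"])  # 2
```

In the real job, Hadoop splits the input across mappers, sorts and groups the (word, 1) pairs by key, and feeds each group to a reducer; the logic per word is exactly the sum above.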
In the preceding sample, MapReduce ran in local mode, without starting any servers and using the local filesystem to store inputs, outputs, and working data. The following diagram shows what happened in the WordCount program under the covers: