Single Node Setup
    Purpose
    Prerequisites
        Supported Platforms
        Required Software
        Installing Software
    Download
    Prepare to Start the Hadoop Cluster
    Standalone Operation
    Pseudo-Distributed Operation
        Configuration
        Setup passphraseless ssh
        Execution
    Fully-Distributed Operation


    Purpose
    This document describes how to set up and configure a single-node Hadoop installation so
    that you can quickly perform simple operations using Hadoop MapReduce and the Hadoop
    Distributed File System (HDFS).


    Prerequisites
    Supported Platforms
    GNU/Linux is supported as a development and production platform. Hadoop has been
    demonstrated on GNU/Linux clusters with 2000 nodes.
    Win32 is supported as a development platform. Distributed operation has not been well
    tested on Win32, so it is not supported as a production platform.

    Required Software
    Required software for Linux and Windows includes:

       1. Java™ 1.6.x, preferably from Sun, must be installed.
       2. ssh must be installed and sshd must be running to use the Hadoop scripts that
          manage remote Hadoop daemons.
    Additional requirements for Windows include:

       1. Cygwin - Required for shell support in addition to the required software above.

    Installing Software
    If your cluster doesn't have the requisite software, you will need to install it.

    For example on Ubuntu Linux:

    $ sudo apt-get install ssh
    $ sudo apt-get install rsync
On Windows, if you did not install the required software when you installed Cygwin, start the
Cygwin installer and select the packages:

openssh - the Net category


Download
To get a Hadoop distribution, download a recent stable release from one of the Apache
Download Mirrors.


Prepare to Start the Hadoop Cluster
Unpack the downloaded Hadoop distribution. In the distribution, edit the file
conf/hadoop-env.sh to define at least JAVA_HOME to be the root of your Java installation.
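For example, the definition can be appended from the shell. The Java path below is only an assumed location; substitute the root of your own installation:

```shell
# Append a JAVA_HOME definition to the environment script.
# /usr/lib/jvm/java-6-sun is an assumed path; use your own Java root.
echo 'export JAVA_HOME=/usr/lib/jvm/java-6-sun' >> conf/hadoop-env.sh
```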

Try the following command:
$ bin/hadoop
This will display the usage documentation for the hadoop script.

Now you are ready to start your Hadoop cluster in one of the three supported modes:

Local (Standalone) Mode
Pseudo-Distributed Mode
Fully-Distributed Mode


Standalone Operation
By default, Hadoop is configured to run in a non-distributed mode, as a single Java process.
This is useful for debugging.

The following example copies the unpacked conf directory to use as input and then finds
and displays every match of the given regular expression. Output is written to the
given output directory.
$ mkdir input
$ cp conf/*.xml input
$ bin/hadoop jar hadoop-examples-*.jar grep input output 'dfs[a-z.]+'
$ cat output/*
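To see what the job computes without running Hadoop at all, the same tally can be sketched with plain Unix tools. This is only an illustration of the result; the real example runs the matching and counting as a MapReduce program:

```shell
# Count occurrences of each string matching the pattern across the inputs,
# most frequent first - the same tallies the Hadoop grep example emits.
grep -ohE 'dfs[a-z.]+' input/*.xml | sort | uniq -c | sort -rn
```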


Pseudo-Distributed Operation
Hadoop can also be run on a single node in a pseudo-distributed mode where each Hadoop
daemon runs in a separate Java process.

Configuration
Use the following:

conf/core-site.xml:
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>


conf/hdfs-site.xml:

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>


conf/mapred-site.xml:

<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9001</value>
  </property>
</configuration>
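If you prefer to create these files from the shell, a here-document works. core-site.xml is shown as a sketch; the other two files follow the same pattern:

```shell
# Write conf/core-site.xml with the single property shown above.
mkdir -p conf   # the conf directory already exists in an unpacked distribution
cat > conf/core-site.xml <<'EOF'
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
EOF
```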

Setup passphraseless ssh
Now check that you can ssh to the localhost without a passphrase:
$ ssh localhost

If you cannot ssh to localhost without a passphrase, execute the following commands:
$ ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
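If ssh still prompts for a passphrase after this, the usual cause is file permissions: sshd's default StrictModes setting ignores keys whose files are group- or world-accessible. Tightening them is a common fix:

```shell
# sshd ignores authorized_keys when ~/.ssh or the key file is too open.
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys
```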

Execution
Format a new distributed filesystem:
$ bin/hadoop namenode -format

Start the hadoop daemons:
$ bin/start-all.sh

The hadoop daemon log output is written to the ${HADOOP_LOG_DIR} directory (defaults
to ${HADOOP_HOME}/logs).
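To confirm the daemons came up cleanly, a quick look at the end of a log is often enough. The log file names include the user and host, so the wildcard below is illustrative:

```shell
# Show the last lines of the NameNode log; ERROR/FATAL lines here usually
# explain a daemon that failed to start.
tail -n 20 logs/hadoop-*-namenode-*.log
```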

Browse the web interface for the NameNode and the JobTracker; by default they are
available at:

NameNode - http://localhost:50070/
JobTracker - http://localhost:50030/
Copy the input files into the distributed filesystem:
$ bin/hadoop fs -put conf input
Run some of the examples provided:
$ bin/hadoop jar hadoop-examples-*.jar grep input output 'dfs[a-z.]+'

Examine the output files:

Copy the output files from the distributed filesystem to the local filesystem and examine
them:
$ bin/hadoop fs -get output output
$ cat output/*

or

View the output files on the distributed filesystem:
$ bin/hadoop fs -cat output/*

When you're done, stop the daemons with:
$ bin/stop-all.sh


Fully-Distributed Operation
For information on setting up fully-distributed, non-trivial clusters, see Cluster Setup.

Java and JNI are trademarks or registered trademarks of Sun Microsystems, Inc. in the
United States and other countries.
