Hadoop installation guide
1. INSTALLING HADOOP
- Install the JRE: $sudo apt-get install default-jre
- Install the JDK: $sudo apt-get install default-jdk
- Download Hadoop and extract it into the home directory.
- Check the Java version: $java -version
- Set the environment variables by adding the following to .bashrc: $ nano .bashrc
export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-i386
export PATH=${JAVA_HOME}/bin:/home/student/hadoop/bin:${PATH}
export HADOOP_CLASSPATH=${JAVA_HOME}/lib/tools.jar
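As a quick sanity check, the effect of the three exports can be sketched in plain shell (the JDK path is the one assumed in this guide and may differ on your machine):

```shell
#!/bin/sh
# Sketch: mimic the three exports from .bashrc and inspect the result.
# /usr/lib/jvm/java-7-openjdk-i386 is the path assumed above; adjust as needed.
JAVA_HOME=/usr/lib/jvm/java-7-openjdk-i386
PATH=${JAVA_HOME}/bin:/home/student/hadoop/bin:${PATH}
HADOOP_CLASSPATH=${JAVA_HOME}/lib/tools.jar

# The first PATH entry should now be the JDK's bin directory,
# so `java` and `hadoop` resolve before any system copies.
echo "$PATH" | cut -d: -f1
echo "$HADOOP_CLASSPATH"
```

After `source .bashrc`, `java -version` and `hadoop` should both resolve from these directories.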
- Run the following commands:
$source .bashrc
$hadoop
$nano /etc/hosts : to rename the machine if needed
$hadoop
- Install ssh and rsync:
$sudo apt-get install ssh
$sudo apt-get install rsync
- Run the wordcount example on the local machine: create a .txt file with a few words
for the application to process:
$mkdir input
$cd input
$ nano quynh.txt
$cd ..
$hadoop jar hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.2.jar wordcount input output
$cat output/*
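To see what the job computes without Hadoop, the same word count can be sketched as a plain shell pipeline (a toy equivalent, not the MapReduce implementation; the sample sentence is made up):

```shell
#!/bin/sh
# Toy word count: split on whitespace, group identical words, count each group.
mkdir -p input
printf 'hadoop counts words and hadoop scales\n' > input/quynh.txt

# tr puts one word per line; sort groups duplicates; uniq -c counts them.
tr -s ' ' '\n' < input/quynh.txt | sort | uniq -c | sort -rn
```

The MapReduce job produces the same word/count pairs, just distributed across map and reduce tasks.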
- Set up Hadoop MapReduce in pseudo-distributed mode:
+ Edit the configuration: $nano hadoop/etc/hadoop/core-site.xml
<property>
<name>fs.defaultFS</name>
<value>hdfs://localhost:9000</value>
</property>
+ Run: $ssh localhost
+ Format the namenode: $hdfs namenode -format
+ Start the system: $hadoop/sbin/start-dfs.sh
+ Create the data directories for the namenode and datanode:
mkdir -p ~/hadoop/hdfs/namenode
mkdir -p ~/hadoop/hdfs/datanode
+ Edit the configuration: $nano hadoop/etc/hadoop/hdfs-site.xml
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>/home/<username>/hadoop/hdfs/namenode</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>/home/<username>/hadoop/hdfs/datanode</value>
</property>
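Note that Hadoop only reads these properties when they sit inside the `<configuration>` root element, which the fragments above omit. Assembled for a user named student (matching the path used in .bashrc earlier; substitute your own username), hdfs-site.xml might look like:

```xml
<?xml version="1.0"?>
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/home/student/hadoop/hdfs/namenode</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/home/student/hadoop/hdfs/datanode</value>
  </property>
</configuration>
```

The same applies to core-site.xml above: its `fs.defaultFS` property must also be wrapped in `<configuration>`.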
+ Start the system: $hadoop/sbin/start-dfs.sh
+ Open a web browser and go to: localhost:50070
+ Create the data directory: $hdfs dfs -mkdir /data
+ Create the input directory inside data: $hdfs dfs -mkdir /data/input
+ Upload the file quynh.txt into input: $ hdfs dfs -put input/* /data/input
- Run the Java application on the cluster:
+ Edit the configuration: $nano hadoop/etc/hadoop/mapred-site.xml
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
+ Edit the configuration: $nano hadoop/etc/hadoop/yarn-site.xml
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
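As with hdfs-site.xml, both fragments belong inside a `<configuration>` root element. In Hadoop 2.7.2 the distribution ships only mapred-site.xml.template, so you may first need to copy it ($cp hadoop/etc/hadoop/mapred-site.xml.template hadoop/etc/hadoop/mapred-site.xml). The two complete files would then look like:

```xml
<!-- mapred-site.xml: run MapReduce jobs on YARN instead of locally -->
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>

<!-- yarn-site.xml: enable the shuffle service NodeManagers use
     to serve map output to reducers -->
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>
```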
+ Start YARN: $hadoop/sbin/start-yarn.sh
+ Open a web browser and go to: localhost:8088
+ Go to http://hadoop.apache.org/docs/r2.7.2/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html
+ Copy the WordCount code and save it as WordCount.java
+ $ hadoop com.sun.tools.javac.Main WordCount.java
+ $ jar cf wc.jar WordCount*.class
+ Run the compiled application: $hadoop jar wc.jar WordCount /data/input /output
• Setup guide: http://hadoop.apache.org/docs/r2.7.2/hadoop-project-dist/hadoop-common/SingleCluster.html
● Create an SSH key
ssh-keygen -t rsa -P ""
● Add the public key to the authorized_keys file
cat $HOME/.ssh/id_rsa.pub >> $HOME/.ssh/authorized_keys
chmod 0600 ~/.ssh/authorized_keys
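sshd is strict about permissions: if authorized_keys is group- or world-readable, key-based login is silently refused. A small sketch (using a temporary scratch file rather than your real key) shows what the chmod 0600 step produces:

```shell
#!/bin/sh
# Demonstrate 0600 (owner read/write only) on a scratch file.
f=$(mktemp)
echo 'ssh-rsa AAAA... user@host' > "$f"   # placeholder line, not a real key
chmod 0600 "$f"
ls -l "$f" | cut -c1-10                   # prints -rw-------
rm -f "$f"
```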
● Add the key to the agent and connect
eval `ssh-agent -s`
ssh-add
ssh localhost