Hadoop introduction
1. Introduction: Hadoop’s history and advantages
2. Architecture in detail
3. Hadoop in industry
 Apache top level project, open-source
implementation of frameworks for reliable,
scalable, distributed computing and data
storage.
 It is a flexible and highly-available
architecture for large scale computation and
data processing on a network of commodity
hardware.
 Designed to answer the question: “How
to process big data with reasonable
cost and time?”
2005: Doug Cutting and Michael J. Cafarella developed
Hadoop to support distribution for the Nutch search
engine project.
The project was funded by Yahoo.
2006: Yahoo gave the project to Apache
Software Foundation.
• 2008 - Hadoop Wins Terabyte Sort Benchmark (sorted 1
terabyte of data in 209 seconds, compared to previous record
of 297 seconds)
• 2009 - Avro and Chukwa became new members of Hadoop
Framework family
• 2010 - Hadoop's HBase, Hive and Pig subprojects completed,
adding more computational power to the Hadoop framework
• 2011 - ZooKeeper Completed
• 2013 - Hadoop 1.1.2 and Hadoop 2.0.3 alpha released;
Ambari, Cassandra and Mahout have been added
• Hadoop:
• an open-source software framework that supports data-
intensive distributed applications, licensed under the
Apache v2 license.
• Goals / Requirements:
• Abstract and facilitate the storage and processing of
large and/or rapidly growing data sets
• Structured and non-structured data
• Simple programming models
• High scalability and availability
• Use commodity (cheap!) hardware with little redundancy
• Fault-tolerance
• Move computation rather than data
• Distributed, with some centralization
• Main nodes of cluster are where most of the computational power
and storage of the system lies
• Main nodes run TaskTracker to accept and reply to MapReduce
tasks, and also DataNode to store needed blocks as close as
possible
• Central control node runs NameNode to keep track of HDFS
directories & files, and JobTracker to dispatch compute tasks to
TaskTracker
• Written in Java, also supports Python and Ruby
• Hadoop Distributed Filesystem
• Tailored to needs of MapReduce
• Targeted towards many reads of filestreams
• Writes are more costly
• High degree of data replication (3x by default)
• No need for RAID on normal nodes
• Large blocksize (64MB)
• Location awareness of DataNodes in network
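For orientation, a minimal client-side read against the HDFS Java API might look like the sketch below; the NameNode URI, port and file path are placeholders, not values from the slides:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IOUtils;

    public class HdfsReadExample {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // "fs.default.name" points at the NameNode (Hadoop 1.x property name);
        // the URI below is a placeholder for a real cluster address.
        conf.set("fs.default.name", "hdfs://namenode:9000");
        FileSystem fs = FileSystem.get(conf);

        // Open a file that HDFS stores as large (64 MB) replicated blocks;
        // the client reads each block from the nearest replica.
        FSDataInputStream in = fs.open(new Path("/data/example.txt"));
        try {
          IOUtils.copyBytes(in, System.out, 4096, false);  // stream contents to stdout
        } finally {
          in.close();
        }
      }
    }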
NameNode:
• Stores metadata for the files, like the directory structure of a
typical FS.
• The server holding the NameNode instance is quite crucial,
as there is only one.
• Transaction log for file deletes/adds, etc. Does not use
transactions for whole blocks or file-streams, only metadata.
• Handles creation of more replica blocks when necessary
after a DataNode failure
DataNode:
• Stores the actual data in HDFS
• Can run on any underlying filesystem (ext3/4, NTFS, etc)
• Notifies NameNode of what blocks it has
• NameNode manages replica placement: by default one replica on the
local node and two on a remote rack (see replication strategy below)
MapReduce Engine:
• JobTracker & TaskTracker
• JobTracker splits up data into smaller tasks (“Map”) and
sends them to the TaskTracker process on each node
• TaskTracker reports back to the JobTracker node with job
progress, sends data (“Reduce”) or requests
new jobs
• None of these components are necessarily limited to using
HDFS
• Many other distributed file-systems with quite different
architectures work
• Many other software packages besides Hadoop's
MapReduce platform make use of HDFS
• Hadoop is in use at most organizations that handle big data:
o Yahoo!
o Facebook
o Amazon
o Netflix
o Etc…
• Some examples of scale:
o Yahoo!’s Search Webmap runs on a 10,000-core Linux
cluster and powers Yahoo! Web search
o FB’s Hadoop cluster hosts 100+ PB of data (July, 2012)
& growing at ½ PB/day (Nov, 2012)
Three main applications of Hadoop:
• Advertisement (Mining user behavior to generate
recommendations)
• Searches (group related documents)
• Security (search for uncommon patterns)
• Non-realtime large dataset computing:
o NY Times was dynamically generating PDFs of articles
from 1851-1922
o Wanted to pre-generate & statically serve articles to
improve performance
o Using Hadoop + MapReduce running on EC2 / S3,
converted 4TB of TIFFs into 11 million PDF articles in
24 hrs
• Design requirements:
o Integrate display of email, SMS and
chat messages between pairs and
groups of users
o Strong control over whom users
receive messages from
o Suited for production use by
500 million people immediately after
launch
o Stringent latency & uptime
requirements
• System requirements
o High write throughput
o Cheap, elastic storage
o Low latency
o High consistency (within a
single data center is good
enough)
o Disk-efficient sequential
and random read
performance
• Classic alternatives
o These requirements are typically met using a large MySQL cluster &
caching tiers using Memcached
o Content on HDFS could be loaded into MySQL or Memcached
if needed by web tier
• Problems with previous solutions
o MySQL has low random write throughput… BIG problem for
messaging!
o Difficult to scale MySQL clusters rapidly while maintaining
performance
o MySQL clusters have high management overhead, require
more expensive hardware
• Facebook’s solution
o Hadoop + HBase as foundations
o Improve & adapt HDFS and HBase to scale to FB’s workload
and operational considerations
 Major concern was availability: NameNode is SPOF &
failover times are at least 20 minutes
 Proprietary “AvatarNode”: eliminates SPOF, makes HDFS
safe to deploy even with 24/7 uptime requirement
 Performance improvements for the realtime workload: RPC
timeouts tuned to fail fast and try a different DataNode
 Distributed File System
 Fault Tolerance
 Open Data Format
 Flexible Schema
 Queryable Database
 Need to process Multi Petabyte Datasets
 Data may not have strict schema
 Expensive to build reliability in each
application
 Nodes fail every day
 Need common infrastructure
 Very Large Distributed File System
 Assumes Commodity Hardware
 Optimized for Batch Processing
 Runs on heterogeneous OS
 A Block Server
◦ Stores data in the local file system
◦ Stores metadata of a block (e.g. checksum)
◦ Serves data and meta-data to clients
 Block Report
◦ Periodically sends a report of all existing
blocks to NameNode
 Facilitate Pipelining of Data
◦ Forwards data to other specified DataNodes
 Replication Strategy
◦ One replica on local node
◦ Second replica on a remote rack
◦ Third replica on same remote rack
◦ Additional replicas are randomly placed
 Clients read from nearest replica
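The replication factor behind this placement strategy is configurable; a minimal sketch, assuming the standard dfs.replication property and the FileSystem.setReplication API (the file path and values are hypothetical):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ReplicationExample {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.setInt("dfs.replication", 3);   // cluster-wide default: 3 replicas per block

        FileSystem fs = FileSystem.get(conf);
        // Raise the replication factor for one (hypothetical) hot file:
        fs.setReplication(new Path("/data/important.log"), (short) 5);
      }
    }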
 Use Checksums to validate data – CRC32
 File Creation
◦ Client computes a checksum per 512 bytes
◦ DataNode stores the checksum
 File Access
◦ Client retrieves the data and checksum from
DataNode
◦ If validation fails, client tries other replicas
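To make the scheme concrete, here is a small sketch (not Hadoop source code) that computes one CRC32 per 512-byte chunk, the same granularity described above; the input data is illustrative:

    import java.util.zip.CRC32;

    public class ChunkChecksums {
      // One CRC32 per 512-byte chunk, mirroring the per-chunk checksums
      // the HDFS client computes at file creation and DataNodes store.
      static long[] chunkChecksums(byte[] data) {
        final int chunkSize = 512;
        int numChunks = (data.length + chunkSize - 1) / chunkSize;
        long[] sums = new long[numChunks];
        CRC32 crc = new CRC32();
        for (int i = 0; i < numChunks; i++) {
          crc.reset();
          int off = i * chunkSize;
          int len = Math.min(chunkSize, data.length - off);
          crc.update(data, off, len);
          sums[i] = crc.getValue();   // re-checked on read; a mismatch sends the client to another replica
        }
        return sums;
      }

      public static void main(String[] args) {
        long[] sums = chunkChecksums("some block data".getBytes());
        System.out.println(sums.length + " chunk checksum(s), first = " + sums[0]);
      }
    }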
 Client retrieves a list of DataNodes on which
to place replicas of a block
 Client writes block to the first DataNode
 The first DataNode forwards the data to the
next DataNode in the Pipeline
 When all replicas are written, the client moves
on to write the next block in the file
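From the client's side this pipeline is transparent; a minimal write might look like the sketch below (path and payload are hypothetical), with replica placement and forwarding handled inside the DFS client:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsWriteExample {
      public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        // The NameNode chooses the target DataNodes; the DFS client pipelines
        // the data through them block by block.
        FSDataOutputStream out = fs.create(new Path("/data/output.txt"));
        try {
          out.writeBytes("hello hdfs\n");
        } finally {
          out.close();   // returns once the replicas of the last block are written
        }
      }
    }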
 MapReduce programming model
◦ Framework for distributed processing of large data
sets
◦ Pluggable user code runs in generic framework
 Common design pattern in data processing
◦ cat * | grep | sort | uniq -c | cat > file
◦ input | map | shuffle | reduce | output
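As a concrete instance of this pattern, below is a word-count sketch against the classic org.apache.hadoop.mapred API, the API served by the JobClient/JobTracker stack described in the following slides; input and output paths come from the command line and the job name is arbitrary:

    import java.io.IOException;
    import java.util.Iterator;
    import java.util.StringTokenizer;

    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.*;

    public class WordCount {

      // map: (offset, line) -> (word, 1)
      public static class Map extends MapReduceBase
          implements Mapper<LongWritable, Text, Text, IntWritable> {
        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(LongWritable key, Text value,
                        OutputCollector<Text, IntWritable> output, Reporter reporter)
            throws IOException {
          StringTokenizer itr = new StringTokenizer(value.toString());
          while (itr.hasMoreTokens()) {
            word.set(itr.nextToken());
            output.collect(word, one);
          }
        }
      }

      // reduce: (word, [1,1,...]) -> (word, count); the shuffle groups by key in between
      public static class Reduce extends MapReduceBase
          implements Reducer<Text, IntWritable, Text, IntWritable> {
        public void reduce(Text key, Iterator<IntWritable> values,
                           OutputCollector<Text, IntWritable> output, Reporter reporter)
            throws IOException {
          int sum = 0;
          while (values.hasNext()) {
            sum += values.next().get();
          }
          output.collect(key, new IntWritable(sum));
        }
      }

      public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(WordCount.class);
        conf.setJobName("wordcount");
        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(IntWritable.class);
        conf.setMapperClass(Map.class);
        conf.setReducerClass(Reduce.class);
        conf.setInputFormat(TextInputFormat.class);
        conf.setOutputFormat(TextOutputFormat.class);
        FileInputFormat.setInputPaths(conf, new Path(args[0]));
        FileOutputFormat.setOutputPath(conf, new Path(args[1]));
        JobClient.runJob(conf);   // submits to the JobTracker and waits for completion
      }
    }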
 Log processing
 Web search indexing
 Ad-hoc queries
 MapReduce Component
◦ JobClient
◦ JobTracker
◦ TaskTracker
◦ Child
 Job Creation/Execution Process
 JobClient
◦ Submit job
 JobTracker
◦ Manage and schedule job, split job into tasks
 TaskTracker
◦ Start and monitor the task execution
 Child
◦ The process that actually executes the task
 Protocol
◦ JobClient <--- JobSubmissionProtocol ---> JobTracker
◦ TaskTracker <--- InterTrackerProtocol ---> JobTracker
◦ TaskTracker <--- TaskUmbilicalProtocol ---> Child
 JobTracker implements both the JobSubmissionProtocol and the
InterTrackerProtocol and acts as the server in both IPCs
 TaskTracker implements the
TaskUmbilicalProtocol; Child gets task
information and reports task status through it.
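For illustration, a hedged sketch of how such an IPC protocol is declared and proxied in Hadoop 1.x follows; the PingProtocol interface and its ping method are invented for this example, while the versionID field and the RPC.getProxy call mirror the TaskUmbilicalProtocol usage shown near the end of this walkthrough:

    import java.io.IOException;
    import java.net.InetSocketAddress;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.ipc.RPC;
    import org.apache.hadoop.ipc.VersionedProtocol;

    // Toy protocol: Hadoop 1.x IPC interfaces extend VersionedProtocol and
    // expose a versionID so client and server can check compatibility.
    interface PingProtocol extends VersionedProtocol {
      long versionID = 1L;
      String ping(String message) throws IOException;   // invented method, for illustration only
    }

    public class PingClient {
      static String callPing(InetSocketAddress serverAddr, Configuration conf) throws IOException {
        // Same call shape as RPC.getProxy(TaskUmbilicalProtocol.class, ...) used by the Child.
        PingProtocol proxy = (PingProtocol) RPC.getProxy(
            PingProtocol.class, PingProtocol.versionID, serverAddr, conf);
        return proxy.ping("hello jobtracker");
      }
    }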
 Check input and output, e.g. check whether the
output directory already exists
◦ job.getInputFormat().validateInput(job);
◦ job.getOutputFormat().checkOutputSpecs(fs, job);
 Get InputSplits, sort, and write output to
HDFS
◦ InputSplit[] splits = job.getInputFormat().
getSplits(job, job.getNumMapTasks());
◦ writeSplitsFile(splits, out); // out is
$SYSTEMDIR/$JOBID/job.split
 The jar file and configuration file will be
uploaded to HDFS system directory
◦ job.write(out); // out is $SYSTEMDIR/$JOBID/job.xml
 JobStatus status =
jobSubmitClient.submitJob(jobId);
◦ This is an RPC invocation, jobSubmitClient is a
proxy created in the initialization
 JobTracker.submitJob(jobID) <-- receive RPC
invocation request
 JobInProgress job = new JobInProgress(jobId,
this, this.conf)
 Add the job into Job Queue
◦ jobs.put(job.getProfile().getJobId(), job);
◦ jobsByPriority.add(job);
◦ jobInitQueue.add(job);
 Sort by priority
◦ resortPriority();
◦ compare the JobPriority first, then compare the
JobSubmissionTime
 Wake JobInitThread
◦ jobInitQueue.notifyAll();
◦ job = jobInitQueue.remove(0);
◦ job.initTasks();
 JobInProgress(String jobid, JobTracker
jobtracker, JobConf default_conf);
 JobInProgress.initTasks()
◦ DataInputStream splitFile = fs.open(new
Path(conf.get("mapred.job.split.file")));
// mapred.job.split.file -->
$SYSTEMDIR/$JOBID/job.split
 splits = JobClient.readSplitFile(splitFile);
 numMapTasks = splits.length;
 maps[i] = new TaskInProgress(jobId, jobFile,
splits[i], jobtracker, conf, this, i);
 reduces[i] = new TaskInProgress(jobId,
jobFile, splits[i], jobtracker, conf, this, i);
 JobStatus --> JobStatus.RUNNING
 Task getNewTaskForTaskTracker(String
taskTracker)
 Compute the maximum tasks that can be
running on taskTracker
◦ int maxCurrentMapTasks = tts.getMaxMapTasks();
◦ int maxMapLoad = Math.min(maxCurrentMapTasks,
(int) Math.ceil((double)
remainingMapLoad / numTaskTrackers));
 int numMaps = tts.countMapTasks(); // number of
currently running map tasks
 If numMaps < maxMapLoad, then more tasks
can be allocated, then based on priority, pick
the first job from the jobsByPriority Queue,
create a task, and return to TaskTracker
◦ Task t = job.obtainNewMapTask(tts,
numTaskTrackers);
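Plugging hypothetical numbers into the formula above shows how the cap behaves (all values below are made up for illustration):

    public class MapLoadExample {
      public static void main(String[] args) {
        // Hypothetical numbers: 4 map slots on this TaskTracker, 10 pending
        // map tasks in the job, 3 live TaskTrackers in the cluster.
        int maxCurrentMapTasks = 4;
        int remainingMapLoad = 10;
        int numTaskTrackers = 3;

        int maxMapLoad = Math.min(maxCurrentMapTasks,
            (int) Math.ceil((double) remainingMapLoad / numTaskTrackers));

        // ceil(10 / 3) = 4, so maxMapLoad = min(4, 4) = 4: with fewer than 4
        // maps already running, this tracker is offered another map task.
        System.out.println("maxMapLoad = " + maxMapLoad);
      }
    }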
 initialize()
◦ Remove original local directory
◦ RPC initialization
 TaskReportServer = RPC.getServer(this, bindAddress,
tmpPort, max, false, this, fConf);
 InterTrackerProtocol jobClient = (InterTrackerProtocol)
RPC.waitForProxy(InterTrackerProtocol.class,
InterTrackerProtocol.versionID, jobTrackAddr,
this.fConf);
 run();
 offerService();
 TaskTracker talks to the JobTracker with a
heartbeat message periodically
◦ HeartbeatResponse heartbeatResponse =
transmitHeartBeat();
 TaskTracker.localizeJob(TaskInProgress tip);
 launchTasksForJob(tip, new
JobConf(rjob.jobFile));
◦ tip.launchTask(); // TaskTracker.TaskInProgress
◦ tip.localizeTask(task); // create folder, symbolic link
◦ runner = task.createRunner(TaskTracker.this);
◦ runner.start(); // start TaskRunner thread
 TaskRunner.run();
◦ Configure the child process’s JVM parameters, e.g.
classpath, taskid, taskReportServer’s address & port
◦ Start Child Process
 runChild(wrappedCommand, workDir, taskid);
 Create RPC Proxy, and execute RPC invocation
◦ TaskUmbilicalProtocol umbilical =
(TaskUmbilicalProtocol)
RPC.getProxy(TaskUmbilicalProtocol.class,
TaskUmbilicalProtocol.versionID, address,
defaultConf);
◦ Task task = umbilical.getTask(taskid);
 task.run(); // mapTask / reduceTask.run