2. INTRODUCTION
Hadoop is a framework that allows for the distributed processing of
large data sets across clusters of commodity computers using a simple
programming model.
It is an open-source data management framework with scale-out storage and
distributed processing.
The objective of the tool is to support running applications on
Big Data.
It is distributed as an open-source set of tools under the Apache license.
3. BigData
• Big data is a term used to describe the voluminous amount of unstructured
and semi-structured data a company creates.
• Data that would take too much time and cost too much money to load into a
relational database for analysis.
• Big data doesn't refer to any specific quantity; the term is often used when
speaking about petabytes and exabytes of data.
4. Characteristics of Big Data
• Volume – data quantity
• Velocity – data speed
• Variety – data types
5. What Caused The Problem?
Year      Standard Hard Drive Size (MB)
1990      1,370
2010      1,000,000

Year      Data Transfer Rate (MB/s)
1990      4.4
2010      100
7. So, What Is The Problem?
The transfer speed is around 100 MB/s.
A standard disk is 1 terabyte.
Time to read the entire disk ≈ 10,000 seconds, or nearly 3 hours!
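The arithmetic behind that estimate (assuming 1 TB ≈ 1,000,000 MB):

\[ t = \frac{1{,}000{,}000\ \text{MB}}{100\ \text{MB/s}} = 10{,}000\ \text{s} \approx 2.8\ \text{hours} \]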
Increasing processing speed may not be as helpful, because
• Network bandwidth is now more of a limiting factor
• Physical limits of processor chips have been reached
8. So What do We Do?
• The obvious solution is to use multiple processors to solve the same
problem by fragmenting it into pieces.
• Imagine we had 100 drives, each holding one hundredth of the data.
Working in parallel, we could read the data in under two minutes.
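Checking that figure (again assuming roughly 100 MB/s per drive and 1 TB of data in total, so about 10,000 MB per drive):

\[ t = \frac{10{,}000\ \text{MB}}{100\ \text{MB/s}} = 100\ \text{s} \approx 1.7\ \text{minutes} \]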
11. Hadoop Core Components
There are two core parts of Hadoop:
HDFS (Hadoop Distributed File System) – storage
MapReduce – processing
12. MapReduce
Hadoop limits the amount of communication that can be performed by
the processes, as each individual record is processed by a task in isolation
from the others.
By restricting the communication between nodes, Hadoop makes the
distributed system much more reliable. Individual node failures can be
worked around by restarting tasks on other machines.
The other workers continue to operate as though nothing went wrong,
leaving the challenging aspects of partially restarting the program to the
underlying Hadoop layer.
Map: (in_key, in_value) → list(out_key, intermediate_value)
Reduce: (out_key, list(intermediate_value)) → list(out_value)
13. What is MapReduce?
MapReduce is a programming model
Programs written in this functional style are automatically parallelized and executed
on a large cluster of commodity machines
MapReduce is an associated implementation for processing and generating large
data sets.
MapReduce consists of two user-defined functions:
MAP – a map function that processes a key/value pair to generate a set of
intermediate key/value pairs.
REDUCE – a reduce function that merges all intermediate values associated
with the same intermediate key.
14. The Programming Model Of MapReduce
Map, written by the user, takes an input pair and produces a set of
intermediate key/value pairs. The MapReduce library groups together
all intermediate values associated with the same intermediate key I and
passes them to the Reduce function.
15. The Reduce function, also written by the user, accepts an intermediate key I
and a set of values for that key. It merges these values together to form a
possibly smaller set of values.
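To make the model concrete, here is a minimal word-count sketch against the Hadoop Java MapReduce API; the class names (WordCountMapper, WordCountReducer) are illustrative and not part of the original slides. The map function emits an intermediate (word, 1) pair for every word in its input line, and the reduce function sums the values it receives for each word.

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

// Map: (line offset, line of text) -> list of (word, 1)
public class WordCountMapper
        extends Mapper<LongWritable, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        StringTokenizer itr = new StringTokenizer(value.toString());
        while (itr.hasMoreTokens()) {
            word.set(itr.nextToken());
            context.write(word, ONE);   // emit intermediate (word, 1)
        }
    }
}

// Reduce: (word, [1, 1, ...]) -> (word, total count)
class WordCountReducer
        extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable v : values) {
            sum += v.get();             // merge all values for the same key
        }
        context.write(key, new IntWritable(sum));
    }
}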
16. How MapReduce Works
A MapReduce job usually splits the input data-set into independent chunks,
which are processed by the map tasks in a completely parallel manner.
The framework sorts the outputs of the maps, which are then input to the
reduce tasks.
Typically both the input and the output of the job are stored in a
filesystem. The framework takes care of scheduling tasks, monitoring them,
and re-executing failed tasks.
A MapReduce job is a unit of work that the client wants to be performed: it
consists of the input data, the MapReduce program, and configuration
information. Hadoop runs the job by dividing it into tasks, of which there
are two types: map tasks and reduce tasks
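The driver below is a sketch of how a client defines such a job (input data, the MapReduce program, and configuration) and submits it to Hadoop; WordCountMapper and WordCountReducer refer to the earlier illustrative classes, and the input and output paths are taken from the command line.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCountDriver {
    public static void main(String[] args) throws Exception {
        // A job = input data + MapReduce program + configuration
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCountDriver.class);

        job.setMapperClass(WordCountMapper.class);
        job.setReducerClass(WordCountReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        // The input is split into chunks processed by map tasks in parallel;
        // the framework sorts the map output and feeds it to the reduce tasks.
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}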
18. Fault Tolerance
There are two types of nodes that control the job execution process:
a jobtracker and a number of tasktrackers.
The jobtracker coordinates all the jobs run on the system by scheduling
tasks to run on tasktrackers.
Tasktrackers run tasks and send progress reports to the jobtracker, which
keeps a record of the overall progress of each job.
If a task fails, the jobtracker can reschedule it on a different tasktracker.
24. Combiner Functions
• Many MapReduce jobs are limited by the bandwidth available on the
cluster.
• In order to minimize the data transferred between the map and reduce tasks,
combiner functions are introduced.
• Hadoop allows the user to specify a combiner function to be run on the map
output—the combiner function’s output forms the input to the reduce
function.
• Combiner functions can help cut down the amount of data shuffled between
the map and reduce tasks.
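For example, in the word-count sketch used earlier, the reduce operation (summing counts) is associative and commutative, so the same reducer class can be reused as a combiner; the single, illustrative line below, added to the WordCountDriver sketch, enables it.

// Added to the illustrative WordCountDriver, after setReducerClass:
// the combiner runs a local reduce over each map task's output,
// pre-aggregating (word, 1) pairs before they are shuffled to the reducers.
job.setCombinerClass(WordCountReducer.class);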
25. Hadoop Streaming:
• Hadoop provides an API to MapReduce that allows you to write your
map and reduce functions in languages other than Java.
• Hadoop Streaming uses Unix standard streams as the interface
between Hadoop and your program, so you can use any language
that can read standard input and write to standard output to write
your MapReduce program.
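As a minimal, hypothetical illustration of that contract (kept in Java for consistency with the earlier sketches, although Streaming is more commonly used with scripting languages), the following program reads lines from standard input and writes tab-separated key/value pairs to standard output, which is the default format Hadoop Streaming expects from a mapper.

import java.io.BufferedReader;
import java.io.InputStreamReader;

// A streaming mapper only has to read lines from standard input and
// write tab-separated key/value pairs to standard output.
public class StreamingWordCountMapper {
    public static void main(String[] args) throws Exception {
        BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
        String line;
        while ((line = in.readLine()) != null) {
            for (String word : line.trim().split("\\s+")) {
                if (!word.isEmpty()) {
                    System.out.println(word + "\t1");  // emit (word, 1)
                }
            }
        }
    }
}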
26. Hadoop Pipes:
• Hadoop Pipes is the name of the C++ interface to Hadoop MapReduce.
• Unlike Streaming, which uses standard input and output to
communicate with the map and reduce code, Pipes uses sockets as the
channel over which the tasktracker communicates with the process
running the C++ map or reduce function. JNI is not used.
27. HADOOP DISTRIBUTED FILESYSTEM (HDFS)
Filesystems that manage the storage across a network of machines are
called distributed filesystems.
Hadoop comes with a distributed filesystem called HDFS, which stands for
Hadoop Distributed Filesystem.
HDFS, the Hadoop Distributed File System, is a distributed file system
designed to hold very large amounts of data (terabytes or even petabytes),
and provide high-throughput access to this information.
28. Namenodes and Datanodes
An HDFS cluster has two types of node operating in a master-worker
pattern: a namenode (the master) and a number of datanodes
(the workers).
The namenode manages the filesystem namespace. It maintains the
filesystem tree and the metadata for all the files and directories in the
tree.
Datanodes are the workhorses of the filesystem. They store and
retrieve blocks when they are told to (by clients or the namenode), and
they report back to the namenode periodically with lists of blocks that
they are storing.
29. Without the namenode, the filesystem cannot be used. In fact, if the
machine running the namenode were obliterated, all the files on the
filesystem would be lost since there would be no way of knowing how
to reconstruct the files from the blocks on the datanodes.
It is important to make the namenode resilient to failure, and Hadoop
provides two mechanisms for this:
1. The first is to back up the files that make up the persistent state of the
filesystem metadata. Hadoop can be configured so that the namenode
writes its persistent state to multiple filesystems.
2. Another solution is to run a secondary namenode. The secondary
namenode usually runs on a separate physical machine, since it
requires plenty of CPU and as much memory as the namenode to
perform its periodic merge of the namespace image with the edit log.
It keeps a copy of the merged namespace image, which can be used in
the event of the namenode failing.
30. File System Namespace
HDFS supports a traditional hierarchical file organization. A user or an
application can create and remove files, move a file from one directory
to another, rename a file, create directories and store files inside these
directories.
HDFS does not yet implement user quotas or access permissions. HDFS
does not support hard links or soft links. However, the HDFS
architecture does not preclude implementing these features.
The Namenode maintains the file system namespace. Any change to
the file system namespace or its properties is recorded by the
Namenode. An application can specify the number of replicas of a file
that should be maintained by HDFS. The number of copies of a file is
called the replication factor of that file. This information is stored by
the Namenode.
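A brief sketch of these namespace operations through the Hadoop Java client API (org.apache.hadoop.fs.FileSystem); the paths and the replication factor of 3 here are purely illustrative.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsNamespaceExample {
    public static void main(String[] args) throws Exception {
        // Connects to the filesystem named in the Hadoop configuration
        // (fs.defaultFS), i.e. the namenode of an HDFS cluster.
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        Path dir = new Path("/user/demo/reports");        // illustrative path
        Path file = new Path(dir, "summary.txt");

        fs.mkdirs(dir);                                    // create a directory
        try (FSDataOutputStream out = fs.create(file)) {   // create and write a file
            out.writeUTF("hello hdfs");
        }
        fs.setReplication(file, (short) 3);                // per-file replication factor
        fs.rename(file, new Path(dir, "summary-old.txt")); // rename within the namespace

        fs.close();
    }
}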