The document provides an overview of big data analytics and Hadoop. It defines big data and the challenges of working with large, complex datasets. It then discusses Hadoop as an open-source framework for distributed storage and processing of big data across clusters of commodity hardware. Key components of Hadoop include HDFS for storage, MapReduce for parallel processing, and other tools such as Pig, Hive, and HBase. The document gives examples of how Hadoop is used by many large companies and describes the architecture and basic functions of HDFS and MapReduce.
2. Agenda
Big Data – Concepts overview
Analytics – Concepts overview
Hadoop – Concepts overview
HDFS – Concepts overview; Data Flow – Read & Write Operation
MapReduce – Concepts overview; WordCount Program
Use Cases
Landscape
Hadoop Features & Summary
3. What is Big Data?
Big data is data that is too large, complex and dynamic for any conventional data tools to capture, store, manage and analyze.
4. Challenges of Big Data
• Storage (~ petabytes)
• Processing (in a timely manner)
• Variety of data (structured, semi-structured, un-structured)
• Cost
5. Big Data Analytics
Big data analytics is the process of examining large amounts of data of a variety of types (big data) to uncover hidden patterns, unknown correlations and other useful information.
Big Data Analytics Solutions
There are many different Big Data Analytics solutions on the market:
Tableau – visualization tools
SAS – statistical computing
IBM and Oracle – a range of tools for Big Data analysis
Revolution – statistical computing
R – open-source tool for statistical computing
6. What is Hadoop?
Open-source data storage and processing API
Massively scalable, automatically parallelizable
Based on work from Google: GFS + MapReduce + BigTable
Current distributions based on open-source and vendor work:
Apache Hadoop
Cloudera – CDH4
Hortonworks
MapR
AWS
Windows Azure HDInsight
7. Why Use Hadoop?
Cheaper – scales to petabytes or more
Faster – parallel data processing
Better – suited for particular types of Big Data problems
9. Comparing: RDBMS vs. Hadoop

                      Traditional RDBMS          Hadoop / MapReduce
Data Size             Gigabytes (Terabytes)      Petabytes (Exabytes)
Access                Interactive and Batch      Batch – NOT Interactive
Updates               Read / Write many times    Write once, Read many times
Structure             Static Schema              Dynamic Schema
Integrity             High (ACID)                Low
Scaling               Nonlinear                  Linear
Query Response Time   Can be near immediate      Has latency (due to batch processing)
10. Where is Hadoop used?

Industry         Use Cases
Technology       Search; People you may know; Movie recommendations
Banks            Fraud detection; Regulatory; Risk management
Media, Retail    Marketing analytics; Customer service; Product recommendations
Manufacturing    Preventive maintenance
11. Companies Using Hadoop
Search – Yahoo, Amazon, Zvents
Log processing – Facebook, Yahoo, ContextWeb, Joost, Last.fm
Recommendation systems – Facebook, LinkedIn
Data warehouse – Facebook, AOL
Video & image analysis – New York Times, Eyealike
……… almost in every domain!
12. Hadoop is a set of Apache frameworks and more…
Data storage (HDFS)
  Runs on commodity hardware (usually Linux)
  Horizontally scalable
Processing (MapReduce)
  Parallelized (scalable) processing
  Fault tolerant
Other tools / frameworks
  Data access: HBase, Hive, Pig, Mahout
  Tools: Hue, Sqoop
  Monitoring: Greenplum, Cloudera
(Diagram layers: Hadoop Core – HDFS, MapReduce API; Data Access; Tools & Libraries; Monitoring & Alerting)
13. Core parts of a Hadoop distribution
HDFS (Storage)
  Redundant (3 copies)
  For large files – large blocks: 64 or 128 MB per block
  Can scale to 1000s of nodes
MapReduce API
  Batch (job) processing
  Distributed and localized to clusters (Map)
  Auto-parallelizable for huge amounts of data
  Fault-tolerant (auto retries)
  Adds high availability and more
Other libraries
  Pig, Hive, HBase, others
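The storage numbers above (large blocks, 3 copies of each) can be illustrated with a small back-of-the-envelope calculation. This is a sketch in plain Python, not Hadoop code; the function name `hdfs_footprint` is invented for illustration:

```python
import math

def hdfs_footprint(file_size_mb, block_size_mb=128, replication=3):
    """Estimate how HDFS would store a file: the file is split into
    fixed-size blocks, and each block is kept in `replication` copies
    (3 by default), so raw cluster storage is a multiple of file size."""
    blocks = math.ceil(file_size_mb / block_size_mb)
    total_storage_mb = file_size_mb * replication
    return blocks, total_storage_mb

# A 1 GB (1024 MB) file with 128 MB blocks:
blocks, storage = hdfs_footprint(1024)
print(blocks, storage)  # 8 blocks, 3072 MB of raw cluster storage
```

The same file under the older 64 MB default block size would split into 16 blocks but still consume the same 3072 MB of raw storage, since replication, not block size, drives the storage cost.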
14. Hadoop Cluster: HDFS (Physical) Storage
One Name Node (plus a Secondary Name Node), many Data Nodes (Data Node 1, Data Node 2, Data Node 3, …)
• The Name Node hosts a web site for viewing cluster information
• V2 Hadoop uses multiple Name Nodes for HA
• 3 copies of each block by default
• Work with data in HDFS using common Linux-style shell commands
• Block size is 64 or 128 MB
18. HDFS: Architecture
Master: NameNode
Slaves: a bunch of DataNodes
(Diagram – HDFS layers: the NameNode manages the Namespace (NS) and Block Management layers; the DataNodes provide Block Storage.)
19. HDFS: Basic Features
Highly fault-tolerant
High throughput
Suitable for applications with large data sets
Streaming access to file system data
Can be built out of commodity hardware
20. HDFS Write (1/2)
(Diagram: Client, Name Node, Data Nodes A–D, blocks A1–A4)
1. Client contacts the NameNode to write data
2. NameNode says write it to these nodes
3. Client sequentially writes blocks to the DataNodes
21. HDFS Write (2/2)
(Diagram: blocks A1–A4 replicated across Data Nodes A–D)
DataNodes replicate the data blocks, orchestrated by the NameNode
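The write flow above can be modeled as a toy simulation: a "NameNode" assigns each block to several distinct "DataNodes". This is only an illustration of the idea that every block ends up on multiple nodes; real HDFS placement is rack-aware and pipeline-based, and the function name `place_replicas` is invented here:

```python
from itertools import cycle

def place_replicas(blocks, datanodes, replication=3):
    """Toy model of HDFS block placement: assign each block to
    `replication` distinct DataNodes, round-robin style.
    Assumes len(datanodes) >= replication."""
    placement = {}
    node_cycle = cycle(datanodes)
    for block in blocks:
        targets = []
        while len(targets) < replication:
            node = next(node_cycle)
            if node not in targets:  # replicas must land on distinct nodes
                targets.append(node)
        placement[block] = targets
    return placement

plan = place_replicas(["A1", "A2", "A3", "A4"], ["A", "B", "C", "D"])
print(plan["A1"])  # ['A', 'B', 'C']
```

As in the diagram, every one of the four blocks ends up on three of the four DataNodes, so the loss of any single node leaves at least two copies of each block.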
22. HDFS Read
(Diagram: Client, Name Node, Data Nodes A–D holding blocks A1–A4)
1. Client contacts the NameNode to read data
2. NameNode says you can find it here
3. Client sequentially reads blocks from the DataNodes
23. HA (High Availability) for NameNode
Active NameNode
  Performs the normal NameNode operations
Standby NameNode
  Maintains the NameNode's data
  Ready to become the active NameNode
(Diagram: DataNodes connected to both the active and standby NameNodes)
24. MapReduce
A MapReduce job consists of two tasks:
  Map task
  Reduce task
Blocks of data distributed across several machines are processed by map tasks in parallel
Results are aggregated in the reducer
Works only on KEY/VALUE pairs
25. MapReduce: Word Count
Can we do word count in parallel?
Input:
  Deer Bear River
  Car Car River
  Deer Car Bear
Map output (one (word, 1) pair per word):
  Deer 1, Bear 1, River 1
  Car 1, Car 1, River 1
  Deer 1, Car 1, Bear 1
Reduce output (counts per word):
  Bear 2, Car 3, Deer 2, River 2
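The word-count flow above can be sketched in plain Python. This is an illustration of the map → shuffle → reduce idea on the slide's data, not actual Hadoop API code (a real WordCount would subclass Hadoop's Mapper and Reducer in Java):

```python
from collections import defaultdict

def map_phase(lines):
    """Map: emit a (word, 1) pair for every word in every input line."""
    for line in lines:
        for word in line.split():
            yield (word, 1)

def shuffle(pairs):
    """Shuffle/sort: group all emitted values by key."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: sum the grouped values for each key."""
    return {word: sum(counts) for word, counts in groups.items()}

lines = ["Deer Bear River", "Car Car River", "Deer Car Bear"]
counts = reduce_phase(shuffle(map_phase(lines)))
print(counts)  # {'Deer': 2, 'Bear': 2, 'River': 2, 'Car': 3}
```

Because each input line is mapped independently, the map phase can run on many machines at once; only the shuffle needs to bring pairs with the same key together, which is exactly how Hadoop parallelizes the job.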
34. Hadoop Features & Summary
Distributed framework for processing and storing data, generally on commodity hardware.
Completely open source and written in Java.
Store anything: unstructured or semi-structured data.
Storage capacity: scales linearly; cost does not grow exponentially.
Data locality and process in your way: code moves to the data.
In MR you specify the actual steps in processing the data and derive the output.
Stream access: process data in any language.
Failure and fault tolerance: detects failure and heals itself. Reliable; data is replicated; failed tasks are rerun; no need to maintain backups of the data.
Cost effective: Hadoop is designed as a scale-out architecture operating on a cluster of commodity PC machines.
The Hadoop framework transparently provides applications with both reliability and data motion.
Primarily used for batch processing, not real-time / transactional user applications.
35. References – Hadoop
Hadoop: The Definitive Guide, Third Edition, by Tom White
http://hadoop.apache.org
http://www.cloudera.com
http://ambuj4bigdata.blogspot.com
http://ambujworld.wordpress.com