The main objective of this workshop is to give the audience hands-on experience with several Hadoop technologies and jump-start their Hadoop journey. In this workshop, you will load data and submit queries using Hadoop! Before jumping into the technology, the founders of DataKitchen review Hadoop and some of its technologies (MapReduce, Hive, Pig, Impala, and Spark), look at performance, and present a rubric for choosing which technology to use when.
NOTE: To complete the hands-on portion in the time allotted, attendees should come with a newly created AWS (Amazon Web Services) account and complete the other prerequisites found on the DataKitchen blog at http://www.datakitchen.io/blog.
Open Data Science Conference Big Data Infrastructure – Introduction to Hadoop with Map Reduce, Pig, and Hive
1. BIG DATA INFRASTRUCTURE – INTRODUCTION TO HADOOP WITH MAP REDUCE, PIG, AND HIVE
Gil Benghiat
Eric Estabrooks
Chris Bergh
OPEN DATA SCIENCE CONFERENCE
BOSTON 2015
@opendatasci
4. Meet DataKitchen
Chris Bergh (Head Chef)
Gil Benghiat (VP Product)
Eric Estabrooks (VP Cloud and Data Services)
Software development and executive experience delivering enterprise software focused on the Marketing and Health Care sectors.
Deep analytic experience: spent the past decade solving analytic challenges.
A new approach to data preparation and production: focused on data analysts and data scientists.
5. Analysts and their teams are spending 60-80% of their time on data preparation and production.
6. This creates an expectation gap
[Diagram: the business customer's expectation is that the analyst's time goes to Analyze and Communicate; the analyst's reality is that most of it goes to Prepare Data.]
The business does not think that analysts are preparing data.
Analysts don't want to prepare data.
7. DataKitchen is on a mission to integrate and organize data to make analysts and data scientists super-powered.
8. Meet the Audience: A few questions
• Who considers themselves:
  • A data scientist?
  • A data analyst?
  • A programmer / scripter?
  • On the business side?
• Who knows SQL – can write a SELECT statement?
• Who has used AWS before today?
10. What Is Apache Hadoop?
• Software framework
• Distributed processing of large-scale datasets
• Cluster of commodity hardware
• Promise of lower cost
• Has many frameworks, modules and projects
http://hadoop.apache.org/
12. Hadoop has been evolving
[Timeline, 2005-2015: Hadoop, Map Reduce, Pig, and Impala appear over the years, overlaid on Google Trends interest in "Big Data".]
13. What is Hadoop good for?
• Problems that are huge and can be run in parallel over immutable data
• NOT OLTP (e.g. the backend of an e-commerce site)
• Providing frameworks to build software
  • Map Reduce
  • Spark
  • Tez
• A backend for visualization tools
16. Test your system in the small
1. Make a small data set
2. Test like this:
$ cat data.txt | map | sort | reduce
17. You can write map reduce jobs in your favorite language
Streaming Interface
• Lets you specify mappers and reducers
• Supports
• Java
• Python
• Ruby
• Unix Shell
• R
• Any executable
Map Reduce “generators”
• Results in map reduce jobs
• PIG
• Hive
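To make the streaming interface concrete, here is a minimal word-count sketch in Python (not from the deck; mapper.py, reducer.py, and data.txt are hypothetical names). Each script only reads stdin and writes stdout, which is all Hadoop Streaming requires:

# mapper.py -- emit "word<TAB>1" for every word on stdin
import sys
for line in sys.stdin:
    for word in line.strip().split():
        print("%s\t1" % word)

# reducer.py -- sum the counts per word; streaming delivers input sorted by key
import sys
current_word, count = None, 0
for line in sys.stdin:
    word, n = line.rsplit("\t", 1)
    if word != current_word:
        if current_word is not None:
            print("%s\t%d" % (current_word, count))
        current_word, count = word, 0
    count += int(n)
if current_word is not None:
    print("%s\t%d" % (current_word, count))

The same pair can be tested in the small, exactly as slide 16 suggests:
$ cat data.txt | python mapper.py | sort | python reducer.py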
18. Applications that lend themselves to map reduce
• Word Count
• PDF Generation (NY Times 11,000,000 articles)
• Analysis of stock market historical data (ROI and standard deviation)
• Geographical Data (Finding intersections, rendering map files)
• Log file querying and analysis
• Statistical machine translation
• Analyzing Tweets
19. Pig
• Pig Latin – the scripting language
• Grunt – the shell for executing Pig commands
http://www.slideshare.net/kevinweil/hadoop-pig-and-twitter-nosql-east-2009
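To give a flavor of Pig Latin, here is a minimal word-count sketch you could run in the Grunt shell (not from the deck; the input file and relation names are illustrative):

lines = LOAD 'data.txt' AS (line:chararray);
words = FOREACH lines GENERATE FLATTEN(TOKENIZE(line)) AS word;
grouped = GROUP words BY word;
counts = FOREACH grouped GENERATE group AS word, COUNT(words) AS n;
DUMP counts;

Each statement builds a relation; nothing executes until DUMP (or STORE) forces the map reduce jobs to run.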
20. This is what it would be in Java
http://www.slideshare.net/kevinweil/hadoop-pig-and-twitter-nosql-east-2009
21. Hive
You write SQL! Well, almost: it is HiveQL.
SELECT *
FROM user
WHERE active = 1;
[Diagram: clients reach Hive via JDBC, SQL Workbench, and Hue; the data lives in AWS S3.]
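As a taste of HiveQL over S3, here is a hedged sketch in the spirit of the census exercise later in the workshop (the schema, column names, and bucket path are assumptions, not the actual workshop script):

CREATE EXTERNAL TABLE census (county STRING, state STRING, population INT)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
LOCATION 's3://YOUR-BUCKET/census/';

SELECT state, SUM(population) AS total_population
FROM census
GROUP BY state;

Because the table is EXTERNAL, dropping it removes only the metadata; the data stays in S3.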
22. Impala
• Uses SQL very similar to HiveQL
• Runs 10-100x faster than Hive Map Reduce
• Runs in memory so it may not scale up as well
• Some batch jobs may run faster on Impala than Hive
• Great for developing your code on a small data set
• Can use interactively with Tableau and other BI tools
23. Spark
• Had a version of SQL called Shark
• Shark has been replaced by Spark SQL
• Hive on Spark is under development
• Spark SQL is faster than Shark
• Runs 100x faster than Hive Map Reduce
• Can use interactively with Tableau and other BI tools
29. Today, we will use EMR to run Hadoop
• EMR = Elastic Map Reduce
• Amazon does almost all of the work to create a cluster
• Offers a subset of modules and projects
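The slides drive cluster creation through the AWS Console, but the same thing can be scripted with the AWS CLI. A hedged sketch using the 2015-era syntax (cluster name, key name, AMI version, and instance settings are illustrative; flag names vary by CLI version):

$ aws emr create-cluster \
    --name "odsc-hadoop-workshop" \
    --ami-version 3.8.0 \
    --applications Name=Hive Name=Pig Name=Impala \
    --ec2-attributes KeyName=datakitchen-training \
    --instance-type m3.xlarge --instance-count 3 \
    --use-default-roles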
37. Let’s Do This!
What do we need?
• AWS Account
• Key (.pem file)
• The data file in the S3 bucket (see the staging sketch below)
What will we do?
• Start Cluster
• MR Hive
• MR Pig
• Impala
• Sum county-level census data by state
Prerequisites and scripts are located at http://www.datakitchen.io/blog
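If the data file is not in your S3 bucket yet, staging it with the AWS CLI looks roughly like this (bucket and file names are hypothetical):

$ aws s3 mb s3://YOUR-BUCKET
$ aws s3 cp census.csv s3://YOUR-BUCKET/census/census.csv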
41. Cluster Options
Cluster Configuration: modify
Tags: defaults
Software Configuration: modify
File System Configuration: defaults
Hardware Configuration: modify
Security and Access: modify
IAM Roles: defaults
Bootstrap Actions: defaults
Steps: defaults
57. Post ODSC Update: An easier way to access Hue (FoxyProxy slowed us down)
For Windows, Unix, and Mac, use ssh to establish a tunnel:
$ ssh -i datakitchen-training.pem -L 8888:localhost:8888 hadoop@ec2-54-152-244-88.compute-1.amazonaws.com
From the browser, go to http://localhost:8888
You may need to fix the permissions on the .pem file:
$ chmod 400 datakitchen-training.pem
With the Cygwin version of ssh, you may have to fix the group of the .pem file before the chmod command:
$ chgrp Users datakitchen-training.pem
58. Post ODSC Update: On Windows, you can use PuTTY to establish a tunnel
1. Download PuTTY.exe to your computer from http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html
2. Start PuTTY.
3. In the Category list, click Session.
4. In the Host Name field, type hadoop@ec2-54-152-244-88.compute-1.amazonaws.com
5. In the Category list, expand Connection > SSH > Auth.
6. For Private key file for authentication, click Browse and select the private key file (datakitchen-training.ppk) used to launch the cluster.
7. In the Category list, expand Connection > SSH, and then click Tunnels.
8. In the Source port field, type 8888.
9. In the Destination field, type localhost:8888.
10. Verify the Local and Auto options are selected.
11. Click Add.
12. Click Open.
13. Click Yes to dismiss the security alert.
Now this will work: http://localhost:8888
72. PIG: Export Our Data
(Update the script with your bucket name.)
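The export script itself appeared as a screenshot; here is a hedged sketch of what a Pig export to S3 looks like (the relation name and bucket path are assumptions):

STORE state_totals INTO 's3://YOUR-BUCKET/census-output' USING PigStorage(',');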
73. IMPALA: From the shell window
Type: impala-shell
> invalidate metadata;
> show tables;
> quit;
You can also type "pig" or "hive" at the command line and run the scripts here, without Hue.
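After invalidate metadata, the Hive tables are visible to Impala, so a query like the hedged census example from slide 21 runs unchanged (the table and columns are assumptions):

> SELECT state, SUM(population) FROM census GROUP BY state;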
76. Recap
Presentation
• Hadoop is an evolving ecosystem of projects
• It is well suited for big data
• Use something else for medium or small data
Doing
• Started a Hadoop cluster via the AWS Console (Web UI)
• Loaded Data
• Wrote some queries
77. Thank you!
To continue the discussion,
contact us at
info@datakitchen.io
gil@datakitchen.io
eestabrooks@datakitchen.io
cbergh@datakitchen.io