2. Audience & Intention
Anyone who wants to know about Spark; no specific prerequisites are required.
This is not a tutorial for learning Spark!
The intention of this presentation is to introduce Spark and give an overview from a general user's perspective. We will not cover concepts specific to developer/programming or administrative aspects.
Sudhakara.st
Mail: sudhakara.st@gmail.com
https://in.linkedin.com/in/sudhakara-st-82820539
3. Agenda
Introduction to Spark
What leads to Spark trending
Spark components
Resilient Distributed Dataset (RDD)
Input to Spark
Benefits of Spark
Spark “Word count” example
Spark vs Hadoop
Conclusion
4. Credits
Content and image sources:
http://spark.apache.org/
https://databricks.com/
Learning Spark (O'Reilly Media) by Holden Karau, Andy Konwinski, Patrick Wendell, and Matei Zaharia
5. Apache: Spark™ is a fast and general engine for large-scale data processing.
Databricks: Spark™ is a powerful open source processing engine built around speed, ease of use, and sophisticated analytics.
Spark is an open source distributed computing engine for data processing and data analytics.
It was originally developed at UC Berkeley in 2009.
6. What leads to Spark trending?
Just-in-time data warehouse.
Today's enterprises have a variety of data: real-time, streaming, batch, and analytics. Spark is designed for that.
Big data is versatile; Spark's execution engine handles that versatility, and its ever-growing set of libraries helps.
Spark brings data processing, analysis, and analytics together on one platform.
Spark significantly simplifies big data processing and hosts an end-to-end platform, from ingest to product.
7. What leads to Spark trending? (continued)
Spark supports a wide range of ecosystem tools & apps.
8. Spark friendly!
Apache Spark is a general-purpose, distributed cluster computing and data processing framework that, like MapReduce in Apache Hadoop, offers powerful abstractions for processing large datasets.
Apache Spark is designed to work seamlessly with Hadoop, Amazon S3, and Cassandra, or as a standalone application.
Supported languages: Scala, Java, and Python.
A rich set of high-level APIs increases user productivity.
Integration with new & existing systems.
11. Spark components (continued)
The Spark core is complemented by a set of powerful, higher-level libraries:
SparkSQL
Spark Streaming
MLlib (for machine learning)
GraphX
Spark itself is written in Scala, and it offers APIs in Scala, Java, and Python.
12. Spark Core
Spark Core is the base engine for large-scale parallel and
distributed data processing. It is responsible for:
memory management and fault recovery
scheduling, distributing, and monitoring jobs and tasks on a cluster
interacting with storage systems
13. SparkSQL
SparkSQL is a Spark component that supports querying data
either via SQL or via the Hive Query Language. It originated
as the Apache Hive port to run on top of Spark (in place of
MapReduce) and is now integrated with the Spark stack. In
addition to providing support for various data sources, it
makes it possible to weave SQL queries with code
transformations which results in a very powerful tool.
Below is an example of a Hive-compatible query:
// sc is an existing SparkContext.
val sqlContext = new org.apache.spark.sql.hive.HiveContext(sc)
sqlContext.sql("CREATE TABLE IF NOT EXISTS src (key INT, value STRING)")
sqlContext.sql("LOAD DATA LOCAL INPATH 'examples/src/main/resources/kv1.txt' INTO TABLE src")
// Queries are expressed in HiveQL
sqlContext.sql("FROM src SELECT key, value").collect().foreach(println)
14. Spark Streaming
Spark Streaming supports real-time processing of streaming data. It is an extension of the core Spark API that enables scalable, high-throughput, fault-tolerant stream processing of live data streams.
Data can be ingested from Kafka, Flume, Twitter, ZeroMQ, Kinesis, or TCP sockets.
Processed data can be pushed out to filesystems, databases, and live dashboards. In fact, you can apply Spark's machine learning and graph processing algorithms to data streams.
15. Spark Streaming (continued)
Spark Streaming receives live input data streams and divides the data into batches, which are then processed by the Spark engine to generate the final stream of results in batches.
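To make the batch model concrete, below is a minimal sketch in the spirit of the network word count from the Spark Streaming programming guide. It is a sketch, not the deck's own example: the host/port localhost:9999 are hypothetical, and it assumes some process writes lines of text to that socket.
import java.util.Arrays;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.function.*;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.*;
import scala.Tuple2;

SparkConf conf = new SparkConf().setMaster("local[2]").setAppName("NetworkWordCount");
// Batch interval of 1 second: the live stream is cut into 1-second batches
JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(1));
JavaReceiverInputDStream<String> lines = jssc.socketTextStream("localhost", 9999);
// The same word-count pipeline as on a static RDD, applied to every batch
JavaDStream<String> words = lines.flatMap(new FlatMapFunction<String, String>() {
  public Iterable<String> call(String x) { return Arrays.asList(x.split(" ")); }
});
JavaPairDStream<String, Integer> counts = words
  .mapToPair(new PairFunction<String, String, Integer>() {
    public Tuple2<String, Integer> call(String w) { return new Tuple2<String, Integer>(w, 1); }
  })
  .reduceByKey(new Function2<Integer, Integer, Integer>() {
    public Integer call(Integer a, Integer b) { return a + b; }
  });
counts.print();          // print a few counts from each batch
jssc.start();            // start receiving and processing
jssc.awaitTermination(); // run until stopped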
16. MLlib
MLlib is a machine learning library that provides various algorithms designed to scale out on a cluster, for classification, regression, clustering, collaborative filtering, and so on.
Some of these algorithms also work with streaming data, such as linear regression using ordinary least squares or k-means clustering (with more on the way). Apache Mahout (a machine learning library for Hadoop) has already turned away from MapReduce and joined forces with Spark MLlib.
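As a small illustration (not from the original deck), here is a hedged sketch of training k-means with the RDD-based MLlib API. It assumes an existing JavaSparkContext sc and a hypothetical input file kmeans_data.txt holding whitespace-separated numeric vectors, one per line.
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.function.Function;
import org.apache.spark.mllib.clustering.KMeans;
import org.apache.spark.mllib.clustering.KMeansModel;
import org.apache.spark.mllib.linalg.Vector;
import org.apache.spark.mllib.linalg.Vectors;

// Parse each line, e.g. "0.0 0.1 0.2", into a dense vector
JavaRDD<Vector> points = sc.textFile("kmeans_data.txt").map(
  new Function<String, Vector>() {
    public Vector call(String line) {
      String[] parts = line.split(" ");
      double[] values = new double[parts.length];
      for (int i = 0; i < parts.length; i++) {
        values[i] = Double.parseDouble(parts[i]);
      }
      return Vectors.dense(values);
    }
  });
points.cache(); // k-means is iterative, so keep the points in memory
// Cluster the data into 2 classes, with at most 20 iterations
KMeansModel model = KMeans.train(points.rdd(), 2, 20);
for (Vector center : model.clusterCenters()) {
  System.out.println("Cluster center: " + center);
}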
17. Resilient Distributed Dataset (RDD)
Spark introduces the concept of an RDD: an immutable, fault-tolerant, distributed collection of objects that can be operated on in parallel.
An RDD can contain any type of object and is created by loading an external dataset or by distributing a collection from the driver program.
RDDs support two types of operations:
Transformations (such as map, filter, join, union, and so on) transform one data collection into another; they are performed on an RDD and yield a new RDD containing the result. In other words, they create a new dataset from an existing one.
Actions (such as reduce, count, first, collect, save, and so on) require that the computation actually be performed; they return a value to the driver program (or write to a file) after running the computation on the dataset.
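A minimal sketch of the distinction, assuming an existing JavaSparkContext sc and an illustrative file data.txt: the map below is a transformation and stays lazy; nothing actually runs until the reduce action asks for a result.
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.function.Function;
import org.apache.spark.api.java.function.Function2;

// Transformation: lazily recorded, not yet executed
JavaRDD<String> lines = sc.textFile("data.txt");
JavaRDD<Integer> lineLengths = lines.map(new Function<String, Integer>() {
  public Integer call(String s) { return s.length(); }
});
// Action: triggers the computation and returns a value to the driver
int totalLength = lineLengths.reduce(new Function2<Integer, Integer, Integer>() {
  public Integer call(Integer a, Integer b) { return a + b; }
});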
18. Resilient Distributed Dataset (continued)
An RDD is a fault-tolerant collection of elements/partitions that can be operated on in parallel across the nodes of a cluster.
Properties of an RDD:
Immutability
Cacheable (lineage, persist)
Lazy evaluation (definition is different from execution)
Type inferred
There are two ways to create RDDs: parallelizing an existing collection in your driver program, or referencing a dataset in an external storage system, such as a shared filesystem, HDFS, HBase, S3, Cassandra, or any data source offering a Hadoop InputFormat.
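Both creation paths in one short sketch (the file name data.txt is illustrative; sc is an existing JavaSparkContext):
import java.util.Arrays;
import java.util.List;
import org.apache.spark.api.java.JavaRDD;

// 1) Parallelize an existing collection in the driver program
List<Integer> data = Arrays.asList(1, 2, 3, 4, 5);
JavaRDD<Integer> distData = sc.parallelize(data);

// 2) Reference a dataset in external storage (local file, HDFS, S3, ...)
JavaRDD<String> distFile = sc.textFile("data.txt");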
20. Input for Spark (continued)
Spark file-based input
Spark's file-based input methods, including textFile, support running on directories, compressed files, and wildcards as well.
E.g. you can use textFile("/my/directory"), textFile("/my/directory/*.txt"), and textFile("/my/directory/*.gz").
The textFile method also takes an optional second argument for controlling the number of partitions of the file.
By default, Spark creates one partition for each HDFS block of the file, but you can also ask for a higher number of partitions by passing a larger value.
JavaRDD<String> distFile = sc.textFile("data.txt");
A related method, SparkContext.wholeTextFiles, reads a directory containing multiple small text files and returns each of them as a (filename, content) pair. This is in contrast with textFile, which would return one record per line in each file.
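A short sketch of these variants (the paths are illustrative; sc is an existing JavaSparkContext):
// Ask for at least 10 partitions instead of the one-per-HDFS-block default
JavaRDD<String> moreParts = sc.textFile("data.txt", 10);
// wholeTextFiles: one (filename, content) record per file in the directory
JavaPairRDD<String, String> files = sc.wholeTextFiles("/my/directory");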
22. Benefits of Spark
Fault recovery
In-memory processing
Scalable
Fast
Rich set of libraries
Optimized
Unified tool set
Easy programming: the Spark and Scala APIs are fairly high level
24. Spark “Word count” (continued)
The first thing a Spark program has to do is create a SparkContext object.
The SparkContext represents a connection to a Spark cluster and can be used to create RDDs, accumulators, and broadcast variables on that cluster.
To create a SparkContext, you first need to create a SparkConf object to configure your application.
// Create a Java Spark context.
SparkConf conf = new SparkConf().setAppName("JavaWordCount");
// Or, to run locally:
// SparkConf conf = new SparkConf().setAppName("org.sparkexample.WordCount").setMaster("local");
JavaSparkContext sc = new JavaSparkContext(conf);
25. Spark “Word count” (continued)
Create an RDD from a file.
RDDs can be created from Hadoop InputFormats (such as HDFS files) or by transforming other RDDs. The following code uses the SparkContext to define a base RDD from the file inputFile.
(Parallelized collections, by contrast, are created by calling JavaSparkContext's parallelize method on an existing Collection in your driver program.)
// Load the input data into an RDD.
String inputFile = args[0];
JavaRDD<String> input = sc.textFile(inputFile);
26. Spark “Word count” (continued)
Transform the input RDD with flatMap.
To split the input text into separate words, we use the
flatMap(func) RDD transformation, which returns a new
RDD formed by passing each element of the source through
a function. The String split function is applied to each line of
text, returning an RDD of the words in the input RDD:
// map/split each line to multiple words
JavaRDD<String> words = input.flatMap(
  new FlatMapFunction<String, String>() {
    public Iterable<String> call(String x) {
      return Arrays.asList(x.split(" "));
    }
  });
27. Spark “Word count” (continued)
Transform the words RDD with map.
We use mapToPair, the pair-producing form of the map(func) transformation, to turn the words RDD into an RDD of (word, 1) key-value pairs:
JavaPairRDD<String, Integer> wordOnePairs = words.mapToPair(
  new PairFunction<String, String, Integer>() {
    public Tuple2<String, Integer> call(String x) {
      return new Tuple2<String, Integer>(x, 1);
    }
  });
28. Transform the wordOnePairs RDD with reduceByKey
To count the number of times each word occurs, we combine the values (1) in wordOnePairs that share the same key (word) using reduceByKey(func).
This transformation returns an RDD of (word, count) pairs where the values for each word are aggregated using the given reduce function func, here x + y:
// reduce the pairs by key, adding the values, to produce counts
JavaPairRDD<String, Integer> counts = wordOnePairs.reduceByKey(
  new Function2<Integer, Integer, Integer>() {
    public Integer call(Integer x, Integer y) {
      return x + y;
    }
  });
29. Spark “Word count” (continued)
Output with the RDD action saveAsTextFile.
Finally, the RDD action saveAsTextFile(path) writes the elements of the dataset as a text file (or set of text files) in the outputFile directory.
String outputFile = args[1];
// Save the word count back out to a text file, causing evaluation.
counts.saveAsTextFile(outputFile);
30. Spark “Word count” (continued)
Running your application
You use the bin/spark-submit script to launch your application. This script takes care of setting up the classpath with Spark and its dependencies. Here is the spark-submit format:
$ ./bin/spark-submit --class <main-class> --master <master-url> <application-jar> [application-arguments]
// Here is the spark-submit command to run the Java JavaWordCount:
$ bin/spark-submit --class example.wordcount.JavaWordCount --master yarn sparkwordcount-1.0.jar /user/user01/input/alice.txt /user/user01/output
// Here is the spark-submit command to run the Scala SparkWordCount:
$ bin/spark-submit --class SparkWordCount --master yarn sparkwordcount-1.0.jar /user/user01/input/alice.txt /user/user01/output
31. Spark vs Hadoop
Hello... Spark or Hadoop, which is the best big data framework?
Hey... Spark has overtaken Hadoop as the most active open source big data project!
The fact is that they are not directly comparable products. Why?
They do not perform exactly the same tasks, and they are not mutually exclusive: they are able to work together.
Both provide some of the most popular tools used to carry out common big-data-related tasks.
33. Spark vs Hadoop (continued)
Spark's edge over Hadoop is speed.
Spark handles most of its operations and data “in memory”, copying them from distributed physical storage into far faster logical RAM.
This cuts out the time-consuming writing and reading to and from hard disk at each level/phase that has to be done under Hadoop's MapReduce system: MapReduce writes all of the data back to the physical storage medium after each operation.
Spark supports iterative, interactive, and batch data processing; Hadoop is limited to batch processing!
34. Spark vs Hadoop (continued)
Although Spark is reported to work up to 100 times faster than Hadoop in certain circumstances, it does not provide its own distributed storage system: Spark does not include its own storage layer for organizing files. Hadoop has one!
Spark's advanced analytics applications can make use of data stored in HDFS in the data processing layer.
Spark includes its own machine learning library, called MLlib, whereas Hadoop systems must be interfaced with an external machine learning library, for example Apache Mahout.
35. Spark vs Hadoop (continued)
Apache Spark may only be the processing step in your ETL (Extract, Transform, Load) chain. It doesn't provide the stable, rich tool set that the Hadoop ecosystem contains. You may still need HBase/Nutch/Solr for data acquisition.
Hadoop has a wide range of tools: Sqoop and Flume for moving data; Oozie for scheduling; and HBase or Hive for storage.
The point is that although Apache Spark is a very powerful processing system, it should be considered a part of the wider Hadoop ecosystem.
To summarize: Hadoop and Spark are perfect together, and Spark fits into Hadoop's data processing layer. Together, both can do better!
36. Spark is heir to MapReduce
MapReduce is not the best framework for all computations!
To perform complex operations, many Map and Reduce phases must be strung together; MapReduce is limited with respect to complex and iterative operations.
Spark supports a variety of data sources. It is robust!
Spark supports iterative, interactive, and batch data processing. It is fast!
It's entirely possible to re-implement MapReduce-like computations in Spark. It is easy!
37. When Spark is not needed!
Your big data may simply consist of a huge amount of very structured data (e.g. customer names and addresses), or you may have no need for the advanced streaming analytics and machine learning functionality that Spark provides.
Spark, although developing very quickly, is still in its infancy, and its security and support infrastructure is not as advanced.
38. Who uses Spark?
Spark is being adopted by major players like Amazon, eBay, and Yahoo! According to the Spark FAQ, many organizations run Spark on clusters with thousands of nodes.
39. Conclusion
Apache Spark is a cluster computing platform designed to be fast: on the speed side, it extends the popular MapReduce model to efficiently support more types of computations, including interactive queries and stream processing. Spark integrates closely with other big data tools, and this tight integration makes it possible to build applications that seamlessly combine different processing models.
Spark fits a wide range of (almost all) use cases because of its versatility, its integration, and its rich set of different libraries.
People fall in love with Spark:
Enterprises: fits all, open source
Managers: fewer resources, more productivity
Developers: high-level languages
Data scientists: algorithms, simple APIs