Spark
Audience & Intention
Anyone who wants to know about Spark; no specific prerequisites are required.
It is not a tutorial to learn Spark!
The intention of this presentation is to introduce Spark and give an overview from a
general user's perspective. We are not going to cover any concepts specific to
developer/programming or administrative aspects.
Sudhakara.st
Mail: sudhakara.st@gmail.com
https://in.linkedin.com/in/sudhakara-st-82820539
Agenda
Introduction to Spark
Spark
What leads to Spark trending
Spark components
Resilient Distributed Dataset (RDD)
Input to Spark
Benefits of Spark
Spark “Word count” example
Spark vs Hadoop
Conclusion
Credits
Content and image sources
 http://spark.apache.org/
 https://databricks.com/
 Learning Spark - O'Reilly Media
By Holden Karau, Andy Konwinski,
Patrick Wendell, Matei Zaharia
Apache:
Spark™ is a fast and general engine for large-scale data
processing.
Databricks:
Spark™ is a powerful open source processing engine built
around speed, ease of use, and sophisticated analytics.
Spark is an open-source, distributed computing engine for data
processing and data analytics.
It was originally developed at UC Berkeley in 2009.
What leads to Spark trending?
Just-in-time data warehouse
Today's enterprises have a variety of data: real-time, streaming,
batch and analytical. Spark is designed for that.
Big data is versatile. Spark's execution engine handles that
versatility, and its ever-growing set of libraries helps as well.
Spark brings data processing, analysis and analytics together on
one platform.
 Spark significantly simplifies big data processing, hosting an
end-to-end platform from ingest to product.
What leads to Spark trending? Continue..
 Spark supports a wide range of ecosystems & apps
Spark friendly !
Apache Spark is a general-purpose, distributed cluster-computing,
data-processing framework that, like
MapReduce in Apache Hadoop, offers powerful
abstractions for processing large datasets.
Apache Spark is designed to work seamlessly with Hadoop*,
Amazon S3, Cassandra or as a standalone application.
Supported languages:
A rich set of high-level APIs increases user productivity.
Integration with new & existing systems.
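Because the same application code can target different cluster managers, moving from a
local development run to a standalone or YARN cluster is largely a configuration change.
Below is a minimal sketch (with a hypothetical master host name) of how the master URL is
set in Java; YARN deployments usually pass --master yarn to spark-submit instead of
hard-coding it.

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

public class MasterUrlSketch {
  public static void main(String[] args) {
    // Local mode with 2 worker threads -- convenient for development and testing.
    SparkConf local = new SparkConf().setAppName("demo").setMaster("local[2]");

    // Standalone cluster mode (hypothetical master host, default port 7077).
    SparkConf standalone = new SparkConf().setAppName("demo").setMaster("spark://master-host:7077");

    JavaSparkContext sc = new JavaSparkContext(local);
    sc.stop();
  }
}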
Spark friendly ! continue…
Spark Components
Spark Components continue…
The Spark core is complemented by a set of powerful,
higher-level libraries:
 SparkSQL
 Spark Streaming
 MLlib (for machine learning)
 GraphX
Spark itself is written in Scala; applications can be written in Scala, Java and Python.
Spark Core
Spark Core is the base engine for large-scale parallel and
distributed data processing. It is responsible for:
 memory management and fault recovery
 scheduling, distributing and monitoring jobs and tasks on a
cluster
 interacting with storage systems
SparkSQL
SparkSQL is a Spark component that supports querying data
either via SQL or via the Hive Query Language. It originated
as the Apache Hive port to run on top of Spark (in place of
MapReduce) and is now integrated with the Spark stack. In
addition to providing support for various data sources, it
makes it possible to weave SQL queries with code
transformations which results in a very powerful tool.
Below is an example of a Hive compatible query:
// sc is an existing SparkContext.
val sqlContext = new org.apache.spark.sql.hive.HiveContext(sc)
sqlContext.sql("CREATE TABLE IF NOT EXISTS src (key INT, value STRING)")
sqlContext.sql("LOAD DATA LOCAL INPATH 'examples/src/main/resources/kv1.txt' INTO TABLE src")
// Queries are expressed in HiveQL
sqlContext.sql("FROM src SELECT key, value").collect().foreach(println)
Spark Streaming
Spark Streaming supports real-time processing of streaming
data. It is an extension of the core Spark API
that enables scalable, high-throughput, fault-tolerant stream
processing of live data streams.
Data can be ingested from Kafka, Flume, Twitter, ZeroMQ, Kinesis
or TCP sockets.
Processed data can be pushed out to filesystems, databases,
and live dashboards. In fact, you can apply Spark’s machine
learning and graph processing algorithms on data streams.
Spark Streaming continue..
Spark Streaming receives live input data streams and divides
the data into batches, which are then processed by the
Spark engine to generate the final stream of results in
batches
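As a rough illustration of this micro-batch model, here is a minimal sketch of a streaming
word count in Java, written in the same Spark 1.x API style as the word-count example later
in this deck (newer Spark releases expect flatMap to return an Iterator). The text source on
localhost port 9999 is hypothetical; each 1-second batch of lines becomes an RDD inside the
DStream and is counted independently.

import java.util.Arrays;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.function.FlatMapFunction;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaDStream;
import org.apache.spark.streaming.api.java.JavaReceiverInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;

public class StreamingWordCountSketch {
  public static void main(String[] args) throws Exception {
    SparkConf conf = new SparkConf().setAppName("StreamingWordCountSketch").setMaster("local[2]");
    // Batch interval of 1 second: incoming data is grouped into 1-second batches.
    JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(1));

    // Hypothetical live source: lines of text arriving on a TCP socket.
    JavaReceiverInputDStream<String> lines = jssc.socketTextStream("localhost", 9999);
    JavaDStream<String> words = lines.flatMap(
        new FlatMapFunction<String, String>() {
          public Iterable<String> call(String line) {
            return Arrays.asList(line.split(" "));
          }
        });

    // Print a (word, count) summary for every batch to the driver's console.
    words.countByValue().print();

    jssc.start();
    jssc.awaitTermination();
  }
}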
MLlib
MLlib is a machine learning library that provides various
algorithms designed to scale out on a cluster for
classification, regression, clustering, collaborative filtering,
and so on (check out Toptal’s article on machine learning for
more information on that topic).
These algorithms also work with streaming data, such as
linear regression using ordinary least squares or k-means
clustering (and more on the way). Apache Mahout (a
machine learning library for Hadoop) has already moved
away from MapReduce and joined forces with Spark MLlib.
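To give a flavour of the library, here is a minimal sketch of MLlib's RDD-based k-means API
in Java. The input file "points.txt" is hypothetical and assumed to contain one
space-separated numeric vector per line; the values of k and the iteration count are arbitrary.

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.Function;
import org.apache.spark.mllib.clustering.KMeans;
import org.apache.spark.mllib.clustering.KMeansModel;
import org.apache.spark.mllib.linalg.Vector;
import org.apache.spark.mllib.linalg.Vectors;

public class KMeansSketch {
  public static void main(String[] args) {
    JavaSparkContext sc = new JavaSparkContext(
        new SparkConf().setAppName("KMeansSketch").setMaster("local[2]"));

    // Parse each line of the (hypothetical) input file into a dense vector.
    JavaRDD<Vector> points = sc.textFile("points.txt").map(
        new Function<String, Vector>() {
          public Vector call(String line) {
            String[] parts = line.split(" ");
            double[] values = new double[parts.length];
            for (int i = 0; i < parts.length; i++) {
              values[i] = Double.parseDouble(parts[i]);
            }
            return Vectors.dense(values);
          }
        }).cache();

    // Cluster the points into k = 2 groups with at most 20 iterations.
    KMeansModel model = KMeans.train(points.rdd(), 2, 20);
    for (Vector center : model.clusterCenters()) {
      System.out.println("cluster center: " + center);
    }
    sc.stop();
  }
}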
Resilient Distributed Dataset(RDD)
Spark introduces the concept of an RDD, an immutable, fault-tolerant,
distributed collection of objects that can be operated on in parallel.
An RDD can contain any type of object and is created by loading an
external dataset or distributing a collection from the driver program.
RDDs support two types of operations:
 Transformations: transform one data collection into another (such as
map, filter, join, union, and so on). They are performed on an RDD and
yield a new RDD containing the result, i.e. they create a new
dataset from an existing one.
 Actions: require that the computation actually be performed (such as reduce,
count, first, collect, save and so on) and return a value to the driver program,
or write to a file, after running the computation on the dataset.
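The practical consequence of this split is that transformations are lazy: they only describe a
new RDD, and nothing is computed until an action is called. Below is a small sketch of this in
Java, with a filter transformation followed by two actions; the input file "app.log" is
hypothetical.

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.Function;

public class TransformationActionSketch {
  public static void main(String[] args) {
    JavaSparkContext sc = new JavaSparkContext(
        new SparkConf().setAppName("TransformationActionSketch").setMaster("local[2]"));

    JavaRDD<String> lines = sc.textFile("app.log");   // hypothetical input file

    // Transformation: lazily defines a new RDD of the lines containing "ERROR";
    // no work is done on the cluster yet.
    JavaRDD<String> errors = lines.filter(
        new Function<String, Boolean>() {
          public Boolean call(String line) {
            return line.contains("ERROR");
          }
        });

    // Actions: these force the computation and return values to the driver.
    long errorCount = errors.count();
    System.out.println("error lines: " + errorCount);
    if (errorCount > 0) {
      System.out.println("first error: " + errors.first());
    }
    sc.stop();
  }
}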
Resilient Distributed Dataset continue..
An RDD is a fault-tolerant collection of
elements, split into partitions, that can be operated on in parallel
across the nodes.
Properties of an RDD:
 Immutability
 Cacheable – lineage – persist
 Lazy evaluation (definition is separate from execution)
 Type inferred
There are two ways to create RDDs: parallelizing an existing collection
in your driver program, or referencing a dataset in an
external storage system, such as a shared filesystem, HDFS,
HBase, S3, Cassandra or any data source offering a Hadoop
InputFormat.
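Both creation paths look roughly like this in Java; this is only a small sketch, and the HDFS
path is hypothetical.

import java.util.Arrays;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class RddCreationSketch {
  public static void main(String[] args) {
    JavaSparkContext sc = new JavaSparkContext(
        new SparkConf().setAppName("RddCreationSketch").setMaster("local[2]"));

    // 1. Parallelize an existing collection in the driver program.
    JavaRDD<Integer> numbers = sc.parallelize(Arrays.asList(1, 2, 3, 4, 5));
    System.out.println("elements: " + numbers.count());

    // 2. Reference a dataset in an external storage system (hypothetical HDFS path).
    JavaRDD<String> lines = sc.textFile("hdfs:///data/input.txt");
    // Nothing is read yet; the file is only scanned when an action such as count() runs.

    sc.stop();
  }
}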
Input for Spark
 External store
HDFS, Hbase, Hive, S3 Cassandra, Ext3/Ext4, NTFS ..
Data formats
 CSV, Tablimited, TXT, MD
 Json
 SquenceFile
Input for Spark continue…
Spark File Based input
Spark’s file-based input methods, including textFile, support running
on directories, compressed files, and wildcards as well.
 E.g. you can use textFile("/my/directory"),
textFile("/my/directory/*.txt"), and textFile("/my/directory/*.gz").
The textFile method also takes an optional second argument for
controlling the number of partitions of the file.
 By default, Spark creates one partition for each HDFS block of the file,
but you can also ask for a higher number of partitions by passing a
larger value.
JavaRDD<String> distFile = sc.textFile("data.txt");
A related method, SparkContext.wholeTextFiles, lets you read a directory of many small
text files and returns each file as a (filename, content) pair. This is in contrast with
textFile, which would return one record per line in each file.
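As a brief sketch of these variants (assuming sc is the JavaSparkContext from the earlier
snippets, and with hypothetical paths):

// Ask for at least 100 partitions instead of the default of one per HDFS block.
JavaRDD<String> bigFile = sc.textFile("hdfs:///data/big.txt", 100);
// One record per file: (filename, content) pairs.
JavaPairRDD<String, String> smallFiles = sc.wholeTextFiles("/my/directory");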
Spark File Based input continue…
Benefits of Spark
Fault recovery
In-memory processing
Scalable
Fast
Rich set of libraries
Optimized
Unified tool set
Easy programming – the Spark and Scala APIs are fairly high level
Spark “Word count”
Spark “Word count” continue…
The first thing a Spark program has to do is create a
SparkContext object.
 SparkContext represents a connection to a Spark cluster, and
can be used to create RDDs, accumulators and broadcast
variables on that cluster.
 To create a SparkContext, you first need to create a
SparkConf object to configure your application.
// Create a Java Spark Context.
SparkConf conf = new SparkConf().setAppName("JavaWordCount");
// SparkConf conf = new SparkConf().setAppName("org.sparkexample.WordCount").setMaster("local");
JavaSparkContext sc = new JavaSparkContext(conf);
Spark “Word count” continue…
Create an RDD from a file
RDDs can be created from Hadoop InputFormats (such as
HDFS files) or by transforming other RDDs. The following
code uses the SparkContext to define a base RDD from the
file inputFile.
Parallelized collections are created by calling
JavaSparkContext’s parallelize method on an existing
collection in your driver program.
// Load the input file into an RDD of lines.
String inputFile = args[0];
JavaRDD<String> input = sc.textFile(inputFile);
Spark “Word count” continue…
Transform input RDD with flatMap
To split the input text into separate words, we use the
flatMap(func) RDD transformation, which returns a new
RDD formed by passing each element of the source through
a function. The String split function is applied to each line of
text, returning an RDD of the words in the input RDD:
// map/split each line to multiple words
JavaRDD<String> words = input.flatMap(
    new FlatMapFunction<String, String>() {
      public Iterable<String> call(String x) {
        return Arrays.asList(x.split(" "));
      }
    });
Spark “Word count” continue…
Transform words RDD with map
We use the map(func) transformation (mapToPair in the Java API) to transform the
words RDD into an RDD of (word, 1) key-value pairs:
// map each word to a (word, 1) pair
JavaPairRDD<String, Integer> wordOnePairs = words.mapToPair(
    new PairFunction<String, String, Integer>() {
      public Tuple2<String, Integer> call(String x) {
        return new Tuple2<String, Integer>(x, 1);
      }
    });
Transform wordOnePairs RDD with reduceByKey
To count the number of times each word occurs, we combine
the values (1) in the wordOnePairs with the same key (word)
using reduceByKey(func).
 This transformation will return an RDD of (word, count) pairs
where the values for each word are aggregated using the given
reduce function func, here x + y:
// reduce/add the pairs by key to produce counts
JavaPairRDD<String, Integer> counts = wordOnePairs.reduceByKey(
    new Function2<Integer, Integer, Integer>() {
      public Integer call(Integer x, Integer y) {
        return x + y;
      }
    });
Spark “Word count” continue…
Output with RDD action saveAsTextFile
Finally, the RDD action saveAsTextFile(path) writes the
elements of the dataset as a text file (or set of text files) in
the outputFile directory
String outputFile = args[1];
// Save the word count back out to a text file, causing evaluation.
counts.saveAsTextFile(outputFile);
Spark “Word count” continue…
Running Your Application
You use the bin/spark-submit script to launch your application.
This script takes care of setting up the classpath with Spark
and its dependencies. Here is the spark-submit format:
$ ./bin/spark-submit --class <main-class> --master <master-url> <application-jar> [application-arguments]

$ bin/spark-submit --class example.wordcount.JavaWordCount --master yarn sparkwordcount-1.0.jar /user/user01/input/alice.txt /user/user01/output

// Here is the spark-submit command to run the Scala SparkWordCount:
$ bin/spark-submit --class SparkWordCount --master yarn sparkwordcount-1.0.jar /user/user01/input/alice.txt /user/user01/output
Spark vs Hadoop
Hello… Spark or Hadoop: which is the best Big Data
framework?
Hey… Spark has overtaken Hadoop as the most active open
source Big Data project!
The fact is they are not directly comparable products. Why ?
They do not perform exactly the same tasks, and they are
not mutually exclusive, as they are able to work together.
They provide some of the most popular tools used to carry
out common Big Data-related tasks.
Spark vs Hadoop continue…
Spark vs Hadoop continue…
Spark's edge over Hadoop is speed.
 Spark handles most of its operations and data “in memory” –
copying them from distributed physical storage into far faster
logical RAM memory.
 This reduces the time-consuming writing/reading to
hard disk at each level/phase that otherwise needs to be done under
Hadoop’s MapReduce system.
 MapReduce writes all of the data back to the physical storage
medium after each operation.
Spark supports iterative, interactive and batch data
processing; Hadoop is limited to batch processing!
Spark vs Hadoop continue…
Although Spark is reported to work up to 100 times faster
than Hadoop in certain circumstances, it does not
provide its own distributed storage system. Spark does not
include its own storage system for organizing files. Hadoop
has it!
Spark’s advanced analytics applications can make use of
data stored in HDFS in the data processing layer.
Spark includes its own machine learning library, called
MLlib, whereas Hadoop systems must be interfaced with
another machine learning library, for example Apache
Mahout.
Spark vs Hadoop continue…
Apache Spark may only be the processing step in your ETL
(Extract, Transform, Load) chain. It doesn't provide the
stable, rich tool set that the Hadoop ecosystem contains.
You may still need HBase/Nutch/Solr for data acquisition.
Hadoop has a wide range of tools:
 Sqoop and Flume for moving data; Oozie for scheduling; and
HBase or Hive for storage.
 The point that I’m making is that although Apache Spark is a
very powerful processing system, it should be considered a
part of the wider Hadoop ecosystem.
To summarize: Hadoop and Spark are perfect together, and
Spark fits into Hadoop's data processing layer.
Together they do better!
Spark is Heir to MapReduce
MapReduce is not the best framework for all computations!
To perform complex operations, many Map and Reduce
phases must be strung together. It is limited with respect to
complex and iterative operations.
Spark supports a variety of data sources. It is robust!
Spark supports iterative, interactive and batch data
processing. It is fast!
It’s entirely possible to re-implement MapReduce-like
computations in Spark. It is easy!
When Spark is not needed!
Your Big Data may simply consist of a huge amount of very
structured data (e.g. customer names and addresses), or you may
have no need for the advanced streaming analytics and
machine learning functionality provided by Spark.
Spark, although developing very quickly, is still in its infancy,
and the security and support infrastructure is not as
advanced.
Who uses Spark?
According to the Spark FAQ, Spark is being adopted by major
players like Amazon, eBay, and Yahoo!, and many organizations
run Spark on clusters with thousands of nodes.
Conclusion
Apache Spark is a cluster computing platform designed to be
fast, and it extends the popular MapReduce model to
efficiently support more types of computations, including
interactive queries and stream processing. Spark integrates
closely with other Big Data tools; a benefit of this tight integration is
the ability to build applications that seamlessly combine different
processing models.
Spark fits a wide range of (almost all) use cases because of its
versatility, integration and rich set of different libraries.
People fall in love with Spark:
 Enterprises – fit for all, open source
 Managers – fewer resources, more productivity
 Developers – high-level languages
 Data scientists – algorithms, simple APIs
References
 http://spark.apache.org/
 https://databricks.com/
 Learning Spark - O'Reilly Media
By Holden Karau, Andy Konwinski,
Patrick Wendell, Matei Zaharia
Thank You