Intro to Akka Streams
Agenda
• Reactive Streams
• Why Akka Streams?
• API Overview
Reactive Streams
public interface Publisher<T> {
public void subscribe(Subscriber<? super T> s);
}
public interface Subscriber<T> {
public void onSubscribe(Subscription s);
public void onNext(T t);
public void onError(Throwable t);
public void onComplete();
}
public interface Processor<T, R> extends Subscriber<T>, Publisher<R> {
}
public interface Subscription {
public void request(long n);
public void cancel();
}
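The four interfaces above can be exercised end-to-end in a few lines of plain Scala. The traits below simply mirror the Java interfaces; `RangePublisher` and the one-at-a-time subscriber are illustrative names, and this single-threaded sketch ignores the async-boundary and rule-violation handling a spec-compliant implementation needs:

```scala
// Minimal Scala mirror of the Reactive Streams interfaces (illustrative only).
trait Subscription { def request(n: Long): Unit; def cancel(): Unit }
trait Subscriber[T] {
  def onSubscribe(s: Subscription): Unit
  def onNext(t: T): Unit
  def onError(t: Throwable): Unit
  def onComplete(): Unit
}
trait Publisher[T] { def subscribe(s: Subscriber[T]): Unit }

// A publisher that emits elements only when the subscriber requests them.
class RangePublisher(data: Vector[Int]) extends Publisher[Int] {
  def subscribe(sub: Subscriber[Int]): Unit = {
    var remaining = data
    var done = false
    sub.onSubscribe(new Subscription {
      def request(n: Long): Unit = {
        var left = n
        while (left > 0 && remaining.nonEmpty && !done) {
          val head = remaining.head
          remaining = remaining.tail // update state before onNext: it may re-request
          sub.onNext(head)
          left -= 1
        }
        if (remaining.isEmpty && !done) { done = true; sub.onComplete() }
      }
      def cancel(): Unit = done = true
    })
  }
}

// A subscriber that pulls exactly one element at a time.
val received = scala.collection.mutable.Buffer[Int]()
new RangePublisher(Vector(1, 2, 3)).subscribe(new Subscriber[Int] {
  private var s: Subscription = _
  def onSubscribe(sub: Subscription): Unit = { s = sub; s.request(1) }
  def onNext(t: Int): Unit = { received += t; s.request(1) }
  def onError(t: Throwable): Unit = ()
  def onComplete(): Unit = ()
})
// received now holds 1, 2, 3 -- delivered strictly on demand
```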
Reactive Streams
A standardised spec/contract to achieve
asynchronous
back-pressured stream processing.
Standardised?
Gives us consistent interop between libraries and
platforms that implement this spec.
everything is async & back-pressured
Reactive Streams
Stream API Stream API Stream API
Users use this API
Library authors use this API
Async?
• We know async IO from last week
• But there are other kinds of async operations that cross
different async boundaries
• between applications
• between threads
• and over the network as we saw
Back-Pressured?
Publisher[T] Subscriber[T]
Think abstractly about these lines.
“async boundary”
This can be the network, or threads on the same CPU.
Publisher[T] Subscriber[T]
What problem are we trying
to solve?
Discrepancy in the rate of processing
• Fast Publisher / Slow Subscriber
• Slow Publisher / Fast Subscriber
Push Model
Publisher[T] Subscriber[T]
100 messages /
1 second
1 message /
1 second
Fast Slow
Publisher[T] Subscriber[T]
Publisher[T] Subscriber[T]
drop overflowed
require resending
Publisher[T] Subscriber[T]
has to keep track
of messages to resend
not safe & complicated
NACK?
Publisher[T] Subscriber[T]
Publisher[T] Subscriber[T]
stop!
sh#t!
Publisher[T] Subscriber[T]
publisher didn’t receive NACK in time
so we lost that last message
not safe
Pull?
Publisher[T] Subscriber[T]
100 messages /
1 second
1 message /
1 second
Slow Fast
Publisher[T] Subscriber[T]
gimme!
Publisher[T] Subscriber[T]
gimme!
• Spam!
• Redundant messaging -> flooding the connection
• No buffer/batch support
A different approach
We have to take into account the following scenarios:
• Fast Pub / Slow Sub
• Slow Pub / Fast Sub
Which can happen dynamically
Publisher[T] Subscriber[T]
Data
Demand(n)
Dynamic Push/Pull
• Bounded buffers with no overflow
• Demand can be accumulated
• Batch processing -> performance
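These properties can be modeled in a few lines of plain Scala (`DemandGate` is a hypothetical name for illustration, not an Akka class): demand accumulates across `Demand(n)` signals, and the publisher may push eagerly but never beyond outstanding demand, so a bounded buffer can never overflow.

```scala
// Hypothetical single-threaded model of dynamic push/pull.
final class DemandGate {
  private var demand = 0L
  def grant(n: Long): Unit = demand += n // Demand(n) signals accumulate
  def outstanding: Long = demand
  def tryPush[A](a: A)(deliver: A => Unit): Boolean =
    if (demand > 0) { demand -= 1; deliver(a); true }
    else false                           // demand exhausted: back-pressure
}

val out  = scala.collection.mutable.Buffer[Int]()
val gate = new DemandGate
gate.grant(2)
gate.grant(3)                            // accumulated demand is now 5
val results = (1 to 10).map(i => gate.tryPush(i)(out += _))
// the publisher delivered 1..5 and was back-pressured on the remaining 5
```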
• Cool, let’s implement this using Actors!
• We can, it’s possible… but should it be done?
The problem(s) with Akka Actors
Type Safety
Any => Unit
Composition
In FP this makes us warm and fuzzy
val f: A => B
val g: B => C
val h: A => C = f andThen g
• Using Actors?
• An Actor has to know who sent it messages and where it
must forward or reply to them.
• No compositionality without handling it explicitly.
Data Flow
• What are streams? Flows of data.
• Imagine a 10 stage data pipeline you want to model
• Now imagine writing that in Actors.
• Following the flow of data in Actors requires
jumping around all over the code base
• Low level, error prone and hard to reason about
Akka Streams API
building blocks
Design Philosophy
• Everything we cover now is a blueprint that describes
the actions/effects it performs.
• Reusability
• Compositionality
• “Design your program with a pure functional core,
push side-effects to the end of the world and
detonate to execute.”
- some guy on stackoverflow
Source
• Publisher of data
• Exactly one output
Image from boldradius.com
val singleSrc = Source.single(1)
val iteratorSrc = Source.fromIterator(() => Iterator from 0)
val futureSrc = Source.fromFuture(Future("abc"))
val collectionSrc = Source(List(1,2,3))
val tickSrc = Source.tick(
initialDelay = 1 second,
interval = 1 second,
tick = "tick-tock")
val requestSource = req.entity.dataBytes
Sink
• Subscriber (consumer) of data
• Describes where the data in our stream will go.
• Exactly one input
Image from boldradius.com
Sink.head
Sink.reduce[Int]((a, b) => a + b)
Sink.fold[Int, Int](0)(_ + _)
Sink.foreach[String](println)
FileIO.toPath(Paths.get("file.txt"))
val fold: Sink[Int, Future[Int]] = Sink.fold[Int, Int](0)(_ + _)
val fold: Sink[Int, Future[Int]] = Sink.fold[Int, Int](0)(_ + _)
Input type
val fold: Sink[Int, Future[Int]] = Sink.fold[Int, Int](0)(_ + _)
Materialized type
val fold: Sink[Int, Future[Int]] = Sink.fold[Int, Int](0)(_ + _)
Materialized type
Available when the stream ‘completes’
val fold: Sink[Int, Future[Int]] = Sink.fold[Int, Int](0)(_ + _)
val futureRes: Future[Int] = Source(1 to 10).runWith(fold)
futureRes.foreach(println)
// 55
So I can get data from somewhere
and I can put data somewhere else.
But I want to do something with it.
Flow
• A processor of data
• Has one input and one output
Image from boldradius.com
val double: Flow[Int, Int, NotUsed] = Flow[Int].map(_ * 2)
val src = Source(1 to 10)
val double = Flow[Int].map(_ * 2)
val negate = Flow[Int].map(_ * -1)
val print = Sink.foreach[Int](println)
val graph = src via double via negate to print
graph.run()
-2
-4
-6
-8
-10
-12
-14
-16
-18
-20
• Flow is immutable, thread-safe, and thus
freely shareable
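Because a Flow is just an immutable blueprint, the very same instance can be materialized in several streams at once. A minimal sketch, assuming akka-stream (2.6-era scaladsl, where an implicit ActorSystem provides the materializer) is on the classpath; `runA`/`runB` are our names:

```scala
import akka.actor.ActorSystem
import akka.stream.scaladsl.{Flow, Sink, Source}

implicit val system: ActorSystem = ActorSystem("reuse")

val double = Flow[Int].map(_ * 2) // one blueprint...

// ...materialized twice, in two completely independent streams
val runA = Source(1 to 3).via(double).runWith(Sink.seq)   // Future of Vector(2, 4, 6)
val runB = Source(10 to 12).via(double).runWith(Sink.seq) // Future of Vector(20, 22, 24)
```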
• Are linear flows enough?
• No, we want to be able to describe arbitrarily
complex steps in our pipelines
Graphs
Flow
Graph
• We define multiple linear flows and then use the
Graph DSL to connect them.
• We can combine multiple streams - fan in
• Split a stream into substreams - fan out
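A minimal fan-out/fan-in example, sketched against the 2.5/2.6-era GraphDSL (exact API may differ by version): broadcast one stream into two flows, then merge them back.

```scala
import akka.actor.ActorSystem
import akka.stream.ClosedShape
import akka.stream.scaladsl.{Broadcast, Flow, GraphDSL, Merge, RunnableGraph, Sink, Source}

implicit val system: ActorSystem = ActorSystem("graphs")

val graph = RunnableGraph.fromGraph(GraphDSL.create() { implicit b =>
  import GraphDSL.Implicits._
  val bcast = b.add(Broadcast[Int](2)) // fan-out: one input, two outputs
  val merge = b.add(Merge[Int](2))     // fan-in: two inputs, one output

  Source(1 to 3) ~> bcast
  bcast ~> Flow[Int].map(_ * 10)  ~> merge
  bcast ~> Flow[Int].map(_ * 100) ~> merge
  merge ~> Sink.foreach[Int](println)

  ClosedShape
})
graph.run()
```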
Fan-Out
Fan-In
A little example
Some sort of video uploading service
- Stream in video
- Process it
- Store it
Diagram: request ByteString stream ~> convert-to-Array[Byte] flow ~> bcast
~> Process High Res flow ~> sink
~> Process Med Res flow ~> sink
~> Process Low Res flow ~> sink
Sink.fromGraph(GraphDSL.create(highRes, mediumRes, lowRes)((_, _, _)) { implicit b =>
(highSink, mediumSink, lowSink) => {
import GraphDSL.Implicits._
val bcastInput = b.add(Broadcast[ByteString](1))
val bcastRawBytes = b.add(Broadcast[Array[Byte]](3))
val processHigh: Flow[Array[Byte], ByteString, NotUsed]
val processMedium: Flow[Array[Byte], ByteString, NotUsed]
val processLow: Flow[Array[Byte], ByteString, NotUsed]
bcastInput.out(0) ~> byteAcc ~> bcastRawBytes ~> processHigh ~> highSink
bcastRawBytes ~> processMedium ~> mediumSink
bcastRawBytes ~> processLow ~> lowSink
SinkShape(bcastInput.in)
}
})
Our custom Sink
Sink.fromGraph(GraphDSL.create(highRes, mediumRes, lowRes)((_, _, _)) { implicit b =>
(highSink, mediumSink, lowSink) => {
import GraphDSL.Implicits._
val bcastInput = b.add(Broadcast[ByteString](1))
val bcastRawBytes = b.add(Broadcast[Array[Byte]](3))
val processHigh: Flow[Array[Byte], ByteString, NotUsed]
val processMedium: Flow[Array[Byte], ByteString, NotUsed]
val processLow: Flow[Array[Byte], ByteString, NotUsed]
bcastInput.out(0) ~> byteAcc ~> bcastRawBytes ~> processHigh ~> highSink
bcastRawBytes ~> processMedium ~> mediumSink
bcastRawBytes ~> processLow ~> lowSink
SinkShape(bcastInput.in)
}
})
Has one input of type ByteString
Takes 3 Sinks, which can be Files, DBs, etc.
Has one input of type ByteString
Sink.fromGraph(GraphDSL.create(highRes, mediumRes, lowRes)((_, _, _)) { implicit b =>
(highSink, mediumSink, lowSink) => {
import GraphDSL.Implicits._
val bcastInput = b.add(Broadcast[ByteString](1))
val bcastRawBytes = b.add(Broadcast[Array[Byte]](3))
val processHigh: Flow[Array[Byte], ByteString, NotUsed]
val processMedium: Flow[Array[Byte], ByteString, NotUsed]
val processLow: Flow[Array[Byte], ByteString, NotUsed]
bcastInput.out(0) ~> byteAcc ~> bcastRawBytes ~> processHigh ~> highSink
bcastRawBytes ~> processMedium ~> mediumSink
bcastRawBytes ~> processLow ~> lowSink
SinkShape(bcastInput.in)
}
})
Describes 3 processing stages
That are Flows of Array[Byte] => ByteString
Sink.fromGraph(GraphDSL.create(highRes, mediumRes, lowRes)((_, _, _)) { implicit b =>
(highSink, mediumSink, lowSink) => {
import GraphDSL.Implicits._
val bcastInput = b.add(Broadcast[ByteString](1))
val bcastRawBytes = b.add(Broadcast[Array[Byte]](3))
val processHigh: Flow[Array[Byte], ByteString, NotUsed]
val processMedium: Flow[Array[Byte], ByteString, NotUsed]
val processLow: Flow[Array[Byte], ByteString, NotUsed]
bcastInput.out(0) ~> byteAcc ~> bcastRawBytes ~> processHigh ~> highSink
bcastRawBytes ~> processMedium ~> mediumSink
bcastRawBytes ~> processLow ~> lowSink
SinkShape(bcastInput.in)
}
})
Has one input of type ByteString
Takes 3 Sinks, which can be Files, DBs, etc.
Describes 3 processing stages
That are Flows of Array[Byte] => ByteString
Sink.fromGraph(GraphDSL.create(highRes, mediumRes, lowRes)((_, _, _)) { implicit b =>
(highSink, mediumSink, lowSink) => {
import GraphDSL.Implicits._
val bcastInput = b.add(Broadcast[ByteString](1))
val bcastRawBytes = b.add(Broadcast[Array[Byte]](3))
val processHigh: Flow[Array[Byte], ByteString, NotUsed]
val processMedium: Flow[Array[Byte], ByteString, NotUsed]
val processLow: Flow[Array[Byte], ByteString, NotUsed]
bcastInput.out(0) ~> byteAcc ~> bcastRawBytes ~> processHigh ~> highSink
bcastRawBytes ~> processMedium ~> mediumSink
bcastRawBytes ~> processLow ~> lowSink
SinkShape(bcastInput.in)
}
})
Has one input of type ByteString
Emits result to the 3 Sinks
Takes 3 Sinks, which can be Files, DBs, etc.
Has a type of:
Sink[ByteString, (Future[IOResult], Future[IOResult], Future[IOResult])]
Sink.fromGraph(GraphDSL.create(highRes, mediumRes, lowRes)((_, _, _)) { implicit b =>
(highSink, mediumSink, lowSink) => {
import GraphDSL.Implicits._
val bcastInput = b.add(Broadcast[ByteString](1))
val bcastRawBytes = b.add(Broadcast[Array[Byte]](3))
val processHigh: Flow[Array[Byte], ByteString, NotUsed]
val processMedium: Flow[Array[Byte], ByteString, NotUsed]
val processLow: Flow[Array[Byte], ByteString, NotUsed]
bcastInput.out(0) ~> byteAcc ~> bcastRawBytes ~> processHigh ~> highSink
bcastRawBytes ~> processMedium ~> mediumSink
bcastRawBytes ~> processLow ~> lowSink
SinkShape(bcastInput.in)
}
})
Sink[ByteString, (Future[IOResult], Future[IOResult], Future[IOResult])]
Materialized values
Sink.fromGraph(GraphDSL.create(highRes, mediumRes, lowRes)((_, _, _)) { implicit b =>
(highSink, mediumSink, lowSink) => {
import GraphDSL.Implicits._
val bcastInput = b.add(Broadcast[ByteString](1))
val bcastRawBytes = b.add(Broadcast[Array[Byte]](3))
val processHigh: Flow[Array[Byte], ByteString, NotUsed]
val processMedium: Flow[Array[Byte], ByteString, NotUsed]
val processLow: Flow[Array[Byte], ByteString, NotUsed]
bcastInput.out(0) ~> byteAcc ~> bcastRawBytes ~> processHigh ~> highSink
bcastRawBytes ~> processMedium ~> mediumSink
bcastRawBytes ~> processLow ~> lowSink
SinkShape(bcastInput.in)
}
})
Things we didn’t have time for
• Integrating with Actors
• Buffering and throttling streams
• Defining custom Graph shapes and stages
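For the curious, buffering and throttling are one-liners on any Source or Flow. A sketch against the akka-stream 2.5+ API (signatures may vary slightly by version):

```scala
import scala.concurrent.duration._
import akka.actor.ActorSystem
import akka.stream.{OverflowStrategy, ThrottleMode}
import akka.stream.scaladsl.{Sink, Source}

implicit val system: ActorSystem = ActorSystem("throttle")

Source(1 to 100)
  .buffer(16, OverflowStrategy.backpressure)        // bounded buffer, back-pressures upstream when full
  .throttle(10, 1.second, 10, ThrottleMode.Shaping) // at most 10 elements per second downstream
  .runWith(Sink.ignore)
```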
Thanks for listening!
Concrete Mix Design - IS 10262-2019 - .pptx
 
Instrumentation, measurement and control of bio process parameters ( Temperat...
Instrumentation, measurement and control of bio process parameters ( Temperat...Instrumentation, measurement and control of bio process parameters ( Temperat...
Instrumentation, measurement and control of bio process parameters ( Temperat...
 
young call girls in Green Park🔝 9953056974 🔝 escort Service
young call girls in Green Park🔝 9953056974 🔝 escort Serviceyoung call girls in Green Park🔝 9953056974 🔝 escort Service
young call girls in Green Park🔝 9953056974 🔝 escort Service
 
Biology for Computer Engineers Course Handout.pptx
Biology for Computer Engineers Course Handout.pptxBiology for Computer Engineers Course Handout.pptx
Biology for Computer Engineers Course Handout.pptx
 
CCS355 Neural Networks & Deep Learning Unit 1 PDF notes with Question bank .pdf
CCS355 Neural Networks & Deep Learning Unit 1 PDF notes with Question bank .pdfCCS355 Neural Networks & Deep Learning Unit 1 PDF notes with Question bank .pdf
CCS355 Neural Networks & Deep Learning Unit 1 PDF notes with Question bank .pdf
 

Intro to Akka Streams

  • 2. Agenda • Reactive Streams • Why Akka Streams? • API Overview
  • 4. Reactive Streams

    public interface Publisher<T> {
        public void subscribe(Subscriber<? super T> s);
    }

    public interface Subscriber<T> {
        public void onSubscribe(Subscription s);
        public void onNext(T t);
        public void onError(Throwable t);
        public void onComplete();
    }

    public interface Processor<T, R> extends Subscriber<T>, Publisher<R> {
    }

    public interface Subscription {
        public void request(long n);
        public void cancel();
    }
  • 5. A standardised spec/contract to achieve asynchronous back-pressured stream processing.
  • 6. Standardised ? Gives us consistent interop between libraries and platforms that implement this spec.
  • 7.
  • 8. everything is async & back-pressured
  • 9. Reactive Streams Stream API Stream API Stream API
  • 10. Reactive Streams Stream API Stream API Stream API Users use this API
  • 11. Reactive Streams Stream API Stream API Stream API Users use this API Library authors use this API
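The four interfaces above are small enough to sketch in plain Scala. What follows is a toy, single-threaded rendition of the contract, not the real org.reactivestreams types (which additionally specify concurrency and error-handling rules): a Publisher that emits only as much as the Subscriber has requested.

```scala
trait Subscription {
  def request(n: Long): Unit
  def cancel(): Unit
}
trait Subscriber[T] {
  def onSubscribe(s: Subscription): Unit
  def onNext(t: T): Unit
  def onError(t: Throwable): Unit
  def onComplete(): Unit
}
trait Publisher[T] {
  def subscribe(s: Subscriber[T]): Unit
}

// A Publisher over a range that honours demand: it emits at most the
// number of elements requested so far, then waits for more demand.
class RangePublisher(from: Int, to: Int) extends Publisher[Int] {
  def subscribe(sub: Subscriber[Int]): Unit = {
    var next = from
    var done = false
    sub.onSubscribe(new Subscription {
      def request(n: Long): Unit = {
        var left = n
        while (left > 0 && next <= to && !done) {
          val v = next
          next += 1
          left -= 1
          sub.onNext(v)          // may re-entrantly request more
        }
        if (next > to && !done) { done = true; sub.onComplete() }
      }
      def cancel(): Unit = done = true
    })
  }
}

// A Subscriber that requests one element at a time: back-pressure in action.
val received = scala.collection.mutable.Buffer[Int]()
var completed = false
new RangePublisher(1, 5).subscribe(new Subscriber[Int] {
  var s: Subscription = null
  def onSubscribe(sub: Subscription): Unit = { s = sub; s.request(1) }
  def onNext(t: Int): Unit = { received += t; s.request(1) }
  def onError(t: Throwable): Unit = ()
  def onComplete(): Unit = completed = true
})
println(received.toList) // List(1, 2, 3, 4, 5)
```

The Subscriber, not the Publisher, sets the pace: nothing flows until request is called, which is the essence of the back-pressure discussion on the next slides.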
  • 13. • We know async IO from last week • But there are other types of async operations that cross different async boundaries • between applications • between threads • and over the network as we saw
  • 16. Think abstractly about these lines. “async boundary” This can be the network, or threads on the same CPU. Publisher[T] Subscriber[T]
  • 17. What problem are we trying to solve? Discrepancy in the rate of processing • Fast Publisher / Slow Subscriber • Slow Publisher / Fast Subscriber
  • 19. Publisher[T] Subscriber[T] 100 messages / 1 second 1 message / 1 second Fast Slow
  • 22. Publisher[T] Subscriber[T] has to keep track of messages to resend: not safe & complicated
  • 28. Publisher[T] Subscriber[T] publisher didn’t receive the NACK in time, so we lost that last message. Not safe.
  • 30. Publisher[T] Subscriber[T] 100 messages / 1 second 1 message / 1 second Fast Slow
  • 40. • Spam! • Redundant messaging -> flooding the connection • No buffer/batch support
  • 42. We have to take into account the following scenarios: • Fast Pub / Slow Sub • Slow Pub / Fast Sub Which can happen dynamically
  • 44. Publisher[T] Subscriber[T] Data Demand(n) Dynamic Push/Pull bounded buffers with no overflow demand can be accumulated batch processing -> performance
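The Demand(n) channel on this slide can be mimicked with ordinary Scala to show why the buffer stays bounded. This is an illustrative sketch, not an Akka API (the class and method names are made up): demand accumulates across request calls, and the publisher never emits more elements than the outstanding demand, so a fast publisher cannot overrun a slow subscriber.

```scala
// Toy model of dynamic push/pull: `request(n)` accumulates demand, and the
// publisher side emits at most that many elements as one batch.
class DemandDriven[T](source: Iterator[T]) {
  private var demand = 0L

  def request(n: Long): List[T] = {
    demand += n                              // demand can be accumulated
    val batch = List.newBuilder[T]
    while (demand > 0 && source.hasNext) {   // never exceed outstanding demand
      batch += source.next()
      demand -= 1
    }
    batch.result()                           // batching -> performance
  }
}

val stream = new DemandDriven(Iterator.from(1))
println(stream.request(3)) // List(1, 2, 3)       slow subscriber: small batch
println(stream.request(5)) // List(4, 5, 6, 7, 8) catches up: bigger batch
```

When the subscriber is slow it effectively pulls (small batches); when it catches up, accumulated demand lets the publisher push larger batches, which is the "dynamic" part of the model.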
  • 45. • Cool, let’s implement this using Actors! • We can, it’s possible … but should it be done?
  • 46. The problem(s) with Akka Actors
  • 48. Composition In FP this makes us warm and fuzzy:

    val f: A => B
    val g: B => C
    val h: A => C = f andThen g
  • 49. • Using Actors? • An Actor is aware of who sent it messages and where it must forward/reply them. • No compositionality without thinking about it explicitly.
  • 50. Data Flow • What are streams? Flows of data. • Imagine a 10 stage data pipeline you want to model • Now imagine writing that in Actors.
  • 51.
  • 52. • Following the flow of data in Actors requires jumping around all over the code base • Low level, error prone and hard to reason about
  • 54. Design Philosophy • Everything we will cover now are blueprints that describe the actions/effects they perform. • Reusability • Compositionality
  • 55. • “Design your program with a pure functional core, push side-effects to the end of the world and detonate to execute.” - some guy on stackoverflow
  • 56. • Publisher of data • Exactly one output Image from boldradius.com
  • 57. val singleSrc = Source.single(1)
    val iteratorSrc = Source.fromIterator(() => Iterator from 0)
    val futureSrc = Source.fromFuture(Future("abc"))
    val collectionSrc = Source(List(1, 2, 3))
    val tickSrc = Source.tick(initialDelay = 1 second, interval = 1 second, tick = "tick-tock")
    val requestSource = req.entity.dataBytes
  • 58. • Subscriber (consumer) of data • Describes where the data in our stream will go. • Exactly one input Image from boldradius.com
  • 59. Sink.head
    Sink.reduce[Int]((a, b) => a + b)
    Sink.fold[Int, Int](0)(_ + _)
    Sink.foreach[String](println)
    FileIO.toPath(Paths.get("file.txt"))
  • 60–63. val fold: Sink[Int, Future[Int]] = Sink.fold[Int, Int](0)(_ + _)
    Int is the input type; Future[Int] is the materialized type, available when the stream ‘completes’.
  • 64. val fold: Sink[Int, Future[Int]] = Sink.fold[Int, Int](0)(_ + _)
    val futureRes: Future[Int] = Source(1 to 10).runWith(fold)
    futureRes.foreach(println) // 55
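What "materialized" buys can be seen in miniature with just the standard library. The sketch below is my analogue (no Akka involved; runFold is an invented helper): running the fold hands back a Future that completes with the result, mirroring Source(1 to 10).runWith(fold).

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

// Plain-Scala analogue of `Source(1 to 10).runWith(Sink.fold(0)(_ + _))`:
// running the "stream" yields a materialized Future that completes with
// the fold result once all elements have been consumed.
def runFold[A, B](source: Seq[A])(zero: B)(f: (B, A) => B): Future[B] =
  Future(source.foldLeft(zero)(f))

val futureRes: Future[Int] = runFold(1 to 10)(0)(_ + _)
println(Await.result(futureRes, 1.second)) // 55
```

The point of the analogy: the Sink's description (the fold) is reusable and inert; only running it produces the materialized value.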
  • 65. So I can get data from somewhere and I can put data somewhere else. But I want to do something with it.
  • 66. • A processor of data • Has one input and one output Image from boldradius.com
  • 67. val double: Flow[Int, Int, NotUsed] = Flow[Int].map(_ * 2)
  • 68. val src = Source(1 to 10)
    val double = Flow[Int].map(_ * 2)
    val negate = Flow[Int].map(_ * -1)
    val print = Sink.foreach[Int](println)
    val graph = src via double via negate to print
    graph.run() // -2 -4 -6 -8 -10 -12 -14 -16 -18 -20
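Since via chains linear stages, the pipeline above can be modelled with plain function composition. A sketch to make the analogy concrete; the Flow alias here is a bare function type I defined, not akka.stream.scaladsl.Flow:

```scala
// Linear flows compose like functions: `via` corresponds to `andThen`.
type Flow[A, B] = A => B

val double: Flow[Int, Int] = _ * 2
val negate: Flow[Int, Int] = _ * -1

// src via double via negate, as function composition over a plain range:
val pipeline: Flow[Int, Int] = double.andThen(negate)

val results = (1 to 10).map(pipeline)
println(results.toList) // List(-2, -4, -6, -8, -10, -12, -14, -16, -18, -20)
```

This is also why the next slide can say a Flow is immutable and freely shareable: like a pure function, it describes a transformation without owning any running state.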
  • 69. • Flow is immutable, thread-safe, and thus freely shareable
  • 70. • Are linear flows enough? • No, we want to be able to describe arbitrarily complex steps in our pipelines
  • 72. Flow
  • 73. Graph
  • 74. • We define multiple linear flows and then use the Graph DSL to connect them. • We can combine multiple streams - fan in • Split a stream into substreams - fan out
  • 78. Some sort of video uploading service - Stream in video - Process it - Store it
  • 79. [Graph diagram] ByteString input -> bcast -> Convert to Array[Byte] flow -> bcast -> three flows (Process High Res, Process Med Res, Process Low Res), each into its own sink
  • 80–86. Our custom Sink:

    Sink.fromGraph(GraphDSL.create(highRes, mediumRes, lowRes)((_, _, _)) {
      implicit b => (highSink, mediumSink, lowSink) => {
        import GraphDSL.Implicits._

        val bcastInput    = b.add(Broadcast[ByteString](1))
        val bcastRawBytes = b.add(Broadcast[Array[Byte]](3))

        val processHigh:   Flow[Array[Byte], ByteString, NotUsed]
        val processMedium: Flow[Array[Byte], ByteString, NotUsed]
        val processLow:    Flow[Array[Byte], ByteString, NotUsed]

        bcastInput.out(0) ~> byteAcc ~> bcastRawBytes ~> processHigh   ~> highSink
                                        bcastRawBytes ~> processMedium ~> mediumSink
                                        bcastRawBytes ~> processLow    ~> lowSink

        SinkShape(bcastInput.in)
      }
    })

    • Takes 3 Sinks, which can be Files, DBs, etc.
    • Has one input of type ByteString
    • Describes 3 processing stages: Flows of Array[Byte] => ByteString
    • Emits results to the 3 Sinks
    • Materializes as Sink[ByteString, (Future[IOResult], Future[IOResult], Future[IOResult])] — the three Futures are the materialized values
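The Broadcast fan-out at the heart of that graph can be imitated in a few lines of ordinary Scala. Purely illustrative: broadcast here is a helper I made up, not the GraphDSL junction; the buffers stand in for the file/DB sinks.

```scala
// Fan-out sketch: every element is delivered to all branches, and each
// branch ends in its own sink (a List here, where the graph used files).
def broadcast[A, B](elems: Seq[A])(branches: (A => B)*): Seq[List[B]] =
  branches.map(f => elems.map(f).toList)

// Three "resolutions" computed from the same raw input.
val resolutions = broadcast(Seq(100, 200))(b => b, b => b / 2, b => b / 4)
val high   = resolutions(0)
val medium = resolutions(1)
val low    = resolutions(2)

println(high)   // List(100, 200)
println(medium) // List(50, 100)
println(low)    // List(25, 50)
```

Each input element reaches all three branches, which is exactly what the bcastRawBytes junction does for the high/medium/low processing flows.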
  • 87. Things we didn’t have time for • Integrating with Actors • Buffering and throttling streams • Defining custom Graph shapes and stages