We now live in a world with data at its heart. The amount of data produced every day is growing exponentially, and much of it takes the form of events. Whether it is updates from sensors, clicks on a website or even tweets, applications are bombarded with a never-ending stream of new events. So how can we architect our applications to be more reactive and resilient to these fluctuating loads, and better manage our thirst for data? In this session, explore how Kafka and reactive application architecture can be combined to better handle our modern data needs.
Start with our “journey” to microservices
How do you architect your microservices so that your clients get a nice experience
Talk about response time, data-driven level first, but we live in an event-driven world!
Start with demo showing http vs Kafka (video)
Show that Kafka is much quicker
Why? – delve into code? Show that we are being event-driven, what does that mean, why is it quicker (timeout diagrams)
Talk about data centric vs event-centric
But this is only looking at the architecture, what is happening inside your microservices
Leads into reactive intro
Why reactive architecture exists, how it fits into Kafka, what are the cornerstones
What happens if we set up Kafka in a non-reactive way?
Ok let’s fix it so it is reactive, and now switch to a reactive app.
At the end, running Kafka in reactive way and implementing with vertx, includes showing the vertx Kafka client etc.
Run app in a container?
Options for Kafka on Kube
End resources
If we are non-resilient or non-elastic, we could have a failure at some point
Non-resilient – only replicating on one broker
Non-elastic – how does vertx do elasticity?
First app is a basic Kafka client app, then later introduce vertx
Every second,
~ 6,000 tweets are tweeted
>40,000 Google queries are searched
>2 million emails are sent
Photo uploads total 300 million per day.
Emphasizing how much data applications are expected to handle
Also impact in terms of fluctuation e.g. black Friday
Also people wanting to have split second responsiveness
Banking apps -> needing up to date information
(Internet Live Stats, a website of the international Real Time Statistics Project)
Event Driven Architecture (EDA) is a popular architectural approach that enables events to be placed at the heart of our systems
Consists of Events
Events are records of something that has happened, a change in state - immutable and are ordered in sequence of their creation.
Interested parties can be notified of these state changes by subscribing to published events and then acting on information using their chosen business logic.
An event-driven architecture refers to a system of loosely coupled microservices that exchange information between each other through the production and consumption of events.
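To make "events are immutable records of something that has happened" concrete, here is a minimal sketch in Java; the event name and fields are invented for the example, not from the talk:

```java
import java.time.Instant;

public class EventExample {
    // A record is shallowly immutable: its state is fixed at creation,
    // matching the idea of an event as an unchangeable fact.
    record OrderPlaced(String orderId, String customerId, Instant occurredAt) {}

    public static void main(String[] args) {
        OrderPlaced event = new OrderPlaced("order-1", "customer-42", Instant.now());
        System.out.println(event.orderId()); // order-1
    }
}
```

The `occurredAt` timestamp is what lets interested parties keep events ordered in sequence of their creation.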
Use this as an example: there are plenty of existing demos showing that using event-driven rather than e.g. HTTP is much better
Clement’s session C3 – 4pm
It is possible to do HTTP requests without blocking the thread, but even with that switch you are still approaching from a request/response perspective
No! This isn’t reasonable!
Kafka is a good tool, but it isn’t enough to have a good tool, you need to use it in the right way
You also need to think about your applications and other services, Kafka isn’t your whole architecture – integration between components is key!
Can we just use Kafka to create a reactive application? Short answer: no. While Kafka looks after the messaging part, we still need a reactive microservice implementation, for instance, using the actor model to replace thread synchronization with queued message processing, or the supervisor model to handle failures and self-healing. We need both Akka and Kafka to build responsive, resilient and elastic systems from reactive microservices.
Kafka = gives us a reactive data layer
Reactive architecture patterns = give us reactivity in the architecture of the system
Reactive programming = gives us reactivity within the microservices
(Designed well together)
A reactive system is an architectural style that allows multiple individual applications to coalesce as a single unit, reacting to their surroundings while remaining aware of each other; this could manifest as being able to scale up/down, load balancing, and even taking some of these steps proactively.
It’s possible to write a single application in a reactive style (i.e. using reactive programming); however, that’s merely one piece of the puzzle. Though each of the above aspects may seem to qualify as “reactive,” in and of themselves they do not make a system reactive.
Asynchronous code allows independent IO operations to run concurrently, resulting in efficient code. However, this improved efficiency comes at a cost — straightforward synchronous code may become a mess of nested callbacks.
Futures enable us to combine the simplicity of synchronous code with the efficiency of the asynchronous approach. A Future represents the result of an asynchronous computation. Methods are provided to check if the computation is complete, to wait for its completion, and to retrieve the result of the computation.
A Publisher is the source of events T in the stream, and a Subscriber is a consumer of those events. A Subscriber subscribes to a Publisher by invoking a “factory method” on the Publisher, which will push the stream items <T> via a new Subscription. This is the model defined by Reactive Streams.
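The Publisher/Subscriber flow described above can be sketched with the JDK’s built-in Flow API (`java.util.concurrent.Flow`, Java 9+); a minimal example, with the `request(1)` calls providing back pressure:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;

public class FlowExample {
    // Pushes the given items through a Publisher/Subscriber pair and
    // returns what the Subscriber received, in order.
    static List<String> collect(String... items) throws InterruptedException {
        List<String> received = new ArrayList<>();
        CountDownLatch done = new CountDownLatch(1);
        try (SubmissionPublisher<String> publisher = new SubmissionPublisher<>()) {
            publisher.subscribe(new Flow.Subscriber<String>() {
                private Flow.Subscription subscription;
                @Override public void onSubscribe(Flow.Subscription s) {
                    subscription = s;
                    s.request(1);            // back pressure: request one item at a time
                }
                @Override public void onNext(String item) {
                    received.add(item);
                    subscription.request(1); // pull the next item when ready
                }
                @Override public void onError(Throwable t) { done.countDown(); }
                @Override public void onComplete() { done.countDown(); }
            });
            for (String item : items) publisher.submit(item);
        } // close() completes the stream, triggering onComplete
        done.await();
        return received;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(collect("event-1", "event-2")); // [event-1, event-2]
    }
}
```

The subscriber only ever asks for what it can handle; a fast publisher cannot flood a slow consumer.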
Reactive Manifesto 2.0
Reactive architecture is an architectural approach that aims to use asynchronous messaging or event-driven architecture to build responsive, resilient and elastic systems.
Reactive microservices capitalize on the reactive approach while supporting faster time to market using microservices.
Reactive microservices use asynchronous messaging to minimize or isolate the negative effects of resource contention, coherency delays and inter-service communication network latency.
By using an event-driven architecture we can have both agile development and responsive systems.
Reactive adopts a set of design patterns such as:
- CQRS – separates the reads and writes
- Event sourcing – persists the state of a business entity as a sequence of state-changing events. Whenever the state of a business entity changes, a new event is appended to the list of events.
- Saga – a mechanism to take traditional transactions that we would have done in a monolithic architecture and do them in a distributed way. We create multiple “micro” transactions that have fallback behaviour to account for things going wrong part way through. It’s a sequence of local transactions where each transaction updates data within a single service.
- Sharding – distributes and replicates the data across a pool of databases that do not share hardware or software. Each individual database is known as a shard. Applications can linearly scale up or down by adding shards to the pool or removing shards from it.
These patterns trade strong consistency for eventual consistency, availability and scalability (CAP theorem). Kafka is a perfect fit for these design patterns.
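As a sketch of the event-sourcing pattern described above: state is never updated in place, it is rebuilt by replaying an append-only event log. The account/event names are invented for the example:

```java
import java.util.ArrayList;
import java.util.List;

public class EventSourcingSketch {
    // Events are immutable facts; these names are illustrative only.
    sealed interface AccountEvent permits Deposited, Withdrawn {}
    record Deposited(long amount) implements AccountEvent {}
    record Withdrawn(long amount) implements AccountEvent {}

    // The current balance is derived state: rebuild it by replaying the log.
    static long replay(List<AccountEvent> log) {
        long balance = 0;
        for (AccountEvent e : log) {
            if (e instanceof Deposited d) balance += d.amount();
            else if (e instanceof Withdrawn w) balance -= w.amount();
        }
        return balance;
    }

    public static void main(String[] args) {
        List<AccountEvent> log = new ArrayList<>();
        log.add(new Deposited(100));   // append-only: new events, never updates
        log.add(new Withdrawn(30));
        System.out.println(replay(log)); // 70
    }
}
```

An append-only log of events is exactly the shape of a Kafka topic, which is why Kafka fits this pattern so naturally.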
So Kafka claims to have scalable consumption and resiliency, do I just get that for free when I start Kafka? How does it work?
Talking about message driven vs event driven
Ultimately, the authors of the Reactive Manifesto believed that by switching from event-driven to message-driven they could more accurately articulate and define the other traits.
The difference is that messages are directed and events are not: a message has a clear addressable recipient, while an event just happens for others (0-N) to observe.
Open-source distributed streaming platform, often adopted as the “de facto” event streaming technology
Arrived at the right time, captured mindshare among developers and so exploded in popularity
Kafka has deliberately moved away from the word “events”… instead uses records now
A Kafka cluster consists of a set of brokers.
A production cluster typically has a minimum of 3 brokers.
Data in Kafka is broken down into topics
Records on a topic split into different partitions
Partitions distributed across Kafka brokers
For each partition, one of the brokers is the leader, and the other brokers are the followers.
Replication works by the followers repeatedly fetching messages from the leader. This is done automatically by Kafka.
For production we recommend at least 3 replicas: you’ll see why in a minute.
Imagine a broker goes down, this means the leader of Topic A, partition 1 is offline
Can’t do fire and forget if you want full resiliency, because if the broker goes down your messages get lost
Two different guarantees; the way you get them is through configuration
At most once: you may lose some messages (not completely resilient)
At least once: guaranteed delivery, but you may get duplicates
Retries covers the case where acks times out/fails – how many times do you retry producing the event (and how will the retry affect ordering?)
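The delivery guarantees above come down to producer configuration. A hedged sketch of the relevant config keys (these are standard Kafka producer properties; the bootstrap address is illustrative, and the `Properties` object would normally be passed to `new KafkaProducer<>(props)`):

```java
import java.util.Properties;

public class ProducerConfigSketch {
    // Standard Kafka producer config keys; the values here are illustrative.
    static Properties producerConfig() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // illustrative address
        // acks=all: wait for the leader and all in-sync replicas to
        // acknowledge each record. Combined with retries this gives
        // at-least-once delivery, at the cost of latency.
        props.put("acks", "all");
        // How many times a failed/timed-out send is retried. Retries can
        // reorder records unless in-flight requests are limited to 1
        // (or idempotence is enabled).
        props.put("retries", "3");
        props.put("max.in.flight.requests.per.connection", "1");
        // By contrast, acks=0 is fire and forget: at-most-once, and
        // messages are lost if the broker goes down.
        return props;
    }

    public static void main(String[] args) {
        System.out.println(producerConfig().getProperty("acks")); // all
    }
}
```

Limiting in-flight requests to 1 is the simple way to keep ordering under retries; idempotent producers are the more modern alternative.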
Replacing scalable with elastic… Truly reactive systems should react to changes in the input rate by increasing or decreasing the resources allocated to service these inputs, not just expand according to usage (which is the definition of scalable).
Three different things to consider – Kafka itself, the consumers and the producers
Elasticity in Kafka itself
Scale out brokers, can’t scale down (where do events go if you did?)
Can scale out partitions but can’t scale them down again
Can add topics, and delete topics if you don’t care about them
To allow scalability of consumers, consumers are grouped into consumer groups. Consumers declare what group they are in using a group id
For consumers we use consumer groups to enable elasticity
If you added an extra consumer to consumer group A it would sit idle, since there aren’t any spare partitions – this isn’t ideal, but could be useful if you want it to quickly pick up the slack if one of the other consumers went down
Key message – you can scale consumers up and down – CAVEAT! You can only scale up consumers to match the number of partitions
So for Black Friday, make sure you have enough partitions!
If you scale up consumers to more than partitions, you’ll have some sitting idle; the only use of this is that if a consumer goes down you have a backup
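A sketch of how a consumer declares its group, assuming standard Kafka consumer properties (the group name and bootstrap address are invented for the example):

```java
import java.util.Properties;

public class ConsumerConfigSketch {
    // Standard Kafka consumer config keys; the values here are illustrative.
    static Properties consumerConfig() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // illustrative address
        // Consumers sharing a group.id split a topic's partitions between
        // them: scaling the group up (at most to the partition count)
        // rebalances partitions across members, scaling down does the reverse.
        props.put("group.id", "consumer-group-a");
        // earliest: a brand-new group starts from the beginning of each partition.
        props.put("auto.offset.reset", "earliest");
        return props;
    }

    public static void main(String[] args) {
        // These properties would be passed to new KafkaConsumer<>(props),
        // followed by subscribe(...) and a poll loop.
        System.out.println(consumerConfig().getProperty("group.id")); // consumer-group-a
    }
}
```

A rebalance happens automatically whenever a member joins or leaves the group, which is what makes the scale-up/scale-down story work.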
Applications using Kafka as a message bus using this API may consider switching to Reactor Kafka if the application is implemented in a functional style.
Based on top of Project Reactor
Uses Kafka Java client (Kafka Producer/Consumer API) under the hood
The actor model is a conceptual model to deal with concurrent computation.
An actor is the primitive unit of computation: it receives a message and does some kind of computation based on it.
Messages are sent asynchronously to an actor, which needs to store them somewhere while it’s processing another message. The mailbox is the place where these messages are stored.
Actors communicate with each other by sending asynchronous messages. Those messages are stored in other actors' mailboxes until they're processed.
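A minimal, illustrative actor with a mailbox can be sketched in plain Java: one thread drains a queue, so messages are processed one at a time and the actor’s state needs no locking. This is a toy to show the idea, not how Akka implements actors:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class ActorSketch {
    // One mailbox, one thread draining it: queued message processing
    // replaces thread synchronization on the actor's state.
    static class EchoActor {
        private final BlockingQueue<String> mailbox = new LinkedBlockingQueue<>();
        private final Thread loop;
        private final StringBuilder state = new StringBuilder(); // touched only by the loop thread

        EchoActor() {
            loop = new Thread(() -> {
                try {
                    while (true) {
                        String msg = mailbox.take(); // block until a message arrives
                        if (msg.equals("STOP")) return;
                        state.append(msg);           // safe: strictly sequential processing
                    }
                } catch (InterruptedException ignored) {}
            });
            loop.start();
        }

        void tell(String msg) { mailbox.add(msg); } // asynchronous send

        String stopAndGet() throws InterruptedException {
            mailbox.add("STOP");
            loop.join(); // join gives happens-before, so state is safely visible
            return state.toString();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        EchoActor actor = new EchoActor();
        actor.tell("a");
        actor.tell("b");
        System.out.println(actor.stopAndGet()); // ab
    }
}
```

Because the mailbox is FIFO and processing is single-threaded, the actor sees messages in send order with no races.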
It allows consuming/producing from Kafka with Akka Streams, leveraging the reactive interface of this streaming library, its backpressure, and resource safety. It hides a lot of complexity, especially when your streaming logic is non-trivial like sub-streaming per partition and handling commits in custom ways.
Polyglot: Java, JavaScript, Groovy, Ceylon, Scala and Kotlin
The reactor pattern is one implementation technique of event-driven architecture. In simple terms, it uses a single-threaded event loop that blocks on resource-emitting events and dispatches them to the corresponding handlers and callbacks.
It receives messages, requests and connections coming from multiple concurrent clients and processes them sequentially using event handlers. The purpose of the reactor pattern is to avoid the common problem of creating a thread for each message, request and connection.
It’s single-threaded – so you must not block the thread!
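A toy single-threaded event loop illustrating the dispatch idea; the event types and handlers are invented for the example:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Queue;
import java.util.function.Consumer;

public class EventLoopSketch {
    // Drains queued events sequentially on one thread, dispatching each to
    // its handler; returns a log of what the handlers did.
    static List<String> run(Queue<String[]> events) { // each event is [type, payload]
        List<String> log = new ArrayList<>();
        Map<String, Consumer<String>> handlers = new HashMap<>();
        handlers.put("click", payload -> log.add("clicked " + payload));
        handlers.put("tweet", payload -> log.add("tweeted " + payload));

        // The "event loop": single-threaded, so handlers must never block,
        // or every event queued behind them stalls.
        String[] event;
        while ((event = events.poll()) != null) {
            handlers.getOrDefault(event[0], p -> {}).accept(event[1]);
        }
        return log;
    }

    public static void main(String[] args) {
        Queue<String[]> events = new ArrayDeque<>();
        events.add(new String[] {"click", "buy-button"});
        events.add(new String[] {"tweet", "hello"});
        System.out.println(run(events)); // [clicked buy-button, tweeted hello]
    }
}
```

Vert.x’s event loop works on the same principle, which is exactly why blocking inside a handler is forbidden there.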
The Kafka client is becoming more popular; for example, it is used by SmallRye Reactive Messaging
Reactive adopts a set of design patterns such as:
- CQRS
- Event sourcing
- Command sourcing
- Sharding
These patterns trade strong consistency for eventual consistency, availability and scalability (CAP theorem). Kafka is a perfect fit for these design patterns.
Demo the starter app working
Key takeaways:
Choosing a reactive framework makes it easier to work with Kafka
Strimzi, a cool open source project that provides a Kubernetes operator for Kafka, has just been accepted into the CNCF (Cloud Native Computing Foundation)
Kate is an active contributor to Strimzi, and I was interested in Vert.x
Show/talk about the normal way to use the Kafka clients
Eclipse MicroProfile is an open-source community specification for Enterprise Java microservices
A community of individuals, organizations, and vendors collaborating within an open source (Eclipse) project to bring microservices to the Enterprise Java community
The role of the MicroProfile Reactive Messaging specification is to deliver a way to build systems of microservices promoting both location transparency and temporal decoupling, enforcing asynchronous communication between the different parts of the system
The MicroProfile Reactive Messaging specification aims to deliver applications embracing the characteristics of reactive systems
MicroProfile Reactive Messaging makes use of and interoperates with two other specifications:
Reactive Streams is a specification for doing asynchronous stream processing with back pressure. It defines a minimal set of interfaces to allow components which do this sort of stream processing to be connected together.
MicroProfile Reactive Streams Operators is a MicroProfile specification which builds on Reactive Streams to provide a set of basic operators to link different reactive components together and to perform processing on the data which passes between them.
When you use the MicroProfile Reactive Messaging @Incoming and @Outgoing annotations, Open Liberty creates a Reactive Streams component for each method and joins them up by matching the channel names.
CDI beans are classes that CDI can instantiate, manage, and inject automatically to satisfy the dependencies of other objects. Almost any Java class can be managed and injected by CDI.
A method with an @Incoming annotation consumes messages from a channel.
A method with an @Outgoing annotation publishes messages to a channel.
A method with both an @Incoming and an @Outgoing annotation is a message processor, it consumes messages from a channel, does some transformation to them, and publishes messages to another channel.
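A minimal processor bean might look like the following framework sketch. It assumes the MicroProfile Reactive Messaging API is on the classpath (the `javax` CDI package in older releases, `jakarta` in newer ones), and the channel names and class are invented for the example:

```java
import javax.enterprise.context.ApplicationScoped;

import org.eclipse.microprofile.reactive.messaging.Incoming;
import org.eclipse.microprofile.reactive.messaging.Outgoing;

// Hypothetical message processor: consumes from the "prices" channel,
// transforms each payload, and publishes the result to "rounded-prices".
// The runtime (e.g. Open Liberty) wires the channels together by name.
@ApplicationScoped
public class PriceRounder {

    @Incoming("prices")
    @Outgoing("rounded-prices")
    public long round(double price) {
        return Math.round(price); // the transformation step in the chain
    }
}
```

Because the bean is a plain CDI-managed class, the runtime handles subscription, back pressure and acknowledgement around this method; the application code stays a simple function.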
Internal channels are local to the application. They allow implementing multi-step processing where several beans from the same application form a chain of processing
Reactive Messaging connectors are responsible for mapping a specific channel to a remote sink or source of messages. This mapping is configured in the application configuration.
You can create your own connectors - The Reactive Messaging specification provides an SPI to implement connectors.
SPI = Service Provider Interface
IBM Event Streams is fully supported Apache Kafka® with value-add capabilities