We now live in a world with data at its heart. The amount of data being produced every day is growing exponentially and a large amount of this data is in the form of events. Whether it be updates from sensors, clicks on a website or even tweets, applications are bombarded with a never-ending stream of new events. So, how can we architect our applications to be more reactive and resilient to these fluctuating loads and better manage our thirst for data? In this session explore how Kafka and Reactive application architecture can be combined in applications to better handle our modern data needs.
Use this as an example: there are plenty of existing demos showing that an event-driven approach works much better than e.g. HTTP request/response
Clement’s session C3 – 4pm
It is possible to do HTTP requests without blocking the thread, but even with that switch you are still approaching the problem from a request/response perspective
Event Driven Architecture (EDA) is a popular architectural approach that enables events to be placed at the heart of our systems
Consists of Events
Events are records of something that has happened, a change in state. They are immutable and ordered in the sequence of their creation.
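As a sketch of the idea (not from the session), an immutable event might look like this in Java, using a record so the state cannot change after creation; the `SensorReading` name and fields are made up for illustration:

```java
import java.time.Instant;

public class EventExample {
    // A minimal sketch of an immutable event: record components are final and
    // set once at creation; there are no setters, so state cannot change.
    record SensorReading(String sensorId, double value, Instant createdAt) { }

    public static void main(String[] args) {
        SensorReading event = new SensorReading("sensor-1", 21.5, Instant.now());
        // Events are ordered by their creation time; here we just read back the fields.
        System.out.println(event.sensorId() + " -> " + event.value());
    }
}
```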
Interested parties can be notified of these state changes by subscribing to published events and then acting on information using their chosen business logic.
An event-driven architecture refers to a system of loosely coupled microservices that exchange information with each other through the production and consumption of events.
No! This isn’t reasonable!
Kafka is a good tool, but it isn’t enough to have a good tool, you need to use it in the right way
You also need to think about your applications and other services, Kafka isn’t your whole architecture – integration between components is key!
Can we just use Kafka to create a Reactive application? Short answer: no. While Kafka looks after the messaging part, we still need a Reactive microservice implementation, for instance using the actor model to replace thread synchronization with queued message processing, or the supervisor model to handle failures and self-healing. We need both, e.g. Akka and Kafka, to build responsive, resilient and elastic systems out of Reactive microservices.
So Kafka claims to have scalable consumption and resiliency, do I just get that for free when I start Kafka? How does it work?
Talking about message driven vs event driven
Open-source distributed streaming platform, often adopted as the de facto event streaming technology
Arrived at the right time, captured mindshare among developers and so exploded in popularity
Kafka has deliberately moved away from the word “events”; it uses “records” instead now
Kafka = gives us the reactive data layer
Reactive architecture patterns = give us reactivity in the architecture of the system
Reactive programming = gives us reactivity within the microservices
A reactive system is an architectural style that allows multiple individual applications to coalesce as a single unit, reacting to their surroundings while remaining aware of each other. This could manifest as being able to scale up/down, load balance, and even take some of these steps proactively.
It’s possible to write a single application in a reactive style (i.e. using reactive programming); however, that’s merely one piece of the puzzle. Though each of the above aspects may seem to qualify as “reactive,” in and of themselves they do not make a system reactive.
(Designed well together)
Reactive adopts a set of design patterns such as:
- CQRS: separates the reads and writes.
- Event Sourcing: persists the state of a business entity, such as an Order or a Customer, as a sequence of state-changing events. Whenever the state of the entity changes, a new event is appended to the list of events.
- Saga: a mechanism to take the traditional transactions we would have used in a monolithic architecture and perform them in a distributed way. We create multiple “micro” transactions that have fallback behaviour to account for things going wrong part way through. It is a sequence of local transactions where each transaction updates data within a single service.
- Sharding: distributes and replicates the data across a pool of databases that do not share hardware or software. Each individual database is known as a shard. Java applications can linearly scale up or down by adding databases (shards) to the pool or removing them.
These patterns trade strong consistency for eventual consistency, availability and scalability (CAP theorem). Kafka is a good fit for these design patterns.
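A minimal sketch of the Event Sourcing idea, assuming a hypothetical order entity: the current state is never stored directly, it is derived by replaying the append-only event log:

```java
import java.util.List;

public class EventSourcingSketch {
    // Hypothetical order events; in Event Sourcing, current state is derived
    // by replaying the append-only event log from the beginning.
    interface OrderEvent { }
    record Created(String orderId) implements OrderEvent { }
    record ItemAdded(String item) implements OrderEvent { }
    record ItemRemoved(String item) implements OrderEvent { }

    // Fold over the log: each event mutates the derived state in turn.
    static int itemCount(List<OrderEvent> log) {
        int count = 0;
        for (OrderEvent e : log) {
            if (e instanceof ItemAdded) count++;
            else if (e instanceof ItemRemoved) count--;
        }
        return count;
    }

    public static void main(String[] args) {
        var log = List.of(new Created("o-1"), new ItemAdded("book"),
                          new ItemAdded("pen"), new ItemRemoved("pen"));
        System.out.println(itemCount(log)); // replaying the log yields 1 item
    }
}
```

Because the log is immutable and ordered, the same replay always yields the same state, which is what makes Kafka topics such a natural home for event-sourced entities.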
Asynchronous code allows independent IO operations to run concurrently, resulting in efficient code. However, this improved efficiency comes at a cost — straightforward synchronous code may become a mess of nested callbacks.
Futures - Enables us to combine the simplicity of synchronous code with the efficiency of the asynchronous approach. Future represents the result of an asynchronous computation. Methods are provided to check if the computation is complete, to wait for its completion, and to retrieve the result of the computation.
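A small illustration using Java's `CompletableFuture`: the computation runs asynchronously, yet the chained code reads almost like synchronous code:

```java
import java.util.concurrent.CompletableFuture;

public class FutureSketch {
    // supplyAsync starts the computation on another thread; thenApply chains
    // a transformation onto the result without blocking the caller.
    static CompletableFuture<Integer> doubleAsync(int n) {
        return CompletableFuture.supplyAsync(() -> n)
                                .thenApply(x -> x * 2);
    }

    public static void main(String[] args) {
        // join() waits for completion and retrieves the result.
        System.out.println(doubleAsync(20).join()); // prints 40
    }
}
```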
A Publisher is the source of events <T> in the stream, and a Subscriber is a consumer of those events. A Subscriber subscribes to a Publisher by invoking the Publisher's subscribe method, which starts a new Subscription through which the Publisher pushes the stream items <T>.
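This Publisher/Subscriber protocol can be sketched with Java's built-in `java.util.concurrent.Flow` interfaces (the JDK's adoption of the Reactive Streams interfaces); `SubmissionPublisher` here stands in for any event source:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;

public class FlowSketch {
    // Publishes the given items through a Flow pipeline and returns what the
    // Subscriber received, in order.
    static List<String> publishAndCollect(String... items) throws InterruptedException {
        List<String> received = Collections.synchronizedList(new ArrayList<>());
        CountDownLatch done = new CountDownLatch(1);
        try (SubmissionPublisher<String> publisher = new SubmissionPublisher<>()) {
            publisher.subscribe(new Flow.Subscriber<String>() {
                private Flow.Subscription subscription;
                public void onSubscribe(Flow.Subscription s) {
                    subscription = s;
                    s.request(1); // backpressure: ask for one item at a time
                }
                public void onNext(String item) {
                    received.add(item);
                    subscription.request(1); // ready for the next item
                }
                public void onError(Throwable t) { done.countDown(); }
                public void onComplete() { done.countDown(); }
            });
            for (String item : items) publisher.submit(item);
        } // close() signals onComplete once all submitted items are delivered
        done.await();
        return received;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(publishAndCollect("event-1", "event-2"));
    }
}
```

The `request(1)` calls are the backpressure mechanism: the Subscriber, not the Publisher, controls how fast items flow.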
A Kafka cluster consists of a set of brokers.
For production, a cluster should have a minimum of 3 brokers.
Data in Kafka is broken down into topics
Records on a topic are split into different partitions
Partitions are distributed across the Kafka brokers
For each partition, one of the brokers is the leader, and the other brokers are the followers.
Replication works by the followers repeatedly fetching messages from the leader. This is done automatically by Kafka.
For production we recommend at least 3 replicas: you’ll see why in a minute.
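The key-to-partition idea above can be sketched as follows. Note this is a simplification: Kafka's real default partitioner uses murmur2 hashing rather than `hashCode`, but the principle (same key, same partition, hence per-key ordering) is the same:

```java
public class PartitionSketch {
    // Simplified sketch of key-based partition selection: records with the
    // same key always land on the same partition, preserving per-key ordering.
    static int partitionFor(String key, int numPartitions) {
        // Mask off the sign bit so the modulo result is non-negative.
        return (key.hashCode() & 0x7fffffff) % numPartitions;
    }

    public static void main(String[] args) {
        int p1 = partitionFor("customer-42", 3);
        int p2 = partitionFor("customer-42", 3);
        System.out.println(p1 == p2); // same key -> same partition, every time
    }
}
```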
Imagine a broker goes down, this means the leader of Topic A, partition 1 is offline
You can't do fire-and-forget if you want full resiliency, because if the broker goes down your messages get lost
Two different delivery guarantees; the way you get them is through configuration
At most once: you may lose some messages (not completely resilient)
At least once: guaranteed delivery, but you may get duplicates
retries controls how many times you retry producing the event if the ack times out or fails (consider how the retry will affect ordering)
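As a sketch, at-least-once delivery is typically configured on the producer with standard Kafka config properties like these (the broker address is a placeholder):

```java
import java.util.Properties;

public class ProducerConfigSketch {
    // Standard Kafka producer configs for at-least-once delivery.
    static Properties atLeastOnceConfig() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker address
        props.put("acks", "all");   // wait until all in-sync replicas have the record
        props.put("retries", "3");  // how many times to retry if the ack fails or times out
        // Retries can reorder records; capping in-flight requests preserves ordering.
        props.put("max.in.flight.requests.per.connection", "1");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(atLeastOnceConfig());
    }
}
```

Dropping `acks` to `1` or `0` shifts you toward at-most-once: faster, but messages can be lost if a broker fails.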
Elasticity in Kafka itself
Scale out brokers, can’t scale down (where do events go if you did?)
Can scale out partitions but can’t scale them down again
Can add topics, and delete topics if you don’t care about them
To allow scalability of consumers, consumers are grouped into consumer groups. Consumers declare which group they are in using a group id
For consumers we use Consumer groups to enable elasticity
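A sketch of how a consumer declares its group: it is just the `group.id` property in the consumer configuration (group name and broker address here are placeholders):

```java
import java.util.Properties;

public class ConsumerGroupSketch {
    // Consumers that share the same group.id form one consumer group; Kafka
    // splits the topic's partitions between the members of that group.
    static Properties groupConfig(String groupId) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker address
        props.put("group.id", groupId);
        props.put("key.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(groupConfig("order-processors").getProperty("group.id"));
    }
}
```

Starting a second consumer with the same `group.id` triggers a rebalance and the partitions are shared out again: that is the elasticity mechanism for consumers.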
All of these frameworks take on the role of handling polling, then provide different ways for us to handle the processing and flow. MicroProfile and Alpakka provide a more constrained way of doing backpressure and flow control, whereas Vert.x provides the tools for that separation but leaves it up to the developer to decide exactly how to implement it.
We will describe flow control for MicroProfile and Alpakka: how they improve flow control and allow the application to follow the Reactive Streams spec
The actor model is a conceptual model to deal with concurrent computation.
An actor is the primitive unit of computation. It's the thing that receives a message and does some kind of computation based on it.
Messages are sent asynchronously to an actor, which needs to store them somewhere while it's processing another message. The mailbox is the place where these messages are stored.
Actors communicate with each other by sending asynchronous messages. Those messages are stored in other actors' mailboxes until they're processed.
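A toy actor, not Akka, just to illustrate the mailbox idea: a queue plus a single worker thread, so the actor's state is only ever touched by one message at a time and needs no locks:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.atomic.AtomicInteger;

public class ActorSketch {
    // Minimal actor sketch: a mailbox (queue) plus one thread that processes
    // messages strictly one at a time.
    static class CounterActor {
        private final BlockingQueue<Integer> mailbox = new LinkedBlockingQueue<>();
        private final AtomicInteger total = new AtomicInteger();
        private final Thread worker = new Thread(() -> {
            try {
                while (true) {
                    int msg = mailbox.take(); // block until a message arrives
                    if (msg < 0) break;       // hypothetical "poison pill" to stop
                    total.addAndGet(msg);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        CounterActor() { worker.start(); }
        void tell(int msg) { mailbox.add(msg); } // asynchronous send: returns immediately
        int stopAndGet() throws InterruptedException {
            mailbox.add(-1);  // enqueue the stop message
            worker.join();    // wait for the worker to drain the mailbox
            return total.get();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        CounterActor actor = new CounterActor();
        actor.tell(2);
        actor.tell(3);
        System.out.println(actor.stopAndGet()); // prints 5
    }
}
```

This is exactly the "queued message processing instead of thread synchronization" trade mentioned earlier: callers never touch the counter directly, they only post messages.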
It allows consuming/producing from Kafka with Akka Streams, leveraging the reactive interface of this streaming library, its backpressure, and resource safety. It hides a lot of complexity, especially when your streaming logic is non-trivial like sub-streaming per partition and handling commits in custom ways.
The MicroProfile Reactive Messaging specification aims to deliver applications embracing the characteristics of reactive systems. It makes use of and interoperates with two other specifications:
Reactive Streams is a specification for doing asynchronous stream processing with back pressure. It defines a minimal set of interfaces to allow components which do this sort of stream processing to be connected together.
MicroProfile Reactive Streams Operators is a MicroProfile specification which builds on Reactive Streams to provide a set of basic operators to link different reactive components together and to perform processing on the data which passes between them.
A method with an @Incoming annotation consumes messages from a channel.
A method with an @Outgoing annotation publishes messages to a channel.
A method with both an @Incoming and an @Outgoing annotation is a message processor: it consumes messages from one channel, applies some transformation to them, and publishes messages to another channel.
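A sketch of what such a processor method might look like; the channel names "orders" and "shipments" are hypothetical, and this fragment needs a MicroProfile Reactive Messaging runtime (e.g. Open Liberty) plus the API dependency to actually run, so it is shown as a wiring fragment rather than a standalone program:

```java
import org.eclipse.microprofile.reactive.messaging.Incoming;
import org.eclipse.microprofile.reactive.messaging.Outgoing;

public class OrderProcessor {
    // Consumes from the "orders" channel and publishes to "shipments".
    // The runtime matches these channel names to other annotated methods
    // or to connector config (e.g. Kafka topics) and builds the
    // Reactive Streams pipeline for us.
    @Incoming("orders")
    @Outgoing("shipments")
    public String process(String order) {
        return order.toUpperCase(); // placeholder transformation
    }
}
```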
When you use the MicroProfile Reactive Messaging @Incoming and @Outgoing annotations, Open Liberty creates a Reactive Streams component for each method and joins them up by matching the channel names.
Polyglot: Java, JavaScript, Groovy, Ceylon, Scala and Kotlin
The reactor pattern is one implementation technique for event-driven architecture. In simple terms, it uses a single-threaded event loop that blocks waiting for resource-emitting events and dispatches them to the corresponding handlers and callbacks.
It receives messages, requests, and connections coming from multiple concurrent clients and processes them sequentially using event handlers. The purpose of the reactor pattern is to avoid the common problem of creating a thread per message, request, or connection.
It’s single-threaded – so you must not block the thread!
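A stripped-down illustration of the reactor idea in plain Java: one thread drains an event queue sequentially, with the handler inlined as a log append. A real event loop like Vert.x's is far more capable, but the constraint is the same: anything that blocks this thread stalls every pending event.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class EventLoopSketch {
    record Event(String type, String payload) { }

    // A single loop thread takes events off the queue and handles them
    // strictly one after another: no per-event threads are created.
    static List<String> runLoop(List<Event> events) throws InterruptedException {
        BlockingQueue<Event> queue = new LinkedBlockingQueue<>(events);
        List<String> handled = Collections.synchronizedList(new ArrayList<>());
        Thread loop = new Thread(() -> {
            try {
                while (true) {
                    Event e = queue.take();              // wait for the next event
                    if (e.type().equals("stop")) break;  // hypothetical shutdown event
                    handled.add(e.type() + ":" + e.payload()); // stand-in for a handler
                }
            } catch (InterruptedException ex) {
                Thread.currentThread().interrupt();
            }
        });
        loop.start();
        loop.join();
        return handled;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runLoop(List.of(
            new Event("click", "button-1"),
            new Event("stop", ""))));
    }
}
```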
This Kafka client is becoming more popular; for example, it is used by SmallRye Reactive Messaging
Demo the starter app working
Key takeaways:
Choosing a reactive framework makes it easier to work with Kafka
Strimzi, a cool open source project that provides a Kubernetes operator for Kafka, has just been accepted into the CNCF (Cloud Native Computing Foundation)
Kate is an active contributor to Strimzi, and I was interested in Vert.x
Show/talk about the normal way to use the Kafka clients
IBM Event Streams is fully supported Apache Kafka® with value-add capabilities