3. Example: Loan Application Process
[Diagram: two versions of the process. In the software-using version, a borrower's application form passes through a credit officer, a loan officer, and a risk officer before an approve/deny decision, taking 1-2 weeks. In the software-defined version, the borrower uses a loan app UI backed by credit, risk, and CRM services, reaching approve/deny in seconds.]
4. Copyright 2021, Confluent, Inc. All rights reserved. This document may not be reproduced in any manner without the express written permission of Confluent, Inc.
Unlocking Value from Data, in Real-time, is Imperative to Transformation
[Chart: data volume grows over time (interactions, security events, app data, workflows, search data, purchases, IoT, and so much more), while only a fraction of that data is leveraged for customers and employees.]
Enterprises are Undertaking Multiple Initiatives to Enable Transformation
● Microservices
● Cloud
● Machine Learning
● Automation
6. The Problem: your use of data has changed, but the supporting infrastructure has not!
8. Paradigm for Data Movement: Messaging Middleware
[Diagram: producers of messages send to message-oriented middleware, which delivers to consumers of messages.]
10. Traditional Hub & Spoke Message Broker
Message brokers were originally architected as centralised systems.
[Diagram: multiple producers and consumers all connect to a single message broker, a single point of failure resulting in low fault-tolerance and high downtime.]
11. High Availability Pairs of Brokers
For improved fault-tolerance, brokers are often deployed in high availability pairs.
[Diagram: all clients connect to the primary broker; a standby broker waits to take over. Clients still only connect to the active broker, though, so this solution lacks scalability.]
12. Interconnected Group of Message Brokers
Over time, some brokers evolved to multi-node networks, but clients still connect to one broker.
[Diagram: Brokers 1, 2, and 3 are interconnected but managed independently, each serving its own set of clients. Because clients still only connect to one broker, this solution still does not provide horizontal scalability.]
14. Let’s use an immutable log to share data!
[Diagram: an append-only log with offsets 1-10; producers write at the head.]
Kafka producers write to an append-only, immutable sequence of messages, always ordered by time.
● Sequential writes only
● No random disk access
● All operations are O(1)
● Highly efficient
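The append-only log above can be sketched in a few lines. This is a conceptual illustration, not Kafka's actual implementation: records only ever get appended at the head, are never modified, and each one receives the next sequential offset.

```python
# Conceptual sketch (not the real Kafka implementation): an append-only,
# immutable log. Producers only ever append at the head; each record
# gets the next sequential offset, so writes are strictly sequential.

class Log:
    def __init__(self):
        self._records = []  # offsets are implicit list indices

    def append(self, message) -> int:
        """Sequential write: O(1) append, returns the record's offset."""
        self._records.append(message)
        return len(self._records) - 1

    def read(self, offset: int):
        """O(1) read by offset; records are never mutated."""
        return self._records[offset]

log = Log()
for event in ["order-created", "order-paid", "order-shipped"]:
    log.append(event)

print(log.read(0))  # → order-created
```

The event names are illustrative; the point is that appends and offset reads are both constant-time, which is what makes the structure so efficient.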
15. A log is like a queue, but re-readable :-D
[Diagram: consumers A and B scan the same log (offsets 1-10), each at its own position.]
“Better than a queue”-like behavior: Kafka consumer groups allow parallel, in-order consumption of data, which is something that shared queues in traditional message brokers do not support.
● Sequential reads only
● Start at any offset
● All operations are O(1)
● Highly efficient
Slow consumers don’t back up the broker: THE STREAM GOES ON.
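The re-readable property can be sketched as follows. In this illustration (plain Python lists standing in for a broker), reading does not remove records, so each consumer tracks its own offset and a slow consumer never holds back a fast one.

```python
# Conceptual sketch: two consumers read the same log independently.
# Unlike a shared queue, reading does not remove records, so each
# consumer keeps its own offset and slow consumers never block
# fast ones (or the broker).

log = ["e1", "e2", "e3", "e4", "e5"]
offsets = {"A": 0, "B": 0}

def poll(consumer: str, max_records: int):
    """Return the next batch for `consumer` and advance its offset."""
    start = offsets[consumer]
    batch = log[start:start + max_records]
    offsets[consumer] = start + len(batch)
    return batch

print(poll("A", 5))  # fast consumer reads everything at once
print(poll("B", 2))  # slow consumer is still at the start: ['e1', 'e2']
print(poll("B", 2))  # ...and resumes where it left off: ['e3', 'e4']
```

Consumer B's slow pace affects only B's own position; the log (and consumer A) carry on regardless, which is the "THE STREAM GOES ON" behavior described above.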
16. Kafka Cluster with Topic Partitions & Multiple Client Connections
Kafka takes a different approach by partitioning topics across the brokers in a cluster. Clients connect to multiple brokers for both reads and writes to and from a topic.
[Diagram: Brokers 1-3 each host one partition (0, 1, and 2) of a topic; a producer and a consumer connect to all three.]
17. Kafka topics are designed as a commit log that captures events in a durable, scalable way.
[Diagram: a topic with Partitions 0-2, each an ordered log growing from old to new (offsets 1-12). Producers write to the head of each partition; “Consumer” A reads at offset 4 and “Consumer” B at offset 7.]
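How does a producer decide which partition a record goes to? Kafka's default partitioner hashes the record key (it uses murmur2; the sketch below substitutes CRC32 purely for illustration), so all records with the same key land in the same partition, preserving per-key ordering.

```python
# Conceptual sketch of key-based partitioning. Kafka's default
# partitioner uses a murmur2 hash; zlib.crc32 stands in here.
import zlib

NUM_PARTITIONS = 3

def partition_for(key: str) -> int:
    """Same key always hashes to the same partition."""
    return zlib.crc32(key.encode()) % NUM_PARTITIONS

partitions = {p: [] for p in range(NUM_PARTITIONS)}
for key, value in [("a", 1), ("b", 2), ("a", 3), ("c", 4), ("a", 5)]:
    partitions[partition_for(key)].append((key, value))

# All of key "a"'s records sit in one partition, in send order:
p = partition_for("a")
print([v for k, v in partitions[p] if k == "a"])  # → [1, 3, 5]
```

Because ordering is guaranteed per partition rather than per topic, choosing keys well (e.g. a customer ID) is how applications keep the ordering that matters to them while still scaling out.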
18. Partitioning topics enables greater horizontal scalability and enterprise-scale throughput: a 15x improvement in throughput performance, with one platform to deploy, secure, and manage to support all of your streaming workloads.
[Diagram: Brokers 1-3 each host a mix of partitions from Topics 1-4 (e.g. Broker 1 holds Topic 1/Partition 0, Topic 2/Partition 2, Topic 3/Partition 1, and Topic 4/Partition 0), spreading load across the cluster.]
19. How else is Kafka different from traditional messaging queues?
● Topic partitions are replicated to maximize fault-tolerance. In addition to partitioning topics, each partition can be replicated across multiple brokers to ensure high uptime even if a broker is lost.
● Producers and consumers scale independently from brokers. Production and consumption rates (e.g. a traffic spike or a slow consumer) have no effect on the broker. THE STREAM GOES ON.
● Event streams can be enriched in real-time with stream processing. ksqlDB and Kafka Streams enable event streams to be processed “in-flight” rather than with a separate batch solution.
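The replication point can be sketched numerically. The round-robin placement below is a simplification (it roughly mirrors how Kafka spreads replicas across brokers, but is not Kafka's exact assignment algorithm): each partition gets a leader plus follower replicas on distinct brokers, so no single broker failure loses data.

```python
# Conceptual sketch of replica placement: each partition's replicas
# land on distinct brokers (round-robin here), so with
# replication_factor >= 2 a single broker failure loses no partition.

def assign_replicas(num_partitions, num_brokers, replication_factor):
    """Map each partition to the brokers holding its replicas.
    The first broker in each list acts as the partition leader."""
    assert replication_factor <= num_brokers
    return {
        p: [(p + r) % num_brokers for r in range(replication_factor)]
        for p in range(num_partitions)
    }

assignment = assign_replicas(num_partitions=3, num_brokers=3,
                             replication_factor=2)
print(assignment)  # → {0: [0, 1], 1: [1, 2], 2: [2, 0]}

# Even if broker 1 fails, every partition keeps a surviving replica:
survivors = {p for p, brokers in assignment.items()
             if any(b != 1 for b in brokers)}
assert survivors == {0, 1, 2}
```

If a leader's broker is lost, one of the followers is promoted and clients reconnect to it; producers and consumers keep going.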
20. Message System Benefits:
● Real-time (low latency)
● Broad Adoption (familiar technology)
Message System Drawbacks:
● Lacks Common Structure
● No Stream Processing
● No Persistence After Consumption
● Low Fault Tolerance at Scale
● Slow Consumers Drag Performance
● Complexity and Technical Debt
21. Confluent keeps the benefits of a message system (real-time, broad adoption) while addressing the drawbacks:
● Stream Processing
● Durable & Persistent
● Elastically Scalable
● Reliable
● Schema Registry
23. Confluent: A New Paradigm for Data-in-Motion
[Diagram: real-time data (a sale, a shipment, a trade, a customer experience) flows through real-time stream processing to power rich front-end customer experiences and real-time backend operations, with query access to both.]
Implication: Central Nervous System for Enterprise
[Diagram: a data-in-motion pipeline connects data stores, logs, 3rd party apps, custom apps/microservices, and SaaS apps, feeding stream processing applications such as real-time inventory, real-time fraud detection, real-time customer 360, machine learning models, and real-time data transformation.]
25. How Confluent Works
[Diagram: bring all your data together in real-time. Sources (datastores, web/mobile, PoS systems, SaaS applications, IoT sensors, legacy apps and systems, machine data, ML engines) integrate via 100+ pre-built connectors. Confluent ingests and aggregates events from everywhere; joins, enriches, transforms, creates materialized views, and analyzes data in real-time; stores and persists events in a highly available and scalable platform; and standardizes schemas to ensure data compatibility for all downstream apps. Downstream, applications, BI tools, SIEM and observability tools, data lakes & warehouses, and real-time alerts and dashboards unlock value from the clean, usable data.]
26. Confluent Modernizes Enterprise Messaging Infrastructure
Enable real-time workloads and applications:
● Build a modern architecture: future-proof your data architecture
● Achieve any scale: high performance and availability at scale
● Migrate with ease: easily augment or migrate apps to Confluent
● Reduce TCO and tech debt: a future-proof, cloud-ready architecture
27. Build a modern architecture
Confluent offers a way to modernize your data architecture and accelerate your developer velocity:
● Cloud-native design: embrace modern DevOps and easily deploy, manage, and scale your infrastructure, whether on your private cloud or a public cloud.
● Message replay: easily replay messages, provide event sourcing and command sourcing for error handling, and eliminate the complexity of resending messages.
● Simple integrations: use best-of-breed solutions across your architecture and easily integrate them with Confluent connectors, reducing your reliance on a single infrastructure vendor.
28. Legacy messaging brokers are monolithic; Kafka supports containerization and multi-DC deployments.
[Diagram: a single legacy broker sits between producers and consumers, creating more points of failure, a lack of horizontal scalability, and a higher risk of downtime (a poor fit for mission-critical use cases). A multi-broker Kafka cluster is fault tolerant and horizontally scalable, ensuring high availability for mission-critical apps and a more cloud-native experience.]
29. ksqlDB provides everything you need to build a complete real-time application entirely with SQL syntax.
[Diagram: (1) connectors pull data from a DB into ksqlDB, where stream processing maintains materialized views; (2) apps consume results via pull queries against the views and push queries on the streams, with output flowing on to downstream apps and DBs.]
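The materialized-view idea can be illustrated without ksqlDB itself. Real ksqlDB is driven with SQL (e.g. `CREATE TABLE ... AS SELECT customer, SUM(amount) ...`); the Python sketch below, with made-up event and customer names, only shows the underlying concept: a table continuously folded from an event stream, served by pull queries.

```python
# Conceptual sketch of a materialized view: a table continuously
# updated from an event stream. A "pull" query reads the current
# state; a "push" query would instead subscribe to every change.

from collections import defaultdict

order_totals = defaultdict(float)  # the materialized view

def on_event(event):
    """Stream processing step: fold each order event into the view."""
    order_totals[event["customer"]] += event["amount"]

stream = [
    {"customer": "alice", "amount": 10.0},
    {"customer": "bob", "amount": 5.0},
    {"customer": "alice", "amount": 7.5},
]
for event in stream:
    on_event(event)

def pull_query(customer):
    """Pull query: read the view's current state for one key."""
    return order_totals[customer]

print(pull_query("alice"))  # → 17.5
```

The view stays current as long as events keep arriving, which is why no separate batch job is needed to answer "what is alice's total right now?"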
30. Schema Registry
[Diagram: a producing app's serializer registers and validates schemas against Schema Registry before writing to a Kafka topic; a consuming app's deserializer reads with the matching schema.]
Data discovery:
● Validate data compatibility and get warnings
● Let developers focus on deploying apps
Scale with confidence:
● Store a versioned history of all schemas
● Enable evolution of schemas while preserving backwards compatibility for existing consumers
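A simplified sketch of what a backward-compatibility check means (Schema Registry's default mode). Real Schema Registry validates Avro, Protobuf, or JSON Schema; plain dicts stand in for schemas here, and the rule shown (new required fields need defaults) is the essence of the Avro backward rule.

```python
# Conceptual sketch of a backward-compatibility check: a new schema is
# backward compatible if consumers using it can still read data written
# with the previous version, i.e. every field it requires either existed
# before or carries a default value.

def is_backward_compatible(old_schema: dict, new_schema: dict) -> bool:
    for field, spec in new_schema.items():
        if field not in old_schema and "default" not in spec:
            return False
    return True

v1 = {"id": {"type": "int"}, "name": {"type": "string"}}

# Adding an optional field with a default: compatible.
v2 = {**v1, "email": {"type": "string", "default": ""}}
assert is_backward_compatible(v1, v2)

# Adding a required field without a default: old records break.
v3 = {**v1, "email": {"type": "string"}}
assert not is_backward_compatible(v1, v3)
```

Rejecting v3 at registration time (with a warning to the developer) is exactly how the registry lets schemas evolve without breaking existing consumers.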
31. Achieve any scale
Confluent was designed for high performance and high availability at massive scale to support your mission-critical use cases. Confluent can support all of your enterprise’s streaming use cases by achieving a dramatically higher throughput: a 15x improvement in throughput performance, with one platform to deploy, secure, and manage to support all of your streaming workloads. Read more about our internal performance benchmarking.
32. Achieve any scale
Confluent was designed for high performance at massive scale to support your mission-critical use cases, achieving <5ms latency at massive throughput.
● Synchronize data across your organization in real-time
● Take action on insights from your data immediately
● Remove data silos by moving from batch to data streaming
33. Migrate with ease
Confluent provides the tools and services required to effectively migrate from your messaging queue: a robust set of connectors to pull data from your MQs into Confluent, and connectors to push data from Confluent into your modern, cloud-native sinks.
35. Current approach to Messaging
[Diagram: producer and consumer apps communicate through messaging middleware, with an integration layer connecting third-party consumer apps.]
36. Step 1: Use pre-built connectors to start integrating data from your applications
[Diagram: connectors bridge the existing messaging middleware into Confluent (with Schema Registry and ksqlDB), while producer apps, consumer apps, and third-party consumer apps continue to run unchanged.]
37. Step 2: Refactor applications to use JMS APIs and replace messaging middleware over time
[Diagram: producer and consumer apps now connect to Confluent through the JMS API, with Schema Registry and ksqlDB in place and third-party consumer apps still served.]
38. Step 3: Move off JMS and build streaming applications with new event-driven patterns
[Diagram: producer and consumer apps use the Kafka API directly; Schema Registry and ksqlDB serve third-party consumer apps via push and pull.]
New event-driven patterns:
● Event notification
● Event-carried state transfer
● Event sourcing
● CQRS (Command Query Responsibility Segregation)
Unify and simplify:
✓ Lower TCO
✓ Future-proof data architecture
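One of the patterns listed above, event sourcing, can be sketched concisely. This is an illustration with made-up account events, not any Confluent API: instead of storing current state, the application stores the full sequence of events and derives state by replaying them.

```python
# Conceptual sketch of event sourcing: the event log is the source of
# truth, and current state is derived by replaying it from the start.

events = [
    {"type": "AccountOpened"},
    {"type": "Deposited", "amount": 100},
    {"type": "Withdrawn", "amount": 30},
    {"type": "Deposited", "amount": 5},
]

def replay(events):
    """Fold the event log into the current account balance."""
    balance = 0
    for e in events:
        if e["type"] == "Deposited":
            balance += e["amount"]
        elif e["type"] == "Withdrawn":
            balance -= e["amount"]
    return balance

print(replay(events))  # → 75
# Replaying a prefix of the log reconstructs any historical state:
print(replay(events[:3]))  # → 70
```

This is why Kafka's durable, re-readable topics fit the pattern naturally: the topic already is the append-only event log that event sourcing requires.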