Independent of the source of data, the integration of event streams into an Enterprise Architecture is becoming more and more important in a world of sensors, social media streams and the Internet of Things. Events have to be accepted quickly and reliably, and they have to be distributed and analysed, often with many consumers or systems interested in all or a subset of the events. How can we make sure that all these events are accepted and forwarded in an efficient and reliable way? This is where Apache Kafka comes into play: a distributed, highly scalable message broker, built for exchanging huge amounts of messages between a source and a target.
This session starts with an introduction to Apache Kafka, presents its role in a modern data / information architecture and the advantages it brings to the table. Additionally, the Kafka ecosystem is covered, as well as the integration of Kafka into the Oracle stack, with products such as GoldenGate, Service Bus and Oracle Stream Analytics all being able to act as a Kafka consumer or producer.
Apache Kafka - Scalable Message Processing and more!
1. BASEL BERN BRUGG DÜSSELDORF FRANKFURT A.M. FREIBURG I.BR. GENF
HAMBURG KOPENHAGEN LAUSANNE MÜNCHEN STUTTGART WIEN ZÜRICH
Apache Kafka
Scalable Message Processing and more!
Guido Schmutz
@gschmutz guidoschmutz.wordpress.com
2. Guido Schmutz
Working at Trivadis for more than 20 years
Oracle ACE Director for Fusion Middleware and SOA
Consultant, Trainer and Software Architect for Java, Oracle, SOA and
Big Data / Fast Data
Member of Trivadis Architecture Board
Technology Manager @ Trivadis
More than 30 years of software development experience
Contact: guido.schmutz@trivadis.com
Blog: http://guidoschmutz.wordpress.com
Slideshare: http://www.slideshare.net/gschmutz
Twitter: gschmutz
2 8.12.2016 Big Data & Fast Data
10. Apache Kafka - Overview
Distributed publish-subscribe messaging system
Designed for processing real-time activity stream data (logs, metrics
collection, social media streams, …)
Initially developed at LinkedIn, now part of Apache
Does not use JMS API and standards
Kafka maintains feeds of messages in topics
Apache Kafka - Scalable Message Processing and more!10
11. Apache Kafka - Motivation
LinkedIn’s motivation for Kafka was:
• “A unified platform for handling all the real-time data feeds a large company might
have.”
Must haves
• High throughput to support high volume event feeds.
• Support real-time processing of these feeds to create new, derived feeds.
• Support large data backlogs to handle periodic ingestion from offline systems.
• Support low-latency delivery to handle more traditional messaging use cases.
• Guarantee fault-tolerance in the presence of machine failures.
12. Kafka High Level Architecture
The who's who
• Producers write data to brokers.
• Consumers read data from
brokers.
• All this is distributed.
The data
• Data is stored in topics.
• Topics are split into partitions,
which are replicated.
[Diagram: three Producers write to and three Consumers read from a Kafka cluster of Broker 1, Broker 2 and Broker 3, coordinated by a Zookeeper ensemble]
17. Durability Guarantees
Producer can configure acknowledgements
Value | Impact | Durability
0 | Producer doesn't wait for the leader | weak
1 (default) | Producer waits for the leader; leader sends ack when the message is written to its log; no wait for followers | medium
all | Producer waits for the leader; leader sends ack when all in-sync replicas have acknowledged | strong
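These levels map directly to the producer's "acks" configuration. A minimal sketch in Java, using only the standard library; in a real application this Properties object would be passed to a KafkaProducer (the class name and config keys are from the Kafka client API, the helper method is hypothetical):

```java
import java.util.Properties;

public class ProducerAcksConfig {
    // Build producer settings for the strongest durability guarantee
    public static Properties strongDurability(String bootstrapServers) {
        Properties props = new Properties();
        props.put("bootstrap.servers", bootstrapServers);
        // "all": the leader acks only after every in-sync replica has the message.
        // "1" (the default) would ack after the leader's local write;
        // "0" would not wait for any acknowledgement at all.
        props.put("acks", "all");
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        return props;
    }
}
```

The trade-off is latency versus durability: "acks=all" waits for the slowest in-sync replica, "acks=0" is fire-and-forget.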
18. Apache Kafka - Partition offsets
Offset: messages in the partitions are each assigned a unique (per partition) and
sequential id called the offset
• Consumers track their pointers via (offset, partition, topic) tuples
[Diagram: Consumer Group A and Consumer Group B reading the same partitions at independent offsets]
Source: Apache Kafka
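The (offset, partition, topic) bookkeeping can be illustrated with a plain-Java sketch (this is NOT the Kafka consumer API, which does this tracking for you and commits offsets back to the cluster; class and method names here are made up for illustration):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of per-(topic, partition) offset tracking as a consumer group does it
public class OffsetTracker {
    private final Map<String, Long> offsets = new HashMap<>();

    private static String key(String topic, int partition) {
        return topic + "-" + partition;
    }

    // Record that all messages up to and including 'offset' were consumed
    public void ack(String topic, int partition, long offset) {
        offsets.merge(key(topic, partition), offset, Math::max);
    }

    // Next offset to fetch for this (topic, partition); 0 if nothing consumed yet
    public long position(String topic, int partition) {
        return offsets.getOrDefault(key(topic, partition), -1L) + 1;
    }
}
```

Because each group keeps its own offsets, Consumer Group A and Consumer Group B can read the same partitions at completely different positions.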
19. Data Retention – 4 options
1. Never
2. Time based (TTL)
log.retention.{ms | minutes | hours}
3. Size based
log.retention.bytes
4. Log compaction based (older entries with the same key are removed)
kafka-topics.sh --zookeeper localhost:2181
--create --topic customers
--replication-factor 1 --partitions 1
--config cleanup.policy=compact
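The same topic-level settings can be supplied programmatically. A sketch using only plain Java maps; with the real AdminClient API such a map would be passed via NewTopic.configs(...) (the config keys cleanup.policy, retention.ms and retention.bytes are Kafka's actual topic configs; the class below is a hypothetical helper):

```java
import java.util.HashMap;
import java.util.Map;

public class RetentionConfigs {
    // Time-based: delete log segments older than 'ms' milliseconds
    public static Map<String, String> timeBased(long ms) {
        Map<String, String> cfg = new HashMap<>();
        cfg.put("cleanup.policy", "delete");
        cfg.put("retention.ms", Long.toString(ms));
        return cfg;
    }

    // Size-based: cap how many bytes each partition retains
    public static Map<String, String> sizeBased(long bytes) {
        Map<String, String> cfg = new HashMap<>();
        cfg.put("cleanup.policy", "delete");
        cfg.put("retention.bytes", Long.toString(bytes));
        return cfg;
    }

    // Compaction: keep only the latest value per message key
    public static Map<String, String> compacted() {
        Map<String, String> cfg = new HashMap<>();
        cfg.put("cleanup.policy", "compact");
        return cfg;
    }
}
```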
20. Apache Kafka – Some numbers
Kafka at LinkedIn => 1800+ broker machines / 79K+ topics
Kafka performance on our own infrastructure => 6 brokers (VM) / 1 cluster
• 445’622 messages/second
• 31 MB / second
• 3.0405 ms average latency between producer / consumer
1.3 trillion messages per day
330 Terabytes in / day
1.2 Petabytes out / day
Peak load for a single cluster:
• 2 million messages/sec
• 4.7 Gigabits/sec inbound
• 15 Gigabits/sec outbound
http://engineering.linkedin.com/kafka/benchmarking-apache-kafka-2-million-writes-second-three-cheap-machines
https://engineering.linkedin.com/kafka/running-kafka-scale
21. Kafka Topics
Creating a topic
• Command line interface
• Using AdminUtils.createTopic method
• Auto-create via auto.create.topics.enable = true
Modifying a topic
https://kafka.apache.org/documentation.html#basic_ops_modify_topic
Deleting a topic
• Command Line interface
$ kafka-topics.sh --zookeeper zk1:2181 --create
--topic my.topic --partitions 3
--replication-factor 2 --config x=y
22. Inspecting the current state of a topic
Use the --describe option
• Leader: brokerID of the currently elected leader broker
• Replica IDs = broker IDs
• ISR = “in-sync replica”, replicas that are in sync with the leader. In this example:
• Broker 0 is leader for partition 1.
• Broker 1 is leader for partitions 0 and 2.
• All replicas are in-sync with their respective leader partitions.
$ kafka-topics.sh --zookeeper zk1:2181 --describe --topic my.topic
Topic:my.topic PartitionCount:3 ReplicationFactor:2 Configs:
Topic: my.topic Partition: 0 Leader: 1 Replicas: 1,0 Isr: 1,0
Topic: my.topic Partition: 1 Leader: 0 Replicas: 0,1 Isr: 0,1
Topic: my.topic Partition: 2 Leader: 1 Replicas: 1,0 Isr: 1,0
28. Kafka Streams
• Designed as a simple and lightweight library in Apache Kafka
• no external dependencies on systems other than Apache Kafka
• Leverages Kafka as its internal messaging layer
• agnostic to resource management and configuration tools
• Supports fault-tolerant local state
• Event-at-a-time processing (not microbatch) with millisecond latency
• Windowing with out-of-order data using a Google Dataflow-like model
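Because it is just a library, a Kafka Streams application needs very little mandatory configuration. A minimal sketch (application.id and bootstrap.servers are the two required Kafka Streams settings; the helper class is hypothetical):

```java
import java.util.Properties;

public class StreamsAppConfig {
    // The two settings every Kafka Streams application must provide;
    // everything else (state directory, serdes, thread count) has defaults.
    public static Properties minimal(String appId, String bootstrapServers) {
        Properties props = new Properties();
        // Names the application; also used as the consumer group id
        // and as a prefix for internal topics and local state stores
        props.put("application.id", appId);
        props.put("bootstrap.servers", bootstrapServers);
        return props;
    }
}
```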
29. Kafka Streams - Architecture
Source: Confluent
30. Kafka Streams - Processor Topology
topology defines the stream processing
computational logic for your application
topology is a graph of stream processors
(nodes) that are connected by streams (edges)
source processor is a stream processor that
does not have any upstream processors.
Consumes one or more Kafka topics.
sink processor is a special type of stream
processor that does not have down-stream
processors. Produces to a single Kafka topic.
Source: Confluent
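The source → processor → sink idea can be sketched in plain Java (this is NOT the Kafka Streams API; the class below is an invented illustration of how records flow through a chain of processors from a source into a sink):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;
import java.util.function.Function;

// Toy topology: a source feeds records through chained processors into a sink
public class MiniTopology<T> {
    private final List<Function<T, T>> processors = new ArrayList<>();

    public MiniTopology<T> addProcessor(Function<T, T> p) {
        processors.add(p);
        return this;
    }

    // Drive every record from the source through all processors to the sink
    public void run(Iterable<T> source, Consumer<T> sink) {
        for (T record : source) {
            for (Function<T, T> p : processors) {
                record = p.apply(record);
            }
            sink.accept(record);
        }
    }
}
```

In Kafka Streams the source would be one or more Kafka topics, each processor a node in the topology graph, and the sink a downstream Kafka topic.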
31. Kafka and "Big Data" / "Fast Data"
Ecosystem
32. Kafka and the Big Data / Fast Data ecosystem
Kafka integrates with many popular
products / frameworks
• Apache Spark Streaming
• Apache Flink
• Apache Storm
• Apache NiFi
• Streamsets
• Apache Flume
• Oracle Stream Analytics
• Oracle Service Bus
• Oracle GoldenGate
• Spring Integration Kafka Support
• …
For example, Storm provides a built-in Kafka spout to consume events from Kafka.
36. Customer Event Hub – mapping of technologies
[Architecture diagram: sources such as Weather Data, Location, Social, Clickstream, Sensor Data, Billing & Ordering, CRM / Profile, Marketing Campaigns, Call Center and Mobile Apps feed SQL imports and Event Hubs; a Hadoop cluster provides Batch Analytics (Distributed Filesystem, Parallel Processing, NoSQL, SQL) and Streaming Analytics (Stream Analytics, NoSQL, Reference / Models); results are served to Search, Dashboards, BI Tools, the Enterprise Data Warehouse and Online & Mobile Apps]
37. Summary
• Kafka can scale to millions of messages per second, and more
• Easy to start with for a PoC
• A bit more investment is needed to set up a production environment
• Monitoring is key
• Vibrant community and ecosystem
• Fast-paced technology
• Confluent provides a Kafka distribution