Sink Your Teeth into Streaming at Any Scale

Timothy Spann & David Kjerrumgaard, StreamNative

How to build a low-latency, scalable platform for today’s massively data-intensive, real-time streaming applications using ScyllaDB, Pulsar, and Flink.

https://www.scylladb.com/2023/01/26/scylladb-summit-for-the-scylladb-curious-serious-sea-monsters/


Sink Your Teeth into Streaming at Any Scale

  1. Sink Your Teeth into Streaming at Any Scale. David Kjerrumgaard, Developer Advocate at StreamNative. Tim Spann, Developer Advocate at StreamNative.
  2. David Kjerrumgaard ■ Apache Pulsar Committer | Author of Pulsar in Action ■ Former Principal Software Engineer on Splunk’s messaging team, responsible for Splunk’s internal Pulsar-as-a-Service platform. ■ Former Director of Solution Architecture at Streamlio.
  3. Tim Spann ■ FLiP(N) Stack = Flink, Pulsar and NiFi Stack ■ Streaming Systems & Data Architecture Expert ■ DZone IoT and ML Top Expert ■ 15+ years of experience with streaming technologies including Pulsar, Flink, Spark, NiFi, Big Data, Cloud, MXNet, IoT, Python and more. ■ Today, he helps grow the Pulsar community, sharing rich technical knowledge and experience at global conferences and in individual conversations.
  4. Presentation Agenda ■ The Team to Stream ■ What is Apache Pulsar? ■ Join The Streams with Flink SQL ■ Fast Sink to ScyllaDB ■ Reference Architecture ■ Demo
  5. Stream Team
  6. Building Real-Time Apps Requires a Team
  7. Apache Pulsar
  8. Apache Pulsar is a Cloud-Native Messaging and Event-Streaming Platform.
  9. Apache Pulsar Timeline ■ 2012: Created. Originally developed inside Yahoo! as Cloud Messaging Service. ■ 2016: Open source. Pulsar committed to open source. ■ 2018: Apache TLP. Pulsar becomes an Apache top-level project. ■ Today: Growth. 10x contributors, 10MM+ downloads, ecosystem expands (Kafka on Pulsar, AMQ on Pulsar, Functions, . . .).
  10. Pulsar Growth ■ Contributors: 11x growth ■ GitHub stars: 5x growth ■ Docker image downloads: 10,000,000+
  11. Pulsar Topic/Partition Messaging ■ Subscription modes: Exclusive, Failover (another consumer takes over in case of failure of the active consumer), Shared, and Key-Shared. (A consumer sketch follows the transcript.)
  12. Multi-Tenancy Model ■ A single Pulsar cluster is shared by multiple tenants (Data Services, Marketing, Compliance). ■ Each tenant contains namespaces (Microservices, ETL, Campaigns, Risk Assessment). ■ Each namespace contains topics (Cust Auth, Location Resolution, Demographics, Budgeted Spend, Acct History, Risk Detection).
  13. Integrated Schema Registry ■ The schema registry stores schemas (schema-1, schema-2, schema-3; values in Avro/Protobuf/JSON) keyed by schema ID. ■ Producers send (register) a schema if it is not in their local cache and serialize data per schema ID. ■ Consumers get the schema by ID if it is not in their local cache and deserialize data per schema ID. (A producer/consumer sketch follows the transcript.)
  14. Automatic Load Balancing ■ Pulsar supports automatic load shedding: whenever the system recognizes that a particular broker is overloaded, it forces some traffic to be reassigned to less-loaded brokers.
  15. Key Features: Apache Pulsar vs. Apache Kafka ■ Message retention (time-based): Pulsar ✔, Kafka ✔ ■ Message replay: Pulsar ✔, Kafka ✔ ■ Message retention (acknowledge-based): Pulsar ✔, Kafka ✘ ■ Built-in tiered storage: Pulsar ✔, Kafka ✘ ■ Processing capabilities (fully managed): Pulsar Functions vs. KStream ■ Queue semantics (round robin): Pulsar ✔, Kafka ✘ ■ Queue semantics (key-based): Pulsar ✔, Kafka ✘ ■ Dead letter queue: Pulsar ✔, Kafka ✘
  16. Key Features: Apache Pulsar vs. Apache Kafka (continued) ■ Scheduled and delayed delivery: Pulsar ✔, Kafka ✔ ■ Rebalance-free scaling: Pulsar ✔, Kafka ✔ ■ Elastically scalable: Pulsar ✔, Kafka ✘ ■ Maximum number of topics: Pulsar millions, Kafka up to 100k ■ Built-in multi-tenancy: Pulsar ✔, Kafka ✘ ■ Built-in geo-replication: Pulsar ✔, Kafka ✘ ■ Built-in schema management: Pulsar ✔, Kafka ✘ ■ End-to-end encryption: Pulsar ✔, Kafka ✘
  17. Flink SQL
  18. Pulsar-Flink Connector ■ Pulsar provides a connector that enables Flink to read & write data directly from/to Pulsar topics. ■ This makes Pulsar topic data available for Flink SQL queries. ■ https://hub.streamnative.io/data-processing/pulsar-flink/1.15.1.1/
  19. Flink SQL
  20. Apache Flink ■ Unified computing engine ■ Batch processing is a special case of stream processing ■ Stateful processing ■ Massive scalability ■ Flink SQL for queries and inserts against Pulsar topics ■ Streaming analytics ■ Continuous SQL ■ Continuous ETL ■ Complex event processing ■ Standard SQL powered by Apache Calcite
  21. Streaming Tables ■ A streaming table represents an unbounded sequence of structured data, i.e. “facts.” ■ Facts in a streaming table are immutable: new events can be inserted into a table, but existing events can never be updated or deleted. ■ All the topics within a Pulsar namespace are automatically mapped to streaming tables in a catalog configured to use the Pulsar connector. ■ Streaming tables can also be created or deleted via DDL queries, in which case the underlying Pulsar topics are created or deleted as well. (A Flink SQL sketch follows the transcript.)
  22. Sink to ScyllaDB
  23. StreamNative Pulsar Ecosystem (hub.streamnative.io) ■ Connectors (sources & sinks) ■ Protocol handlers ■ Data offloaders (tiered storage) ■ Pulsar Functions (lightweight stream processing) ■ Processing engines ■ Client libraries ■ . . . and more!
  24. ScyllaDB-Compatible Sink Connector ■ pulsar.apache.org/docs/en/io-quickstart/ (An admin-client sketch follows the transcript.)
  25. David’s ScyllaDB Sink Update ■ The connector automatically interrogates the database to determine the schema type definition at runtime. ■ Includes a framework that extracts the values from the generic schema types (GenericRecord and JSON String). ■ Performance improvements and bug fixes. ■ PR: https://github.com/apache/pulsar/pull/16179
  26. Building Real-Time Together ■ Easily integrate ScyllaDB’s fast distributed database system and the StreamNative streaming platform with Pulsar sources and sinks via various protocols and libraries. ■ Create high-speed streaming pipelines for log analysis, IoT sensing, and more.
  27. Reference Architecture
  28. Reference Architecture: Microservices and ETL
  29. Reference Architecture ■ ScyllaDB <-> Apache Pulsar -> Flink SQL -> Apache Pulsar -> Apache Pinot ■ Flink SQL for continuous analytics, ingest, real-time joins, and real-time analytics ■ Apache Pulsar for routing, transformation, a central messaging hub, and data channels ■ ScyllaDB for instant applications and massive data feeds ■ Apache Pinot for low-latency, instant-result queries (A continuous-ETL sketch follows the transcript.)
  30. Thank You / Stay in Touch ■ Timothy Spann: tim@streamnative.io, @PaaSDev, https://github.com/tspannhw, https://www.linkedin.com/in/timothyspann/ ■ David Kjerrumgaard: david@streamnative.io, @DavidKjerrumga1, https://github.com/david-streamlio, https://www.linkedin.com/in/davidkj/
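
The following is a minimal Java sketch (not part of the original deck) of the subscription modes pictured on slide 11, using the Apache Pulsar Java client. The service URL, topic, and subscription name are placeholder assumptions.

// Sketch of Pulsar subscription modes; broker URL and topic are assumptions.
import org.apache.pulsar.client.api.Consumer;
import org.apache.pulsar.client.api.Message;
import org.apache.pulsar.client.api.PulsarClient;
import org.apache.pulsar.client.api.SubscriptionType;

public class SubscriptionModes {
    public static void main(String[] args) throws Exception {
        PulsarClient client = PulsarClient.builder()
                .serviceUrl("pulsar://localhost:6650")   // assumed local broker
                .build();

        // Key_Shared: messages with the same key always go to the same consumer,
        // while load is still spread across all consumers on the subscription.
        // The other modes are SubscriptionType.Exclusive, Failover, and Shared.
        Consumer<byte[]> consumer = client.newConsumer()
                .topic("persistent://public/default/sensor-events")
                .subscriptionName("scylla-pipeline")
                .subscriptionType(SubscriptionType.Key_Shared)
                .subscribe();

        Message<byte[]> msg = consumer.receive();
        System.out.println("Received key=" + msg.getKey()
                + " value=" + new String(msg.getValue()));
        consumer.acknowledge(msg);

        consumer.close();
        client.close();
    }
}

Swapping the SubscriptionType value between Exclusive, Failover, Shared, and Key_Shared is all it takes to change the delivery semantics for the same topic.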
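
Slide 13's integrated schema registry can be illustrated with a typed producer and consumer. The sketch below is illustrative only: the SensorReading POJO, topic, and service URL are assumptions, not code from the talk.

// Sketch of Pulsar's schema registry with a JSON schema; POJO and topic are assumptions.
import org.apache.pulsar.client.api.Consumer;
import org.apache.pulsar.client.api.Message;
import org.apache.pulsar.client.api.Producer;
import org.apache.pulsar.client.api.PulsarClient;
import org.apache.pulsar.client.api.Schema;

public class SchemaExample {
    // POJO used to derive the JSON schema that is registered with the broker.
    public static class SensorReading {
        public String sensorId;
        public double temperature;
        public long timestamp;
    }

    public static void main(String[] args) throws Exception {
        PulsarClient client = PulsarClient.builder()
                .serviceUrl("pulsar://localhost:6650")
                .build();

        // The producer registers the schema if the broker does not already know it.
        Producer<SensorReading> producer = client.newProducer(Schema.JSON(SensorReading.class))
                .topic("persistent://public/default/sensor-readings")
                .create();

        SensorReading reading = new SensorReading();
        reading.sensorId = "sensor-42";
        reading.temperature = 21.5;
        reading.timestamp = System.currentTimeMillis();
        producer.send(reading);

        // The consumer deserializes against the same registered schema.
        Consumer<SensorReading> consumer = client.newConsumer(Schema.JSON(SensorReading.class))
                .topic("persistent://public/default/sensor-readings")
                .subscriptionName("schema-demo")
                .subscribe();
        Message<SensorReading> msg = consumer.receive();
        System.out.println("Got temperature " + msg.getValue().temperature);
        consumer.acknowledge(msg);

        producer.close();
        consumer.close();
        client.close();
    }
}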
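
For slides 18-21, here is a hedged Flink SQL sketch that declares a Pulsar topic as a streaming table and runs a continuous query over it. The connector option names ('topics', 'service-url', 'admin-url', 'format') follow the pulsar-flink connector linked on slide 18 but may differ between connector versions; the table schema and topic are illustrative.

// Sketch of a Pulsar-backed streaming table in Flink SQL; connector options and schema are assumptions.
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class PulsarStreamingTable {
    public static void main(String[] args) {
        TableEnvironment tableEnv =
                TableEnvironment.create(EnvironmentSettings.inStreamingMode());

        // The DDL creates a streaming table backed by a Pulsar topic: an unbounded,
        // append-only stream of facts (insert-only, never updated or deleted).
        tableEnv.executeSql(
                "CREATE TABLE sensor_readings (" +
                "  sensor_id STRING," +
                "  temperature DOUBLE," +
                "  event_time TIMESTAMP(3)," +
                "  WATERMARK FOR event_time AS event_time - INTERVAL '5' SECOND" +
                ") WITH (" +
                "  'connector' = 'pulsar'," +
                "  'topics' = 'persistent://public/default/sensor-readings'," +
                "  'service-url' = 'pulsar://localhost:6650'," +
                "  'admin-url' = 'http://localhost:8080'," +
                "  'format' = 'json'" +
                ")");

        // Continuous SQL: this query never terminates; it keeps updating as events arrive.
        tableEnv.executeSql(
                "SELECT sensor_id, AVG(temperature) AS avg_temp " +
                "FROM sensor_readings " +
                "GROUP BY sensor_id")
                .print();
    }
}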
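
For slides 24-25, the sketch below deploys a Cassandra-compatible sink pointed at ScyllaDB through the Pulsar admin Java client. The config keys (roots, keyspace, columnFamily, keyname, columnName) follow the Cassandra sink in the Pulsar IO quickstart linked on slide 24; the 'builtin://cassandra' archive reference, host names, keyspace, and table names are assumptions to verify against your deployment.

// Sketch of deploying a Cassandra-compatible sink aimed at ScyllaDB; all names are assumptions.
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import org.apache.pulsar.client.admin.PulsarAdmin;
import org.apache.pulsar.common.io.SinkConfig;

public class ScyllaSinkSetup {
    public static void main(String[] args) throws Exception {
        PulsarAdmin admin = PulsarAdmin.builder()
                .serviceHttpUrl("http://localhost:8080")   // assumed broker admin endpoint
                .build();

        Map<String, Object> configs = new HashMap<>();
        configs.put("roots", "scylla-node1:9042");         // ScyllaDB speaks the CQL wire protocol
        configs.put("keyspace", "pulsar_keyspace");
        configs.put("columnFamily", "sensor_readings");
        configs.put("keyname", "key");
        configs.put("columnName", "payload");

        SinkConfig sinkConfig = SinkConfig.builder()
                .tenant("public")
                .namespace("default")
                .name("scylla-sink")
                .inputs(List.of("persistent://public/default/sensor-readings"))
                .archive("builtin://cassandra")            // built-in Cassandra-compatible sink
                .configs(configs)
                .build();

        // Deploys the sink; Pulsar then continuously writes topic messages into ScyllaDB.
        // For a built-in connector no package file is uploaded (hence the null); verify
        // this call against the admin API version you run.
        admin.sinks().createSink(sinkConfig, null);
        admin.close();
    }
}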
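
Finally, a minimal sketch of the "Pulsar -> Flink SQL -> Pulsar" leg of the reference architecture on slide 29: a continuous join of two Pulsar-backed streaming tables whose enriched output is written back to another Pulsar topic, from which a ScyllaDB sink or Apache Pinot can then consume. Table names, columns, and connector options are illustrative assumptions, as in the earlier streaming-table sketch.

// Sketch of continuous ETL with Flink SQL over Pulsar-backed tables; names and options are assumptions.
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class ContinuousEtlJob {
    // Helper: all three tables share the same (assumed) Pulsar connector options.
    private static String pulsarOptions(String topic) {
        return "'connector' = 'pulsar'," +
               "'topics' = 'persistent://public/default/" + topic + "'," +
               "'service-url' = 'pulsar://localhost:6650'," +
               "'admin-url' = 'http://localhost:8080'," +
               "'format' = 'json'";
    }

    public static void main(String[] args) {
        TableEnvironment tableEnv =
                TableEnvironment.create(EnvironmentSettings.inStreamingMode());

        // Input stream of raw readings.
        tableEnv.executeSql("CREATE TABLE sensor_readings (sensor_id STRING, temperature DOUBLE) " +
                "WITH (" + pulsarOptions("sensor-readings") + ")");
        // Metadata stream, modeled here as a second append-only table.
        tableEnv.executeSql("CREATE TABLE sensor_metadata (sensor_id STRING, location STRING) " +
                "WITH (" + pulsarOptions("sensor-metadata") + ")");
        // Output topic consumed downstream by the ScyllaDB sink or Apache Pinot.
        tableEnv.executeSql("CREATE TABLE enriched_readings (sensor_id STRING, location STRING, temperature DOUBLE) " +
                "WITH (" + pulsarOptions("enriched-readings") + ")");

        // Continuous ETL: the INSERT never finishes; each reading is enriched and routed as it arrives.
        tableEnv.executeSql(
                "INSERT INTO enriched_readings " +
                "SELECT r.sensor_id, m.location, r.temperature " +
                "FROM sensor_readings AS r " +
                "JOIN sensor_metadata AS m ON r.sensor_id = m.sensor_id");
    }
}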
