
Webinar: Flink SQL in Action - Fabian Hueske

Stream processing is being rapidly adopted by enterprises. While stream processing frameworks used to provide mostly Java- or Scala-based APIs, stream processing with SQL has recently gained a lot of attention because it makes stream processing accessible to non-programmers and significantly reduces the effort required to solve common tasks.

About three years ago, the Apache Flink community started adding SQL support to process static and streaming data in a unified fashion. Today, Flink SQL powers production systems at Alibaba, Huawei, Lyft, and Uber. In this talk, I will discuss the current state of Flink’s SQL support and explain the importance of Flink’s unified approach to process static and streaming data. Once the basics are covered, I will present common real-world use cases ranging from low-latency ETL to pattern detection and demonstrate how easily they can be addressed by Flink SQL.



  1. APACHE FLINK® SQL IN ACTION - Fabian Hueske, Software Engineer
  2. ABOUT ME • Apache Flink PMC member & ASF member ‒ Contributing since day 1 at TU Berlin ‒ Focusing on Flink’s relational APIs for ~3.5 years • Co-author of “Stream Processing with Apache Flink” ‒ Expected release: April 2019
  3. WHAT IS APACHE FLINK? Stateful computations over streams, real-time and historic: fast, scalable, fault-tolerant, in-memory, event time, large state, exactly-once
  4. HARDENED AT SCALE • Streaming platform service: billions of messages per day, a lot of Stream SQL • Streaming platform as a service: 3,700+ containers running Flink, 1,400+ nodes, 22k+ cores, 100s of jobs, 3 trillion events/day, 20 TB state • Fraud detection • Streaming analytics platform: 1,000s of jobs, 100,000s of cores, 10 TBs of state, metrics, analytics, real-time ML, Streaming SQL as a platform
  5. POWERED BY APACHE FLINK
  6. FLINK’S POWERFUL ABSTRACTIONS: layered abstractions to navigate simple to complex use cases • SQL / Table API (dynamic tables): high-level analytics API • DataStream API (streams, windows): stream & batch data processing, e.g. val stats = stream.keyBy("sensor").timeWindow(Time.seconds(5)).sum((a, b) -> a.add(b)) • Process Function (events, state, time): stateful event-driven applications, e.g. def processElement(event: MyEvent, ctx: Context, out: Collector[Result]) = { (event, state.value) match { … }; out.collect(…) /* emit events */; state.update(…) /* modify state */; ctx.timerService.registerEventTimeTimer(event.timestamp + 500) /* schedule a timer callback */ }
  7. APACHE FLINK’S RELATIONAL APIS • Unified APIs for batch & streaming data: a query specifies exactly the same result regardless of whether its input is static batch data or streaming data • LINQ-style Table API: tableEnvironment.scan("clicks").groupBy('user).select('user, 'url.count as 'cnt) • ANSI SQL: SELECT user, COUNT(url) AS cnt FROM clicks GROUP BY user
  8. QUERY TRANSLATION • Table API: tableEnvironment.scan("clicks").groupBy('user).select('user, 'url.count as 'cnt) • SQL: SELECT user, COUNT(url) AS cnt FROM clicks GROUP BY user • The same query is translated whether the input data is bounded (batch) or unbounded (streaming)
  9. WHAT IF “CLICKS” IS A FILE? Query: SELECT user, COUNT(url) AS cnt FROM clicks GROUP BY user. Input table clicks (user, cTime, url): Mary 12:00:00, Bob 12:00:00, Mary 12:00:02, Liz 12:00:03. Result (user, cnt): Mary 2, Bob 1, Liz 1. The input data is read at once and the result is produced at once.
  10. WHAT IF “CLICKS” IS A STREAM? Same query: SELECT user, COUNT(url) AS cnt FROM clicks GROUP BY user. The clicks (user, cTime, url) arrive one by one: Mary 12:00:00, Bob 12:00:00, Mary 12:00:02, Liz 12:00:03. The input data is continuously read and the result (user, cnt) is continuously updated: Mary 1, Bob 1, Mary 2, Liz 1. The result is the same as for the file.
  11. WHY IS STREAM-BATCH UNIFICATION IMPORTANT? • Usability ‒ ANSI SQL syntax: no custom “StreamSQL” syntax ‒ ANSI SQL semantics: no stream-specific result semantics • Portability ‒ Run the same query on bounded & unbounded data ‒ Run the same query on recorded & real-time data ‒ Bootstrap query state or backfill results from historic data • How can we achieve SQL semantics on streams? (Diagram: a bounded query covers a fixed slice of the past; an unbounded query runs from a point in time, or from the start of the stream, into the future.)
  12. DATABASE SYSTEMS RUN QUERIES ON STREAMS • Materialized views (MVs) are similar to regular views, but persisted to disk or memory ‒ Used to speed up analytical queries ‒ MVs need to be updated when the base tables change • MV maintenance is very similar to SQL on streams ‒ Base table updates are a stream of DML statements ‒ The MV definition query is evaluated on that stream ‒ The MV is the query result and is continuously updated
  13. CONTINUOUS QUERIES IN FLINK • The core concept is the “dynamic table” ‒ Dynamic tables change over time • Queries on dynamic tables ‒ produce new dynamic tables (which are updated based on their input) ‒ do not terminate • Stream ↔ dynamic table conversions
  14. STREAM ↔ DYNAMIC TABLE CONVERSIONS • Append conversions ‒ Records are always inserted/appended ‒ No updates or deletes • Upsert conversions ‒ Records have a (composite) unique key ‒ Records are upserted or deleted based on the key • Changelog conversions ‒ Records are inserted or deleted ‒ Update = delete old version + insert new version (see the first Scala sketch after the slide list)
  15. SQL Feature Set in Flink 1.7.0 • Streaming only ‒ OVER / WINDOW: UNBOUNDED / BOUNDED PRECEDING ‒ INNER JOIN with a time-versioned table ‒ MATCH_RECOGNIZE: pattern matching / CEP (SQL:2016) • Batch only ‒ UNION / INTERSECT / EXCEPT ‒ ORDER BY • Streaming & batch ‒ SELECT FROM WHERE ‒ GROUP BY [HAVING]: non-windowed; TUMBLE, HOP, SESSION windows ‒ JOIN: time-windowed INNER + OUTER JOIN; non-windowed INNER + OUTER JOIN ‒ User-defined functions: scalar, aggregation, table-valued (see the UDF sketch after the slide list)
  16. WHAT CAN I BUILD WITH THIS? • Data pipelines ‒ Transform, aggregate, and move events in real time • Low-latency ETL ‒ Convert and write streams to file systems, DBMS, K-V stores, indexes, … ‒ Ingest appearing files to produce streams • Stream & batch analytics ‒ Run analytical queries over bounded and unbounded data ‒ Query and compare historic and real-time data • Power live dashboards ‒ Compute and update data to visualize in real time
  17. SOUNDS GREAT! HOW CAN I USE IT? • Embed SQL queries in regular (Java/Scala) Flink applications ‒ Tight integration with the DataStream and DataSet APIs ‒ Mix and match with other libraries (CEP, ProcessFunction, Gelly) ‒ Package and operate queries like any other Flink application (see the first Scala sketch after the slide list) • Run SQL queries via Flink’s SQL CLI Client ‒ Interactive mode: submit a query and inspect its results ‒ Detached mode: submit a query and write its results to a sink system
  18. SQL CLI CLIENT – INTERACTIVE QUERIES (Architecture diagram: the user enters a query, e.g. SELECT user, COUNT(url) AS cnt FROM clicks GROUP BY user, in the SQL CLI Client; the client’s catalog and optimizer submit it as a job; the job reads from an event log or database/HDFS and maintains query state; the CLI result server streams the results back for viewing as a changelog or table.)
  19. THE NEW YORK TAXI RIDES DATA SET • The New York City Taxi & Limousine Commission provides a public data set about past taxi rides in New York City • We can derive a streaming table from the data ‒ Each ride is represented by a start and an end event • Table: TaxiRides ‒ rideId: BIGINT // ID of the taxi ride ‒ taxiId: BIGINT // ID of the taxi ‒ driverId: BIGINT // ID of the driver ‒ isStart: BOOLEAN // flag for pick-up (true) or drop-off (false) event ‒ lon: DOUBLE // longitude of the pick-up or drop-off location ‒ lat: DOUBLE // latitude of the pick-up or drop-off location ‒ rowTime: TIMESTAMP // time of the pick-up or drop-off event ‒ psgCnt: INT // number of passengers (see the TaxiRides sketch after the slide list)
  20. INSPECT TABLE / COMPUTE BASIC STATS • Show the whole table: SELECT * FROM TaxiRides • Count rides per number of passengers: SELECT psgCnt, COUNT(*) AS cnt FROM TaxiRides WHERE isStart GROUP BY psgCnt
  21. IDENTIFY POPULAR PICK-UP / DROP-OFF LOCATIONS • Compute every 5 minutes for each area the number of departing and arriving taxis: SELECT area, isStart, TUMBLE_END(rowTime, INTERVAL '5' MINUTE) AS cntEnd, COUNT(*) AS cnt FROM (SELECT rowTime, isStart, toAreaId(lon, lat) AS area FROM TaxiRides) GROUP BY area, isStart, TUMBLE(rowTime, INTERVAL '5' MINUTE)
  22. SQL CLI CLIENT – DETACHED QUERIES (Architecture diagram: the user enters an INSERT INTO query, e.g. INSERT INTO sinkTable SELECT user, COUNT(url) AS cnt FROM clicks GROUP BY user; the SQL CLI Client’s catalog and optimizer submit it as a job and return the cluster & job ID; the job reads from an event log or database/HDFS, maintains query state, and writes its results directly to the sink system.) (see the INSERT INTO sketch after the slide list)
  23. SERVING A DASHBOARD • Read taxi ride events from Kafka and continuously write per-area counts to Elasticsearch: INSERT INTO AreaCnts SELECT toAreaId(lon, lat) AS areaId, COUNT(*) AS cnt FROM TaxiRides WHERE isStart GROUP BY toAreaId(lon, lat)
  24. THERE’S MORE TO COME… • Alibaba is contributing features of its Flink fork Blink back • Major improvements for SQL and the Table API ‒ Better coverage of SQL features: full TPC-DS support ‒ Better performance: 10x compared to the current state ‒ Better connectivity: external catalogs and connectors • Extending the scope of the Table API ‒ Expanded support for user-defined functions ‒ Support for machine learning pipelines & an algorithm library
  25. SUMMARY • Unification of stream and batch is important • Flink SQL solves many streaming and batch use cases • In production at Alibaba, Huawei, Lyft, Uber, and others • Query deployment as an application or via the CLI • Get involved, discuss, and contribute!
  26. THANK YOU! “Stream Processing with Apache Flink” is available as an O’Reilly Early Release!
  27. THANK YOU! @fhueske @platform_da @ApacheFlink WE ARE HIRING: da-platform.com/careers
  28. AVERAGE RIDE DURATION PER PICK-UP LOCATION • Join start and end ride events on rideId and compute the average ride duration per pick-up location: SELECT pickUpArea, AVG(timeDiff(s.rowTime, e.rowTime) / 60000) AS avgDuration FROM (SELECT rideId, rowTime, toAreaId(lon, lat) AS pickUpArea FROM TaxiRides WHERE isStart) s JOIN (SELECT rideId, rowTime FROM TaxiRides WHERE NOT isStart) e ON s.rideId = e.rideId AND e.rowTime BETWEEN s.rowTime AND s.rowTime + INTERVAL '1' HOUR GROUP BY pickUpArea
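
The Scala sketches below are not part of the original deck; they illustrate the preceding slides under the assumption of Flink 1.7 and its legacy planner. This first one embeds the clicks query from slides 7-10 in a regular Flink application, as described on slide 17, and uses the conversions from slide 14: the click stream enters the Table API through an append conversion, and the continuously updated result leaves it as a retract (changelog) stream. The ClickEvent case class, the inline example data, and the job name are made up for the sketch.

    import org.apache.flink.streaming.api.scala._
    import org.apache.flink.table.api.TableEnvironment
    import org.apache.flink.table.api.scala._
    import org.apache.flink.types.Row

    // Hypothetical event type; the slides only show the logical schema (user, cTime, url).
    case class ClickEvent(user: String, cTime: Long, url: String)

    object ClickCountJob {
      def main(args: Array[String]): Unit = {
        val env = StreamExecutionEnvironment.getExecutionEnvironment
        val tEnv = TableEnvironment.getTableEnvironment(env)

        // Stand-in source; a real application would read the clicks from Kafka or files.
        val clickStream: DataStream[ClickEvent] = env.fromElements(
          ClickEvent("Mary", 0L, "https://a"),
          ClickEvent("Bob", 0L, "https://b"),
          ClickEvent("Mary", 2000L, "https://c"))

        // Append conversion (slide 14): every stream record becomes an inserted row
        // of the dynamic table "clicks".
        tEnv.registerDataStream("clicks", clickStream, 'user, 'cTime, 'url)

        // The continuous query from slides 7-10; the equivalent Table API version is
        //   tEnv.scan("clicks").groupBy('user).select('user, 'url.count as 'cnt)
        // `user` is a reserved keyword in Flink SQL, hence the backticks.
        val result = tEnv.sqlQuery(
          "SELECT `user`, COUNT(url) AS cnt FROM clicks GROUP BY `user`")

        // Changelog/retract conversion (slide 14): the result table is updated on every
        // click, so an update arrives as (false, oldRow) followed by (true, newRow).
        tEnv.toRetractStream[Row](result).print()

        env.execute("clicks per user")
      }
    }

Packaged like this, the query is deployed and operated like any other Flink application, which is the first of the two options on slide 17.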
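
The demo queries on slides 21, 23, and 28 call the scalar UDFs toAreaId(lon, lat) and timeDiff(start, end), which fall under the user-defined functions listed on slide 15. Their real implementations are not shown in the webinar, so the classes below are only plausible stand-ins: in Flink 1.7 a scalar UDF is a class that extends ScalarFunction and defines an eval method, registered by name on the table environment.

    import org.apache.flink.table.functions.ScalarFunction

    // Hypothetical stand-in for the demo's toAreaId: maps a longitude/latitude pair
    // to the ID of a cell in a regular 0.005° x 0.005° grid.
    class ToAreaId extends ScalarFunction {
      def eval(lon: Double, lat: Double): Long = {
        val x = ((lon + 180.0) / 0.005).toLong // 0 .. 71,999
        val y = ((lat + 90.0) / 0.005).toLong  // 0 .. 35,999
        x * 100000L + y                        // unique cell ID
      }
    }

    // Hypothetical stand-in for the demo's timeDiff: difference between two event
    // timestamps in milliseconds (end minus start).
    class TimeDiff extends ScalarFunction {
      def eval(startTs: java.sql.Timestamp, endTs: java.sql.Timestamp): Long =
        endTs.getTime - startTs.getTime
    }

    // Registration makes the functions callable from SQL by name:
    //   tEnv.registerFunction("toAreaId", new ToAreaId)
    //   tEnv.registerFunction("timeDiff", new TimeDiff)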
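
Slide 19 only gives the logical schema of TaxiRides; in the webinar the table comes preconfigured in the SQL CLI catalog. As a sketch of what that setup could look like in code (the case class, the stand-in source, and the 60-second out-of-orderness bound are assumptions), the snippet below registers a ride stream with an event-time attribute so that the windowed query from slide 21 can run on it.

    import org.apache.flink.streaming.api.TimeCharacteristic
    import org.apache.flink.streaming.api.functions.timestamps.BoundedOutOfOrdernessTimestampExtractor
    import org.apache.flink.streaming.api.scala._
    import org.apache.flink.streaming.api.windowing.time.Time
    import org.apache.flink.table.api.TableEnvironment
    import org.apache.flink.table.api.scala._
    import org.apache.flink.types.Row

    // Mirrors the logical schema on slide 19; eventTime carries the epoch millis
    // that back the rowTime attribute.
    case class TaxiRide(rideId: Long, taxiId: Long, driverId: Long, isStart: Boolean,
                        lon: Double, lat: Double, psgCnt: Int, eventTime: Long)

    object PopularAreasJob {
      def main(args: Array[String]): Unit = {
        val env = StreamExecutionEnvironment.getExecutionEnvironment
        env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime)
        val tEnv = TableEnvironment.getTableEnvironment(env)

        // Stand-in source; the webinar reads the ride events from a Kafka topic.
        val rides: DataStream[TaxiRide] = env.fromElements(
          TaxiRide(1L, 10L, 100L, true, -73.99, 40.75, 2, 1000L),
          TaxiRide(1L, 10L, 100L, false, -73.98, 40.76, 2, 600000L))

        // Assign event-time timestamps and watermarks (up to 60 s out of order).
        val ridesWithTs = rides.assignTimestampsAndWatermarks(
          new BoundedOutOfOrdernessTimestampExtractor[TaxiRide](Time.seconds(60)) {
            override def extractTimestamp(r: TaxiRide): Long = r.eventTime
          })

        // 'rowTime.rowtime appends an event-time attribute backed by the stream's
        // timestamps; TUMBLE(rowTime, ...) on slide 21 requires such an attribute.
        tEnv.registerDataStream("TaxiRides", ridesWithTs,
          'rideId, 'taxiId, 'driverId, 'isStart, 'lon, 'lat, 'psgCnt, 'eventTime,
          'rowTime.rowtime)

        // Hypothetical UDF from the previous sketch.
        tEnv.registerFunction("toAreaId", new ToAreaId)

        // The query from slide 21; windowed aggregates are append-only, so the
        // result can leave the Table API as a plain append stream.
        val popularAreas = tEnv.sqlQuery(
          """SELECT area, isStart, TUMBLE_END(rowTime, INTERVAL '5' MINUTE) AS cntEnd, COUNT(*) AS cnt
            |FROM (SELECT rowTime, isStart, toAreaId(lon, lat) AS area FROM TaxiRides)
            |GROUP BY area, isStart, TUMBLE(rowTime, INTERVAL '5' MINUTE)""".stripMargin)

        tEnv.toAppendStream[Row](popularAreas).print()

        env.execute("popular pick-up / drop-off locations")
      }
    }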
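
Slide 22 submits a detached INSERT INTO query from the SQL CLI; a similar effect can be had programmatically with a registered table sink and sqlUpdate(). The snippet continues the TaxiRides setup from the previous sketch (tEnv, the TaxiRides table, and the toAreaId function are assumed to be in scope); the sink name, output path, delimiter, and the choice of a CSV sink instead of the webinar's Elasticsearch sink are assumptions made for the sketch.

    import org.apache.flink.api.common.typeinfo.{TypeInformation, Types}
    import org.apache.flink.table.sinks.CsvTableSink

    // Register an append-only sink table. The windowed aggregate below is append-only
    // and can be written to it; a non-windowed GROUP BY result could not, because it
    // is continuously updated.
    tEnv.registerTableSink(
      "PopularAreas",
      Array("area", "isStart", "cntEnd", "cnt"),
      Array[TypeInformation[_]](Types.LONG, Types.BOOLEAN, Types.SQL_TIMESTAMP, Types.LONG),
      new CsvTableSink("/tmp/popular-areas", "|"))

    // INSERT INTO writes the query result directly to the registered sink, analogous
    // to the detached query on slide 22; the statement runs when env.execute() is called.
    tEnv.sqlUpdate(
      """INSERT INTO PopularAreas
        |SELECT area, isStart, TUMBLE_END(rowTime, INTERVAL '5' MINUTE) AS cntEnd, COUNT(*) AS cnt
        |FROM (SELECT rowTime, isStart, toAreaId(lon, lat) AS area FROM TaxiRides)
        |GROUP BY area, isStart, TUMBLE(rowTime, INTERVAL '5' MINUTE)""".stripMargin)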
