© Cloudera, Inc. All rights reserved.
DRUID AND HIVE TOGETHER
USE CASES AND BEST PRACTICES
Nishant Bangarwa
© Cloudera, Inc. All rights reserved. 2
AGENDA
Motivation
Introduction to Druid
Hive and Druid
Performance Numbers
Demo
© Cloudera, Inc. All rights reserved. 3
Database popularity trend over the last 24 months
© Cloudera, Inc. All rights reserved. 4
Challenges with specialized DBs
• Each specialized DB has its own dialect and API
• Diverse security and audit mechanisms
• Different governance models
• Data from different sources must be combined on the client side
• Need a solution that provides performance without added complexity
© Cloudera, Inc. All rights reserved. 5
Query Federation with Apache Hive
Extensible Storage Handler
• Input Format
• Output Format
• SerDe
• Rules for pushing computations
• Filters, Aggregates, Sort, Limit, etc.
• Transform from SQL to specialized dialects (see the federated query sketch below)
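For example, once a Druid datasource is registered through the storage handler (shown later in this deck), it can be joined with a native Hive table in a single SQL statement. A minimal sketch, assuming a Druid-backed table druid_metrics and a regular Hive table users_dim (both names hypothetical):

SELECT d.page, u.country, SUM(d.c_added) AS chars_added
FROM druid_metrics d               -- Druid-backed table (storage handler)
JOIN users_dim u                   -- native Hive table
  ON d.`user` = u.`user`
GROUP BY d.page, u.country;

Hive pushes the computation it can (filters, aggregates) down to Druid and executes the join and any remaining operators itself.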
© Cloudera, Inc. All rights reserved. 6
Introduction to Apache Druid
High-performance analytics data store for timeseries data
© Cloudera, Inc. All rights reserved. 7
Companies Using Druid
http://druid.io/druid-powered
© Cloudera, Inc. All rights reserved. 8
When to use Druid ?
• Event data / timeseries data
• Realtime – Need to analyze events as they happen.
• Delays can lead to business loss e.g. Fraud Detection
• High Data Ingestion rate
• Scalable horizontally
• Queries generally involve aggregations and filtering on time
• Results for last quarter
• Aggregate comparisons over time, e.g. this week compared to last week (see the sketch after this list)
• Result set is much smaller than the actual dataset being queried
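To illustrate the last two points, a Druid-friendly query filters on time, aggregates, and returns far fewer rows than it scans. A hedged sketch in Hive SQL against a hypothetical events table:

-- Daily event counts for the last two weeks, e.g. to compare week over week
SELECT to_date(`__time`) AS day, COUNT(*) AS events
FROM events
WHERE `__time` >= date_sub(current_date, 14)
GROUP BY to_date(`__time`);

Millions of raw events reduce to at most fourteen result rows.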
© Cloudera, Inc. All rights reserved. 9
Common Use Cases
• User activity and behavior analysis
• clickstreams, viewstreams and activity streams
• measuring user engagement, tracking A/B test data for product releases, and
understanding usage patterns
• Application performance management
• operational data generated by applications
• identify bottlenecks and troubleshoot issues in real time
• IoT and device metrics
• Ingest machine generated data in real-time
• optimize hardware resources, identify issues, detect anomalies
• Digital marketing
• understand advertising campaign performance, click through rates, conversion rates
© Cloudera, Inc. All rights reserved. 10
When NOT to use Druid ?
• updating existing records using a primary key
• updates must be done by rebuilding segments (re-ingestion)
• queries that dump the entire dataset
• joining one big fact table to another big fact table
• query latency is not important for the business use case
• offline reporting systems
© Cloudera, Inc. All rights reserved. 11
Key Druid Features
• Column-oriented Storage
• Sub-Second query times
• Arbitrary slicing and dicing of data
• Native Search Indexes
• Horizontally Scalable
• Streaming and Batch Ingestion
• Automatic Data Summarization
• Time-based Partitioning
• Flexible Schemas
• Rolling Upgrades
© Cloudera, Inc. All rights reserved. 12
Druid Concepts
Time Based Partitioning
1. Time-partitioned segment files
2. Segments are versioned to support batch overrides
3. Query results are cached per segment
[Diagram: segments laid out along the time axis — Segment 1 (version 1, Monday), Segment 2 (version 1, Tuesday), Segment 3 (version 2, Wednesday), Segment 4 (version 1, Thursday), Segments 5_1 and 5_2 (version 1, Friday)]
© Cloudera, Inc. All rights reserved. 13
Druid Architecture
[Diagram: streaming data flows into realtime index tasks on realtime nodes, batch data is loaded into historical nodes; realtime nodes hand segments off to historical nodes, and broker nodes route queries across both]
© Cloudera, Inc. All rights reserved. 15
Apache Hive and Apache Druid
Apache Hive:
• Large-scale queries
• Joins, subqueries
• Windowing functions
• Transformations
• Complex aggregations
• Advanced sorting
• UDFs
Apache Druid:
• Queries to power visualizations
• Needles-in-a-haystack
• Dimensional aggregates
• TopN queries
• Timeseries queries
• Min/Max values
• Streaming ingestion
© Cloudera, Inc. All rights reserved. 16
Integration Benefits
1. Streaming Ingestion
2. Single SQL dialect and API
3. Central security controls and audit trail
4. Unified governance
5. Ability to combine data from multiple sources
6. Data independence
© Cloudera, Inc. All rights reserved. 17
Druid data sources in Hive
Registering Existing Druid data sources
Simple CREATE EXTERNAL TABLE statement
CREATE EXTERNAL TABLE druid_table_1
STORED BY 'org.apache.hadoop.hive.druid.DruidStorageHandler'
TBLPROPERTIES ("druid.datasource" = "wikiticker");
(druid_table_1: Hive table name; DruidStorageHandler: Hive storage handler classname; wikiticker: Druid data source name)
⇢ Broker node endpoint specified as a Hive configuration parameter
⇢ Automatic Druid data schema discovery: segment metadata query
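A minimal sketch of supplying that broker endpoint, assuming a broker running on broker-host:8082 (hive.druid.broker.address.default is the configuration property used by the Hive Druid storage handler):

SET hive.druid.broker.address.default=broker-host:8082;
-- With the broker address set, the CREATE EXTERNAL TABLE above triggers a
-- segment metadata query against Druid to discover column names and types.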
© Cloudera, Inc. All rights reserved. 18
Druid data sources in Hive
Creating Druid data sources
Use Create Table As Select (CTAS) statement
CREATE EXTERNAL TABLE druid_table
STORED BY 'org.apache.hadoop.hive.druid.DruidStorageHandler'
TBLPROPERTIES ("druid.segment.granularity" = "DAY") AS
SELECT time, page, user, c_added, c_removed FROM src;
(druid_table: Hive table name; DruidStorageHandler: Hive storage handler classname; druid.segment.granularity: Druid segment granularity)
⇢ Inference of Druid column types (timestamp, dimensions, metrics) depends on the Hive column types
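As an illustration of that inference, a CTAS can make the mapping explicit with casts. A sketch with hypothetical source columns (the timestamp column becomes Druid's __time, strings become dimensions, numerics become metrics):

CREATE EXTERNAL TABLE druid_wiki
STORED BY 'org.apache.hadoop.hive.druid.DruidStorageHandler'
TBLPROPERTIES ("druid.segment.granularity" = "DAY")
AS SELECT
  CAST(`ts` AS timestamp) AS `__time`,   -- timestamp column -> Druid time column
  CAST(page AS string) AS page,          -- string columns -> dimensions
  CAST(added AS int) AS added            -- numeric columns -> metrics
FROM wiki_src;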
© Cloudera, Inc. All rights reserved. 19
Druid data sources in Hive
File Sink operator uses Druid output format
– Creates segment files and registers them in Druid
– Data needs to be partitioned by time granularity
• Granularity specified as configuration parameter
Optional Data Summarization
Original CTAS
physical plan
__time                 page    user    c_added  c_removed
2011-01-01T01:05:00Z   Justin  Boxer   1800     25
2011-01-02T19:00:00Z   Justin  Reach   2912     42
2011-01-01T11:00:00Z   Ke$ha   Xeno    1953     17
2011-01-02T13:00:00Z   Ke$ha   Helz    3194     170
2011-01-02T18:00:00Z   Miley   Ashu    2232     34
CTAS query results shown above. Physical plan: Table Scan → Select → File Sink
© Cloudera, Inc. All rights reserved. 20
Druid data sources in Hive
File Sink operator uses Druid output format
– Creates segment files and registers them in Druid
– Data needs to be partitioned by time granularity
• Granularity specified as configuration parameter
Optional Data Summarization
Rewritten CTAS
physical plan
CTAS query results shown below. Rewritten physical plan: Table Scan → Select → Reduce → File Sink
__time                 page    user    c_added  c_removed  __time_granularity
2011-01-01T01:05:00Z   Justin  Boxer   1800     25         2011-01-01T00:00:00Z
2011-01-02T19:00:00Z   Justin  Reach   2912     42         2011-01-02T00:00:00Z
2011-01-01T11:00:00Z   Ke$ha   Xeno    1953     17         2011-01-01T00:00:00Z
2011-01-02T13:00:00Z   Ke$ha   Helz    3194     170        2011-01-02T00:00:00Z
2011-01-02T18:00:00Z   Miley   Ashu    2232     34         2011-01-02T00:00:00Z
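Conceptually, the rewrite is what the following query would express by hand: a granularity-truncated time column is added and the data is redistributed on it, which is what introduces the Reduce stage. A sketch, not the literal generated plan:

SELECT `time`, page, `user`, c_added, c_removed,
       CAST(to_date(`time`) AS timestamp) AS `__time_granularity`  -- truncate to DAY
FROM src
DISTRIBUTE BY `__time_granularity`;  -- rows for the same day land on the same reducer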
© Cloudera, Inc. All rights reserved. 21
Druid data sources in Hive
Creating Streaming Druid data sources
Use Create Table As Select (CTAS) statement
CREATE EXTERNAL TABLE druid_streaming
(`__time` timestamp, `dimension1` string, `metric1` int, `metric2` double, ...)
STORED BY 'org.apache.hadoop.hive.druid.DruidStorageHandler'
TBLPROPERTIES ( "druid.segment.granularity" = "DAY",
"kafka.bootstrap.servers" = "localhost:9092", "kafka.topic" = "topic1");
(druid_streaming: Hive table name; DruidStorageHandler: Hive storage handler classname; druid.segment.granularity: Druid segment granularity; kafka.bootstrap.servers, kafka.topic: Kafka-related properties)
© Cloudera, Inc. All rights reserved. 22
Druid data sources in Hive
Managing Streaming Ingestion from Hive
Use Alter Table statement
ALTER TABLE druid_streaming SET TBLPROPERTIES('druid.kafka.ingestion' = 'START');
ALTER TABLE druid_streaming SET TBLPROPERTIES('druid.kafka.ingestion' = 'STOP');
ALTER TABLE druid_streaming SET TBLPROPERTIES('druid.kafka.ingestion' = 'RESET');
(druid_streaming: Hive table name; druid.kafka.ingestion: Kafka ingestion control property)
⇢ RESET clears the Kafka offsets maintained by Druid for ingestion
© Cloudera, Inc. All rights reserved. 23
Querying Druid datasources
• Automatic rewriting when query is expressed over Druid table
– Powered by Apache Calcite
– Main challenge: identify patterns in logical plan corresponding to different
kinds of Druid queries (Timeseries, TopN, GroupBy, Select)
• Translate (sub)plan of operators into valid Druid JSON query
– Druid query is encapsulated within Hive TableScan operator
• Hive TableScan uses Druid input format
– Submits query to Druid and generates records out of the query results
• It might not be possible to push all computation to Druid
– Our contract is that the query should always be executed (see the EXPLAIN sketch below)
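To inspect what was pushed down, EXPLAIN the query over the Druid table; for Druid-backed tables the TableScan operator carries the generated Druid query in its properties. An illustrative, abridged sketch (exact output varies by version):

EXPLAIN
SELECT `user`, sum(`c_added`) AS s
FROM druid_table_1
WHERE EXTRACT(year FROM `__time`) BETWEEN 2010 AND 2011
GROUP BY `user`
ORDER BY s DESC
LIMIT 10;

-- TableScan
--   properties:
--     druid.query.json  {"queryType":"groupBy","dataSource":"wikiticker",...}
--     druid.query.type  groupBy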
© Cloudera, Inc. All rights reserved. 24
Querying Druid datasources
Apache Hive - SQL query
SELECT `user`, sum(`c_added`) AS s
FROM druid_table_1
WHERE EXTRACT(year FROM `__time`)
BETWEEN 2010 AND 2011
GROUP BY `user`
ORDER BY s DESC
LIMIT 10;
• Top 10 users that added the most characters from the beginning of 2010 until the end of 2011
Query logical plan: Druid Scan → Filter → Project → Aggregate → Sort Limit → Sink
© Cloudera, Inc. All rights reserved. 25
Querying Druid datasources
Apache Hive - SQL query
SELECT `user`, sum(`c_added`) AS s
FROM druid_table_1
WHERE EXTRACT(year FROM `__time`)
BETWEEN 2010 AND 2011
GROUP BY `user`
ORDER BY s DESC
LIMIT 10;
• Initial Plan:
– Scan is executed in Druid (select query)
– Rest of the query is executed in Hive
Query logical plan: Druid Scan → Filter → Project → Aggregate → Sort Limit → Sink (only the scan runs as a Druid query; the remaining operators run in Apache Hive)
© Cloudera, Inc. All rights reserved. 26
Querying Druid datasources
Apache Hive - SQL query
SELECT `user`, sum(`c_added`) AS s
FROM druid_table_1
WHERE EXTRACT(year FROM `__time`)
BETWEEN 2010 AND 2011
GROUP BY `user`
ORDER BY s DESC
LIMIT 10;
• Rewriting rules push computation into Druid
– Need to check that an operator meets certain pre-conditions before it is pushed to Druid
Query logical plan: Druid Scan → Filter → Project → Aggregate → Sort Limit → Sink
© Cloudera, Inc. All rights reserved. 27
Querying Druid datasources
Apache Hive - SQL query
SELECT `user`, sum(`c_added`) AS s
FROM druid_table_1
WHERE EXTRACT(year FROM `__time`)
BETWEEN 2010 AND 2011
GROUP BY `user`
ORDER BY s DESC
LIMIT 10;
• Rewriting rules push computation into Druid
– Need to check that an operator meets certain pre-conditions before it is pushed to Druid
Query logical plan: Druid Scan → Filter → Project → Aggregate → Sort Limit → Sink (rewriting rule applied; Druid query so far: select)
© Cloudera, Inc. All rights reserved. 28
Querying Druid datasources
Apache Hive - SQL query
SELECT `user`, sum(`c_added`) AS s
FROM druid_table_1
WHERE EXTRACT(year FROM `__time`)
BETWEEN 2010 AND 2011
GROUP BY `user`
ORDER BY s DESC
LIMIT 10;
• Rewriting rules push computation into Druid
– Need to check that an operator meets certain pre-conditions before it is pushed to Druid
Query logical plan: Druid Scan → Filter → Project → Aggregate → Sort Limit → Sink (next rewriting rule applied; Druid query still a select)
© Cloudera, Inc. All rights reserved. 29
Querying Druid datasources
Apache Hive - SQL query
SELECT `user`, sum(`c_added`) AS s
FROM druid_table_1
WHERE EXTRACT(year FROM `__time`)
BETWEEN 2010 AND 2011
GROUP BY `user`
ORDER BY s DESC
LIMIT 10;
• Rewriting rules push computation into Druid
– Need to check that an operator meets certain pre-conditions before it is pushed to Druid
Query logical plan: Druid Scan → Filter → Project → Aggregate → Sort Limit → Sink (rewriting rule applied; the Druid query is now a groupBy)
© Cloudera, Inc. All rights reserved. 30
Querying Druid datasources
Apache Hive - SQL query
SELECT `user`, sum(`c_added`) AS s
FROM druid_table_1
WHERE EXTRACT(year FROM `__time`)
BETWEEN 2010 AND 2011
GROUP BY `user`
ORDER BY s DESC
LIMIT 10;
• Rewriting rules push computation into Druid
– Need to check that an operator meets certain pre-conditions before it is pushed to Druid
Query logical plan: Druid Scan → Filter → Project → Aggregate → Sort Limit → Sink (final rewriting rule applied; Druid query: groupBy)
© Cloudera, Inc. All rights reserved. 31
Querying Druid datasources
Query logical plan fully pushed to Druid as a groupBy query. The corresponding Druid JSON query:
{
  "queryType": "groupBy",
  "dataSource": "users_index",
  "granularity": "all",
  "dimensions": [ "user" ],
  "aggregations": [ { "type": "longSum", "name": "s", "fieldName": "c_added" } ],
  "limitSpec": {
    "limit": 10,
    "columns": [ { "dimension": "s", "direction": "descending" } ]
  },
  "intervals": [ "2010-01-01T00:00:00.000/2012-01-01T00:00:00.000" ]
}
Query physical plan: Druid Scan → File Sink
© Cloudera, Inc. All rights reserved. 32
Druid input format
• Submits query to Druid and generates records out of the query results
• Current version
– Timeseries, TopN, and GroupBy queries are not partitioned; they are sent directly to the Druid broker
– Select queries: realtime and historical nodes are contacted directly
[Diagram: for Timeseries, TopN, and GroupBy queries, a single Table Scan with one record reader queries the broker; for Select queries, multiple Table Scans with their own record readers contact each realtime and historical node directly]
© Cloudera, Inc. All rights reserved. 33
Performance and Scalability: Fast Facts
• Most Events per Day: 300 billion events/day (Metamarkets)
• Most Computed Metrics: 1 billion metrics/min (Jolata)
• Largest Cluster: 200 nodes (Snap Inc.)
• Largest Hourly Ingestion: 2 TB/hour (Netflix)
© Cloudera, Inc. All rights reserved. 34
Performance Numbers
• Query Latency
• average – 500 ms
• 90%ile < 1 sec
• 95%ile < 5 sec
• 99%ile < 10 sec
• Query Volume
• 1000s of queries per minute
• Benchmarking code
• https://github.com/druid-io/druid-benchmark
© Cloudera, Inc. All rights reserved. 35
Performance Numbers
SSB Benchmark 1TB Scale
© Cloudera, Inc. All rights reserved. 36
Useful Resources
• Druid website – http://druid.io
• Druid User Group – users@druid.incubator.apache.org
• Druid Dev Group – dev@druid.incubator.apache.org
• Hive Druid Integration – https://cwiki.apache.org/confluence/display/Hive/Druid+Integration
• Blogs – https://hortonworks.com/blog/apache-hive-druid-part-1-3/
• Query Federation with Apache Hive – https://hortonworks.com/blog/query-federation-with-hive/
© Cloudera, Inc. All rights reserved.
THANK YOU