Apache Drill is a new Apache incubator project. Its goal is to provide a distributed system for interactive analysis of large-scale datasets. Inspired by Google's Dremel technology, it aims to process trillions of records in seconds. We will cover the goals of Apache Drill, its use cases, and how it relates to Hadoop, MongoDB, and other large-scale distributed systems. We'll also discuss details of the architecture, points of extensibility, data flow, and our first query languages (DrQL and SQL).
2. /home/gera: whoami
■ Saarland University
■ 1st intern in Immortal DB @ Microsoft Research
■ JMS, RDBMS HA @ Oracle
■ Hadoop MapReduce / Hadoop Core
■ Founding member of Apache Drill
3. ■ Open enterprise-grade distribution for Hadoop
● Easy, dependable and fast
● Open source with standards-based extensions
■ MapR is deployed at 1000’s of companies
● From small Internet startups to Fortune 100
■ MapR customers analyze massive amounts of data:
● Hundreds of billions of events daily
● 90% of the world’s Internet population monthly
● $1 trillion in retail purchases annually
■ MapR in the Cloud:
● partnered with Google: Hadoop on Google Compute Engine
● partnered with Amazon: M3/M5 options for Elastic Map Reduce
4. Agenda
■ What?
● What exactly does Drill do?
■ Why?
● Why do we need Apache Drill?
■ Who?
● Who is doing this?
■ How?
● How does Drill work inside?
■ Conclusion
● How can you help?
● Where can you find out more?
5. Apache Drill Overview
■ Drill overview
● Low latency interactive queries
● Standard ANSI SQL support
● Domain Specific Languages / Your own QL
■ Open-Source
● Apache Incubator
● 100’s involved across US and Europe
● Community consensus on API, functionality
6. Big Data Processing
                      Batch processing     Interactive analysis      Stream processing
Query runtime         Minutes to hours     Milliseconds to minutes   Never-ending
Data volume           TBs to PBs           GBs to PBs                Continuous stream
Programming model     MapReduce            Queries                   DAG
Users                 Developers           Analysts and developers   Developers
Google project        MapReduce            Dremel
Open source project   Hadoop MapReduce     Apache Drill              Storm and S4
8. Nested Query Languages
■ DrQL
● SQL-like query language for nested data
● Compatible with Google BigQuery/Dremel
● BigQuery applications should work with Drill
● Designed to support efficient column-based processing
● No record assembly during query processing
■ Mongo Query Language
● {$query: {x: 3, y: "abc"}, $orderby: {x: 1}}
■ Other languages/programming models can plug in
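The Mongo-style query shown above can be illustrated with a toy evaluator over plain Python dicts. This is purely illustrative (the `run_query` helper is hypothetical, not Drill or MongoDB code), but it shows what `$query` (match) and `$orderby` (sort) mean:

```python
# Toy evaluator for a Mongo-style query spec applied to a list of dicts.
# run_query is a hypothetical helper for illustration only.
def run_query(records, spec):
    query = spec.get("$query", {})
    orderby = spec.get("$orderby", {})
    # Keep records whose fields match every key/value pair in $query.
    hits = [r for r in records if all(r.get(k) == v for k, v in query.items())]
    # Apply sort keys last-to-first so earlier keys take precedence;
    # 1 means ascending, -1 means descending.
    for field, direction in reversed(list(orderby.items())):
        hits.sort(key=lambda r: r[field], reverse=(direction == -1))
    return hits

records = [{"x": 3, "y": "abc"}, {"x": 1, "y": "abc"}, {"x": 3, "y": "zzz"}]
print(run_query(records, {"$query": {"x": 3, "y": "abc"}, "$orderby": {"x": 1}}))
```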
9. Nested Data Model
■ The data model in Dremel is Protocol Buffers
● Nested
● Schema
■ Apache Drill is designed to support multiple data models
● Schema: Protocol Buffers, Apache Avro, …
● Schema-less: JSON, BSON, …
■ Flat records are supported as a special case of nested data
● CSV, TSV, …
Avro IDL:

enum Gender {
  MALE, FEMALE
}

record User {
  string name;
  Gender gender;
  long followers;
}

JSON:

{
  "name": "Srivas",
  "gender": "Male",
  "followers": 100
}
{
  "name": "Raina",
  "gender": "Female",
  "followers": 200,
  "zip": "94305"
}
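A quick sketch of what "schema-less" means in practice: reading the two User records above as plain JSON, where a field such as "zip" may simply be absent from some records rather than declared optional in a schema.

```python
import json

# The two User records from the slide, parsed as schema-less JSON.
# No schema declares "zip"; it is just missing from the first record.
raw = """
[{"name": "Srivas", "gender": "Male", "followers": 100},
 {"name": "Raina", "gender": "Female", "followers": 200, "zip": "94305"}]
"""
users = json.loads(raw)
for u in users:
    # Absent fields are handled per record, not rejected by a schema check.
    print(u["name"], u.get("zip", "<no zip>"))
```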
10. Extensibility
■ Nested query languages
● Pluggable model
● DrQL
● Mongo Query Language
● Cascading
■ Distributed execution engine
● Extensible model (e.g., Dryad)
● Low-latency
● Fault tolerant
■ Nested data formats
● Pluggable model
● Column-based (ColumnIO/Dremel, Trevni, RCFile) and row-based (RecordIO, Avro, JSON, CSV)
● Schema (Protocol Buffers, Avro, CSV) and schema-less (JSON, BSON)
■ Scalable data sources
● Pluggable model
● Hadoop
● HBase
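The "pluggable model" for data sources can be sketched as a small interface: each source implements the same scan contract, and the engine stays source-agnostic. The `ScanPlugin`/`scan` names below are hypothetical illustrations, not Drill's actual API:

```python
from abc import ABC, abstractmethod

# Hypothetical sketch of a pluggable data-source interface;
# names are illustrative, not Drill's real plugin API.
class ScanPlugin(ABC):
    @abstractmethod
    def scan(self, selection):
        """Yield records for the given selection (e.g. a file or table name)."""

class JsonFileScan(ScanPlugin):
    """Stand-in for a source backed by JSON files on Hadoop, HBase, etc."""
    def __init__(self, records):
        self.records = records
    def scan(self, selection):
        for r in self.records:
            yield r

plugin = JsonFileScan([{"id": 1}, {"id": 2}])
print(list(plugin.scan("donuts")))
```

The engine only ever calls `scan`, so adding a new source (HBase, a REST API, ...) means writing one new plugin class rather than changing the engine.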
11. Design Principles
Flexible
● Pluggable query languages
● Extensible execution engine
● Pluggable data formats
● Column-based and row-based
● Schema and schema-less
● Pluggable data sources
● N(ot)O(nly) Hadoop

Easy
● Unzip and run
● Zero configuration
● Reverse DNS not needed
● IP addresses can change
● Clear and concise log messages

Dependable
● No SPOF
● Instant recovery from crashes

Fast
● Minimum Java core
● C/C++ core with Java support
● Google C++ style guide
● Min latency and max throughput (limited only by hardware)
13. Execution Engine
■ Operator layer is serialization-aware
● Processes individual records
■ Execution layer is not serialization-aware
● Processes batches of records (blobs/JSON trees)
● Responsible for communication, dependencies and fault tolerance
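The layering above can be illustrated with a toy pipeline: the execution layer moves whole batches between operators, while each operator only ever sees one record at a time. This is a simplified sketch, not Drill's engine code:

```python
# Toy illustration of the two layers described above.
def filter_operator(record):
    # Operator layer: a per-record predicate (serialization-aware in Drill).
    return record["ppu"] < 1.00

def run_batches(batches, operator):
    # Execution layer: moves batches around without looking inside records.
    out = []
    for batch in batches:
        out.append([r for r in batch if operator(r)])
    return out

batches = [[{"ppu": 0.55}, {"ppu": 1.25}], [{"ppu": 0.75}]]
print(run_batches(batches, filter_operator))
```

Keeping the execution layer at batch granularity is what lets it handle scheduling, communication, and fault tolerance without caring how records are encoded.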
14. DrQL Example
local-logs = donuts.json:

{
  "id": "0003",
  "type": "donut",
  "name": "Old Fashioned",
  "ppu": 0.55,
  "sales": 300,
  "batters": {
    "batter": [
      { "id": "1001", "type": "Regular" },
      { "id": "1002", "type": "Chocolate" }
    ]
  },
  "topping": [
    { "id": "5001", "type": "None" },
    { "id": "5002", "type": "Glazed" },
    { "id": "5003", "type": "Chocolate" },
    { "id": "5004", "type": "Maple" }
  ]
}

Query:

SELECT
  ppu,
  typeCount = COUNT(*) OVER PARTITION BY ppu,
  quantity = SUM(sales) OVER PARTITION BY ppu,
  sales = SUM(ppu*sales) OVER PARTITION BY ppu
FROM local-logs donuts
WHERE donuts.ppu < 1.00
ORDER BY donuts.ppu DESC;
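To make the windowed aggregates concrete, here is a plain-Python sketch of what they compute: for each qualifying record, COUNT and SUM run over all records sharing its ppu. The sample records are invented for illustration:

```python
# Plain-Python sketch of the windowed aggregates in the DrQL query above.
# Sample records are illustrative, not the full donuts.json dataset.
donuts = [
    {"type": "donut", "ppu": 0.55, "sales": 300},
    {"type": "donut", "ppu": 0.55, "sales": 100},
    {"type": "bar",   "ppu": 0.75, "sales": 50},
]

rows = []
for d in donuts:
    if d["ppu"] < 1.00:                                   # WHERE ppu < 1.00
        group = [x for x in donuts if x["ppu"] == d["ppu"]]
        rows.append({
            "ppu": d["ppu"],
            "typeCount": len(group),                      # COUNT(*) OVER PARTITION BY ppu
            "quantity": sum(x["sales"] for x in group),   # SUM(sales) OVER PARTITION BY ppu
            "sales": sum(x["ppu"] * x["sales"] for x in group),  # SUM(ppu*sales)
        })
rows.sort(key=lambda r: r["ppu"], reverse=True)           # ORDER BY ppu DESC
print(rows)
```

Unlike GROUP BY, the window form keeps one output row per input row, with the group-level aggregates attached to each.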
15. Query Components
■ User Query (DrQL) components:
● SELECT
● FROM
● WHERE
● GROUP BY
● HAVING
● (JOIN)
■ Logical operators:
● Scan
● Filter
● Aggregate
● (Join)