Databricks used to rely on a static, manually maintained wiki page for internal data exploration. We will discuss how we leverage Amundsen, an open source data discovery tool from Linux Foundation AI & Data, to improve productivity and trust by programmatically surfacing the most relevant datasets and SQL analytics dashboards, along with their important metadata, at Databricks internally.
We will also talk about how we integrate Amundsen with Databricks' infrastructure to surface metadata, including:
Surface the most popular tables used within Databricks
Support fuzzy search and faceted search for datasets
Surface rich metadata on datasets:
Lineage information (downstream table, upstream table, downstream jobs, downstream users)
Dataset owner
Dataset frequent users
Delta extended metadata (e.g. change history)
ETL job that generates the dataset
Column stats on numeric type columns
Dashboards that use the given dataset
Use the Databricks Data tab to show sample data
Surface metadata on dashboards including: creation time, last update time, tables used, etc.
Last but not least, we will discuss how we incorporate internal user feedback and plan to bring the same discovery productivity improvements to Databricks customers in the future.
2. Who
Tao Feng
▪ Engineer at Databricks
▪ Co-creator of Amundsen
▪ Apache Airflow PMC
▪ Previously worked at Lyft, LinkedIn, Oracle
Tianru Zhou
▪ Engineer at Databricks
▪ Previously worked at AWS Elasticsearch
4. Data-Driven Decisions
Analysts · Data Scientists · General Managers · Engineers · Experimenters · Product Managers
● Axiom: Good decisions are based on data
● Who needs Data? Anyone who wants to make good decisions
○ HR wants to ensure salaries are competitive with the market
○ A politician wants to optimize campaign strategy
5. Data-Driven Decisions
1. Data is Collected
2. Analyst Finds the Data
3. Analyst Understands the Data
4. Analyst Creates Report
5. Analyst Shares the Results
6. Someone Makes a Decision
6. Data Discovery Not Productive
● Data Scientists spend up to 30% of their time in Data Discovery
● Data Discovery in itself provides little to no intrinsic value. Impactful work happens in Analysis.
● The answer to these problems is Metadata / a Data Catalog
7. Data Catalog to the rescue
• Ease of documentation and discoverability
‒ Single searchable portal
‒ Display dependencies / lineage between data entities (tables, dashboards)
• Helps answer questions like:
‒ Where can I find data about ___?
‒ What is the context of the data?
‒ Who are the owners I can ask for access?
‒ How is the data created? Is the data trustworthy?
‒ How should I use the data? Are there sample queries or statistics around the columns?
‒ How frequently does the data refresh?
‒ ...
9. What is Amundsen
• In a nutshell, Amundsen is an open-source data discovery and metadata platform for improving the productivity of data analysts, data scientists, and engineers when interacting with data.
• Amundsen is currently hosted at Linux Foundation AI & Data (formerly LF AI) as an incubation project with open governance and an RFC process (e.g. blog post).
18. Central data quality issue portal
• Central portal for users to report data issues.
• Users can also see all past issues.
• Users can request further context / descriptions from owners through the portal.
19. Data Preview
• Supports data preview for datasets.
• Pluggable client for different BI / visualization tools (e.g. Apache Superset, BigQuery).
22. Databricks Lakehouse
BI Reports & Dashboards · Data Science Workspace · Machine Learning Lifecycle
DELTA ENGINE: structured transaction layer + high-performance query engine
Structured, Semi-Structured and Unstructured Data
23. Internal dataset discovery at Databricks
● Static, manually maintained wiki page for golden tables of the central workspace
● Metadata easily becomes stale
● Amundsen to the rescue!
28. Metadata surfaced in Amundsen
Lineage information:
• Downstream/Upstream tables
• Downstream jobs
• Downstream users of the table
• Job that writes the table
• Writer of the table
Statistics:
• Column stats
• Dataset frequent users
Extended information:
• Delta table extended metadata
• Redash Dashboards
• Sample data
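The lineage metadata above (upstream/downstream tables, jobs, users) can be pictured as a simple directed graph. A minimal sketch in Python, with made-up table names (this is illustrative, not the Amundsen data model):

```python
from collections import defaultdict

class Lineage:
    """Toy adjacency map for table-level lineage edges."""

    def __init__(self):
        self._down = defaultdict(set)  # upstream -> set of downstream nodes
        self._up = defaultdict(set)    # downstream -> set of upstream nodes

    def add_edge(self, upstream: str, downstream: str) -> None:
        self._down[upstream].add(downstream)
        self._up[downstream].add(upstream)

    def downstream(self, table: str) -> set:
        return set(self._down[table])

    def upstream(self, table: str) -> set:
        return set(self._up[table])

# Illustrative edges only; these table names are hypothetical.
lineage = Lineage()
lineage.add_edge("raw.usage_logs", "insights.read_events")
lineage.add_edge("insights.read_events", "lineage.raw_edges")
```

Walking `downstream()` recursively from a table would give the "downstream jobs / users" views shown in the UI.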
31. How is the lineage table generated?
Raw lineage pipeline (raw → processed lineage):
● Sources: usage_logs (ReadEventTable for reads, WriteEventTable for writes) and insights_table
● Cleaning + workload aggregation → graph: Read ↔ Workload ↔ Write
● Output: raw lineage table, with raw table paths
Raw → processed conversions:
● dbfs:/user/hive/… → db.table: string processing
● Paths → view conversion: get Delta metadata (DESCRIBE EXTENDED) + string processing + heuristics
● Mount point → blob path: get mount points (dbutils.fs.mounts()) + string processing
Result: processed lineage table, with table names
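The path-normalization steps above can be sketched in pure Python. The warehouse prefix, mount mapping, and helper names here are illustrative assumptions, not the actual Databricks pipeline (which pulls mounts from dbutils.fs.mounts() at runtime):

```python
# Hypothetical sketch of the raw-path -> table-name conversion step.
HIVE_WAREHOUSE_PREFIX = "dbfs:/user/hive/warehouse/"

# In the real pipeline, mount points come from dbutils.fs.mounts();
# this sample mapping is made up for illustration.
MOUNTS = {
    "/mnt/raw": "wasbs://raw@storageacct.blob.core.windows.net",
}

def resolve_mounts(path: str) -> str:
    """Rewrite a mount-point path to its underlying blob-store path."""
    for mount, target in MOUNTS.items():
        if path.startswith(mount):
            return target + path[len(mount):]
    return path

def path_to_table(path: str):
    """Convert a raw dbfs warehouse path into a db.table name, or None."""
    if not path.startswith(HIVE_WAREHOUSE_PREFIX):
        return None
    parts = path[len(HIVE_WAREHOUSE_PREFIX):].strip("/").split("/")
    # Hive warehouse layout: <db>.db/<table>
    if len(parts) >= 2 and parts[0].endswith(".db"):
        return f"{parts[0][:-3]}.{parts[1]}"
    # Tables in the default database live directly under the warehouse root.
    return f"default.{parts[0]}"
```

For example, `path_to_table("dbfs:/user/hive/warehouse/sales.db/orders")` yields `"sales.orders"`; paths that are not under the warehouse root fall through to the view-conversion and mount-resolution heuristics.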
32. Statistics information
• Column statistics for numeric data types
• Frequent users (raw usage data also comes from the usage_logs table)
Get column stats:
ANALYZE TABLE {table} COMPUTE STATISTICS FOR COLUMNS col1, col2
DESCRIBE EXTENDED {db}.{table} `{column name}`
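In Spark SQL, `DESCRIBE EXTENDED db.table column` returns (info_name, info_value) rows after statistics have been computed. A minimal sketch of collecting the stat rows into a dict; the sample rows are illustrative, not real output:

```python
# Stat row names emitted by Spark's column-level DESCRIBE EXTENDED.
NUMERIC_STATS = {"min", "max", "num_nulls", "distinct_count",
                 "avg_col_len", "max_col_len"}

def parse_column_stats(rows):
    """rows: iterable of (info_name, info_value) tuples from DESCRIBE EXTENDED."""
    stats = {}
    for name, value in rows:
        if name in NUMERIC_STATS and value not in (None, "NULL"):
            stats[name] = value
    return stats

# Illustrative sample of the result rows for one numeric column.
sample_rows = [
    ("col_name", "price"),
    ("data_type", "double"),
    ("min", "0.5"),
    ("max", "99.9"),
    ("num_nulls", "0"),
    ("distinct_count", "1200"),
]
```

`parse_column_stats(sample_rows)` keeps only the stat rows (min, max, num_nulls, distinct_count), which is the shape Amundsen needs to render per-column statistics.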
33. Delta table extended metadata
For a Delta table (or a view over a Delta table), we can run the following to extract extended metadata:
describe detail table_name
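`DESCRIBE DETAIL` returns a single row whose columns include format, location, createdAt, lastModified, numFiles, sizeInBytes, and partitionColumns. A sketch of selecting the fields to surface; the field list and sample row are illustrative assumptions, not the exact set Databricks surfaces:

```python
# Fields we (hypothetically) surface in Amundsen from DESCRIBE DETAIL output.
SURFACED_FIELDS = ("format", "location", "createdAt", "lastModified",
                   "numFiles", "sizeInBytes", "partitionColumns")

def extract_detail(row: dict) -> dict:
    """row: the DESCRIBE DETAIL result as a column-name -> value mapping."""
    return {k: row[k] for k in SURFACED_FIELDS if k in row}

# Illustrative sample row, not real output.
sample_row = {
    "format": "delta",
    "id": "internal-table-id",          # present in output, not surfaced here
    "location": "dbfs:/user/hive/warehouse/sales.db/orders",
    "createdAt": "2021-01-01T00:00:00Z",
    "lastModified": "2021-06-01T00:00:00Z",
    "numFiles": 42,
    "sizeInBytes": 123456789,
    "partitionColumns": ["date"],
}
```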
43. Notable RFCs / PRs
● AWS Neptune metadata datastore (RFC#13)
● MySQL metadata datastore (RFC#019, RFC#021, RFC#023)
● Lineage frontend and backend (RFC#025, RFC#032)
● ETL push model paradigm (PR)
● Other RFCs can be found here