The document discusses the development of an internal data pipeline platform at Indix to democratize access to data. It describes the scale of data at Indix, including over 2.1 billion product URLs and 8 TB of HTML data crawled daily. Previously, the data was not discoverable, schemas changed and were hard to track, and the need to write code limited who could access the data. The goals of the new platform were easy discovery of data, transparent schemas, minimal coding, UI-based workflows anyone could use, and optimized costs. The resulting platform, called MDA (Marketplace of Datasets and Algorithms), enables SQL-based workflows using Spark. It has continued improving since its first release in 2016.
4. Indix - The “Google Maps” of Products
Google Maps enables businesses to build location-aware software: ~3.6 million websites use Google Maps.
Indix enables businesses to build product-aware software: Indix catalogs over 2.1 billion product offers.
5. Data Pipeline @Indix
[Architecture diagram]
Crawling Pipeline: crawl seeds → crawl brand & retailer websites → parse → crawl data.
ML Data Pipeline: extract attributes → standardize → classify → dedupe → match → aggregate.
Feeds Pipeline: brand & retailer feeds → transform → clean → connect → feed data.
Both pipelines feed the Indix Product Catalog.
Indexing Pipeline: analyze → derive → join → real-time index and search & analytics index.
Outputs: customizable feeds, API (bulk & synchronous), and the Product Data Transformation Service.
12. Data wasn’t discoverable
● The biggest problem was knowing what data exists and where.
● Some of the data was in S3, some in HDFS, some in Google Sheets.
● There was no way to know how frequently, or when, the data was changed or updated.
13. The schema wasn’t readily known
● The schema of the data, as expected, kept changing, and it was difficult to keep track of which version of the data had which schema.
● While Thrift and Avro alleviate this to an extent, access to the data wasn’t simple, especially for non-engineers.
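The schema-evolution pain is what Avro's reader/writer schema resolution is designed for. As a minimal sketch (the record and field names here are hypothetical, not Indix's actual schema), new fields added with a default let readers on the new schema still consume records written with the old one:

```json
{
  "type": "record",
  "name": "ProductOffer",
  "namespace": "com.example.catalog",
  "fields": [
    {"name": "url", "type": "string"},
    {"name": "title", "type": "string"},
    {"name": "price", "type": ["null", "double"], "default": null},
    {"name": "currency", "type": "string", "default": "USD"}
  ]
}
```

If `price` and `currency` were added in a later version, the defaults mean older records without those fields still deserialize, but tooling on top is still needed before non-engineers can browse what each version looks like.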
14. Writing code limited scope
● We use Scalding and Spark for our MapReduce jobs. Having to write and tweak code limited who could write and run these jobs.
● “Readymade” jobs may not allow the tweaks users need, hurting productivity and increasing dependencies.
● Having to write code and ship jars hinders ad hoc data experimentation.
15. Cost control wasn’t trivial
● Data came in various sizes and shapes, and what people did with it varied just as much - some use cases needed a sample of the data, while others wanted aggregations over the entire dataset.
● It wasn’t trivial to handle all these different workloads while minimizing costs.
● Ad hoc jobs also starved production jobs in our existing Hadoop clusters.
16. Goals of Internal Data Pipeline Platform
● Enable easy discovery of data.
● Make schemas transparent and easy to create, while allowing introspection.
● Minimize coding - provide prebuilt transformations for common tasks and enable a SQL-based workflow.
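A SQL-based workflow of this kind can be sketched with Spark SQL. This is an illustrative, non-runnable fragment, not MDA's API: the S3 path, view name, and columns are made up, and a live SparkSession named `spark` (as in a notebook) is assumed:

```scala
// Assumes an ambient SparkSession named `spark` (e.g. in a notebook).
// Register a dataset as a temp view, then anyone can query it in SQL
// instead of writing and shipping a Scalding/Spark job.
val products = spark.read.parquet("s3://example-bucket/products/") // hypothetical path
products.createOrReplaceTempView("products")

val topBrands = spark.sql("""
  SELECT brand, COUNT(*) AS offers
  FROM products
  GROUP BY brand
  ORDER BY offers DESC
  LIMIT 10
""")
topBrands.show()
```

The design point is that once datasets are registered centrally, the SQL layer becomes the common interface for engineers and non-engineers alike.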
17. Goals of Internal Data Pipeline Platform
● UI- and wizard-based workflows that enable ANYONE in the organization to run pipelines and extract data.
● Manage the underlying clusters and resources transparently while optimizing for cost.
● Support data experimentation as well as production / customer use cases.
21. MDA with our Data Pipeline
[Pipeline diagram: Match → Attributes → Brand → Classify → Dedup]
22. MDA with our Data Pipeline
[Same pipeline, with customer integration: feed data from customer → Enrich Data → Classify → Brand → feed output to customer]
23. MDA for ML Training Data
[Pipeline diagram: Filter → Sample → Preprocess → Training Data]
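The Filter → Sample → Preprocess flow above can be illustrated with plain Scala collections. This is a toy sketch, not MDA's implementation - MDA would run these stages as transforms over real datasets on Spark, and the record type and fields here are made up:

```scala
// Toy record type standing in for a product row (hypothetical fields).
case class Product(title: String, category: String)

// Filter: keep only rows in the category of interest.
def filterRows(rows: Seq[Product], category: String): Seq[Product] =
  rows.filter(_.category == category)

// Sample: deterministic every-nth sampling, for reproducibility.
def sampleRows(rows: Seq[Product], n: Int): Seq[Product] =
  rows.zipWithIndex.collect { case (r, i) if i % n == 0 => r }

// Preprocess: normalize text before it becomes training data.
def preprocess(rows: Seq[Product]): Seq[String] =
  rows.map(_.title.trim.toLowerCase)

val rows = Seq(
  Product("  Red Shoe ", "shoes"),
  Product("Blue Hat", "hats"),
  Product("Green Shoe", "shoes")
)
val trainingData = preprocess(sampleRows(filterRows(rows, "shoes"), 1))
// trainingData: Seq("red shoe", "green shoe")
```

Composing the stages as plain functions mirrors the pipeline shape in the diagram; on the real platform each stage would be a prebuilt transform configured from the UI.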
24. Notebooks
// Set up the MDA client
import com.indix.holonet.core.client.SDKClient
val host = "holonet.force.io"
val port = 80
val client = SDKClient(host, port, spark)

// Create a DataFrame from any MDA dataset
val df = client.toDF("Indix", "PriceHistoryProfile")
df.show()
25. Dec 2015 - Start work on MDA
Mar 2016 - First release
Jul 2016 - Many more transforms, including sampling, full Hive SQL support, and UX fixes
Late 2016 - Performance improvements, Spark and infra upgrades
Early 2017 - Complete redesign of the UI based on over a year of feedback and learnings; GraphQL for the UI
June 2017 - Ability to run pipelines in the customer’s cloud infra
Aug 2017 - First closed preview of MDA for a customer
26. What does the future hold?
● We are far from done - things like automatic schema inference and better caching are already planned.
● And, as in the original vision, make it fully self-served for our customers (internal and external).
● Integration with other tools out there, like Superset.
● Open source as much as possible. First cut - http://github.com/indix/sparkplug
27. Questions?
I blog at https://stacktoheap.com
Twitter and most other platforms @manojlds