
Simplify Data Conversion from Spark to TensorFlow and PyTorch



In this talk, I would like to introduce an open-source tool built by our team that simplifies data conversion from Apache Spark to deep learning frameworks.

Imagine you have a large dataset, say 20 GB, and you want to use it to train a TensorFlow model. Before feeding the data to the model, you need to clean and preprocess it using Spark, which leaves you with your dataset in a Spark DataFrame. When it comes to training, you run into a problem: how can I convert my Spark DataFrame into a format my TensorFlow model recognizes?

The existing data conversion process can be tedious. For example, to get an Apache Spark DataFrame into a format a TensorFlow Dataset can consume, you need to either save the DataFrame on a distributed filesystem in Parquet format and load the converted data with third-party tools such as Petastorm, or save it directly to TFRecord files with spark-tensorflow-connector and load it back using TFRecordDataset. Both approaches take more than 20 lines of code to manage the intermediate data files, rely on different parsing syntaxes, and require extra attention when handling vector columns in the Spark DataFrame. In short, all this engineering friction greatly reduces data scientists' productivity.
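For concreteness, here is a rough sketch of the TFRecord route just described; the file paths, column names, and feature spec are hypothetical and must be kept in sync with the DataFrame schema by hand:

    # Spark side: write the DataFrame as TFRecord files
    # (assumes the spark-tensorflow-connector package is on the classpath).
    df.write.format("tfrecords").option("recordType", "Example") \
        .mode("overwrite").save("dbfs:/tmp/my_dataset/tfrecords")

    # TensorFlow side: manually declare a feature spec mirroring the schema,
    # then parse every serialized tf.train.Example record.
    import tensorflow as tf

    feature_spec = {
        "features": tf.io.FixedLenFeature([10], tf.float32),  # hypothetical column
        "label": tf.io.FixedLenFeature([], tf.int64),         # hypothetical column
    }

    filenames = tf.io.gfile.glob("/dbfs/tmp/my_dataset/tfrecords/part-*")
    dataset = (tf.data.TFRecordDataset(filenames)
               .map(lambda rec: tf.io.parse_single_example(rec, feature_spec))
               .batch(32))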

The Databricks Machine Learning team contributed a new Spark Dataset Converter API to Petastorm to simplify this tedious conversion process. With the new API, it takes a few lines of code to convert a Spark DataFrame to a TensorFlow Dataset or a PyTorch DataLoader with default parameters.
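By contrast, a minimal sketch of the converter path, assuming Petastorm is installed (the cache directory URL, model, and step counts are illustrative):

    from petastorm.spark import SparkDatasetConverter, make_spark_converter

    # Petastorm needs a parent directory for its intermediate Parquet cache.
    spark.conf.set(SparkDatasetConverter.PARENT_CACHE_DIR_URL_CONF,
                   "file:///dbfs/tmp/petastorm/cache")

    converter = make_spark_converter(df)  # df is the preprocessed Spark DataFrame

    # TensorFlow:
    with converter.make_tf_dataset() as dataset:
        model.fit(dataset, steps_per_epoch=100, epochs=10)

    # PyTorch:
    with converter.make_torch_dataloader() as dataloader:
        train(torch_model, dataloader)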

In the talk, I will use an example to show how to use the Spark Dataset Converter to train a TensorFlow model and how simple it is to go from single-node training to distributed training on Databricks.




1. Simplify Data Conversion from Spark to Deep Learning. Liang Zhang, Software Engineer @ Databricks
2. About Me: Liang Zhang (linkedin.com/in/liangz1/)
   ▪ Machine Learning Team @ Databricks
   ▪ Master's degree from Carnegie Mellon University
3. Agenda
   ▪ Why should we care about data conversion between Spark and deep learning frameworks?
   ▪ Pain points
   ▪ Overview of the Spark Dataset Converter
   ▪ Demo
   ▪ Best practices
4. Motivation: Data Conversion from Spark to DL (diagram: Spark DataFrame → TensorFlow / PyTorch?)
   • Images from a driving camera: detect traffic lights
   • Large amount of data (TBs)
   • New images arriving every day
   • Data cleaning and labeling
   • Train the model with all available data and periodically re-train with new data
   • Predict the label of new images
5. Pain Points: Data Conversion from Spark to Deep Learning Frameworks
6. Pain points: Data Conversion from Spark to DL
   • Single-node training:
     ▪ Collect a sample of data to the driver in a pandas DataFrame
   • Distributed training:
     ▪ Save the Spark DataFrame to TFRecord files and load the TFRecords using TensorFlow
     ▪ Save the Spark DataFrame to Parquet files and write a custom PyTorch DataLoader to load the partitions
7. Pain points: Data Conversion from Spark to DL
   • Single-node training:
     ▪ Collect a sample of data to the driver in a pandas DataFrame
   • Distributed training:
     ▪ Save the Spark DataFrame to TFRecord files and parse the serialized data in the TFRecords using TensorFlow
     ▪ Save the Spark DataFrame to Parquet files and write a custom PyTorch DataLoader to load the partitions
   • Hard to migrate from single-node to distributed training
   • Many lines of extra code to save, load, and parse intermediate files
8. Overview of the Spark Dataset Converter
9. Spark Dataset Converter API Overview (diagram: Spark DataFrame → Spark Dataset Converter → TensorFlow Dataset / PyTorch DataLoader)

    from petastorm.spark import make_spark_converter
    converter = make_spark_converter(df)

    with converter.make_tf_dataset() as dataset:
        tf_model.fit(dataset)

    with converter.make_torch_dataloader() as dataloader:
        train(torch_model, dataloader)
10. Spark Dataset Converter API data flow (diagram)
    ETL: Spark DataFrame → found a cached Parquet file? If no, cache the DataFrame as a Parquet file (data.parquet) on HDFS/DBFS; if yes, reuse it.
    Training: load the cached Parquet file with Petastorm → tf.data.Dataset / torch DataLoader.
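A sketch of how this cache is configured and cleaned up eagerly, based on Petastorm's converter API (the directory URL is illustrative):

    from petastorm.spark import SparkDatasetConverter, make_spark_converter

    # Every converter materializes its Parquet cache under this parent directory.
    spark.conf.set(SparkDatasetConverter.PARENT_CACHE_DIR_URL_CONF,
                   "file:///dbfs/tmp/petastorm/cache")

    converter = make_spark_converter(df)  # reuses an existing cache if the query plan matches
    print(len(converter))                 # number of rows materialized in the cache

    converter.delete()                    # eagerly delete this converter's cached files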
11. Spark Dataset Converter Features
    • Cache intermediate files
      ▪ Recognizes an already-cached Spark DataFrame by checking the analyzed query plan
      ▪ Automatic cache cleaning at program exit
    • Easy migration to distributed training
      ▪ Change two arguments to move your data loading code from a single-node to a distributed setting (see the sketch below)
    • MLlib vector handling
      ▪ Converts MLlib vectors to 1D arrays automatically
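A sketch of that two-argument switch, assuming Horovod drives the distributed training (batch size, step counts, and the model are illustrative):

    import horovod.tensorflow.keras as hvd

    hvd.init()

    # Identical to the single-node call except for cur_shard/shard_count,
    # which assign each Horovod worker a disjoint shard of the cached data.
    with converter.make_tf_dataset(batch_size=32,
                                   num_epochs=None,  # infinite batches; see best practices
                                   cur_shard=hvd.rank(),
                                   shard_count=hvd.size()) as dataset:
        model.fit(dataset, steps_per_epoch=100, epochs=10)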
12. How to use the Spark Dataset Converter API? (demo)
13. Demo notebooks: Image Classification
    • Spark to TensorFlow Dataset: https://docs.databricks.com/_static/notebooks/deep-learning/petastorm-spark-converter-tensorflow.html
    • Spark to PyTorch DataLoader: https://docs.databricks.com/_static/notebooks/deep-learning/petastorm-spark-converter-pytorch.html
14. Best Practices
15. Best Practices with the Spark Dataset Converter
    • Image data decoding and preprocessing
      ▪ Decode image bytes and preprocess in a TransformSpec, not in Spark (see the sketch below)
      ▪ Spark → TransformSpec → Dataset.map → in the model (GPU)
    • Generate infinite batches using num_epochs=None
      ▪ In distributed training, this guarantees that every worker gets exactly the same amount of data
    • Manage the lifecycle of cache data
      ▪ On a local laptop or in a scheduled job on Databricks, the cache files are automatically deleted when the Python process exits
      ▪ In Databricks notebooks, we recommend configuring lifecycle rules for the underlying S3 buckets storing the cache files
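A sketch of the TransformSpec pattern for image data (the column names, image size, and model are hypothetical):

    import io

    import numpy as np
    from petastorm import TransformSpec
    from PIL import Image

    def preprocess(pd_batch):
        # Runs on the training side, one pandas batch at a time:
        # decode raw image bytes into normalized float32 arrays.
        pd_batch["image"] = pd_batch["image"].map(
            lambda raw: np.asarray(
                Image.open(io.BytesIO(raw)).resize((224, 224)),
                dtype=np.float32) / 255.0)
        return pd_batch

    transform_spec = TransformSpec(
        preprocess,
        edit_fields=[("image", np.float32, (224, 224, 3), False)],
        selected_fields=["image", "label"])

    with converter.make_tf_dataset(transform_spec=transform_spec) as dataset:
        model.fit(dataset, steps_per_epoch=100, epochs=10)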
16. Feedback: Your feedback is important to us. Don't forget to rate and review the sessions.
