Introducing MlFlow: An Open Source Platform for the Machine Learning Lifecycle for On-Prem or in the Cloud

  1. Alex Zeltov: An Open Source Platform for the Machine Learning Lifecycle for On-Prem or in the Cloud. Introduction to MLflow.
  2. Alex Zeltov, Big Data Solutions Architect / AI Engineer. Background: • Sr. Solutions Architect, part of a Global Black Belt Team for Big Data & AI at Microsoft • Sr. Solutions Engineer at Hortonworks, specializing in HDP and HDF • Research Scientist at Independence Blue Cross, working on Big Data and ML • Sr. Software Engineer at Oracle
  3. Machine Learning E2E development is complex! The typical E2E process: Prepare → Experiment → Deploy → Orchestrate. Result: it is difficult to productionize and share.
  4. Motivation: the deployment process (diagram: a Data Scientist hands a pickled model to a Data Engineer and DevOps, who ask "What's Pickle?").
  5. ML Development Challenges • 100s of software tools to leverage • Hard to track & reproduce results: code, data, parameters, etc. • Hard to productionize models • Needs large scale for best results
  6. Custom ML Platforms: Facebook FBLearner, Uber Michelangelo, Google TFX. + They standardize the data prep / training / deploy loop: if you work with the platform, you get these benefits. – Limited to a few algorithms or frameworks. – Tied to one company's infrastructure. – Out of luck if you leave the company… Can we provide similar benefits in an open manner?
  7. • Open source platform for the machine learning lifecycle. • API first: supports submitting runs and models, and works with any ML library & language. MlFlow APIs are available for Python, R and Java, plus a language-agnostic REST API (a sketch of calling it directly follows below). • Runs the same way anywhere: on-prem or in any cloud. • > 3,700 stars on GitHub, 92 contributors from > 40 companies; > 200 companies are now using MlFlow.* * https://www.oreilly.com/ideas/specialized-tools-for-machine-learning-development-and-model-governance-are-becoming-essential
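A minimal sketch of calling that REST API directly, assuming a tracking server is reachable at localhost:5000 (the address is a placeholder, and the endpoint path follows the MLflow 1.x REST docs):

    # Query a tracking server's REST API without any MLflow client library.
    # The server URL is a placeholder for your own deployment.
    import requests

    resp = requests.get("http://localhost:5000/api/2.0/mlflow/experiments/list")
    print(resp.json())  # JSON listing of the experiments on the server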
  8. MlFlow Components. Tracking: record and query experiments (code, data, config, results). Projects: packaging format for reproducible runs on any platform. Models: general model format that supports diverse deployment tools. These are distinct components: use them individually based on your needs.
  9. Tracking Experiments with the MlFlow Tracking Server. MLflow Tracking is… • a logging API specific to machine learning • agnostic to the libraries and environments that do the training • organized around the concept of runs, which are executions of data science code • runs are aggregated into experiments, so many runs can be part of a given experiment • an MLflow server can host many experiments. A sketch of creating an experiment and logging a run appears below.
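A minimal sketch of the run/experiment model just described, using the standard MLflow fluent API; the experiment name, tag, and values are illustrative:

    import mlflow

    # An experiment groups related runs; it is created on first use.
    mlflow.set_experiment("demo-experiment")

    # Each start_run() opens one run: a single execution of your code.
    with mlflow.start_run():
        mlflow.set_tag("author", "alex")     # free-form note about the run
        mlflow.log_param("n_estimators", 8)  # an input to this execution
        mlflow.log_metric("mse", 0.42)       # a numeric result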
  10. The MlFlow Tracking Server runs anywhere: Azure Machine Learning, Databricks, IaaS cloud, or on-premise (CDH / HDP). Install the client with pip install mlflow and point it at the server with mlflow.set_tracking_uri(URI); a sketch follows below.
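A hedged sketch of standing up a tracking server and pointing a client at it; the backend store, artifact root, host, and port are all placeholder choices, not values from the deck:

    $ pip install mlflow
    $ mlflow server --backend-store-uri sqlite:///mlflow.db --default-artifact-root ./mlruns --host 0.0.0.0 --port 5000

    # In the training code, direct all logging to that server
    # (the URI is a placeholder for your own deployment).
    import mlflow
    mlflow.set_tracking_uri("http://localhost:5000")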
  11. Experiments in the Tracking Server. Parameters: key-value inputs to your code. Metrics: numeric values (can update over time). Artifacts: arbitrary files, including models. Tags/Notes: info about a run. Source: what code ran. Version: git version.

import mlflow

# log the model's tuning parameters
with mlflow.start_run():
    mlflow.log_param("layers", layers)
    mlflow.log_param("alpha", alpha)

    # log the model's metrics
    mlflow.log_metric("mse", model.mse())

    # log arbitrary files as artifacts, e.g. a plot saved by model.plot(test_df)
    mlflow.log_artifact("plot.png")

    # log the trained model itself (exact arguments vary by flavor)
    mlflow.tensorflow.log_model(model, "model")
  12. Experiments Tracking API: record and query experiments (code, configs, results, etc.). A sketch of the query side appears below.
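A minimal sketch of querying logged runs programmatically; the experiment ID "0" (the default experiment) and the metric name "mse" are assumptions for illustration:

    from mlflow.tracking import MlflowClient

    client = MlflowClient()
    # Fetch runs from experiment "0", best mse first.
    runs = client.search_runs(experiment_ids=["0"], order_by=["metrics.mse ASC"])
    for run in runs:
        print(run.info.run_id, run.data.params, run.data.metrics)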
  13. Demo MlFlow Tracking Server
  14. MlFlow Projects. There are a number of reasons why teams need to package their machine learning projects: • Projects have various library dependencies, and shipping a machine learning solution involves the environment in which it was built. MLflow allows this environment to be a conda environment or a Docker container, which means teams can easily share and publish their code for others to use (a sketch of such an environment file follows below). • Machine learning projects become increasingly complex over time. This includes ETL and featurization steps, machine learning models used for pre-processing, and finally the model training itself. • Each component of a machine learning pipeline needs to allow for tracing its lineage; if there is a failure at some point, tracing the full end-to-end lineage of a model allows for easier debugging.
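A hedged sketch of the conda environment file an MLproject can declare; the package names and versions are illustrative, not taken from the original demo:

    # conda.yaml - declares the environment the project runs in
    name: mlflow-env
    channels:
      - defaults
    dependencies:
      - python=3.6
      - scikit-learn
      - pip:
        - mlflow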
  15. MlFlow Projects Project Spec Code DataConfig Local Execution Remote Execution
  16. Example MlFlow Project.

my_project/
├── MLproject
├── conda.yaml
├── main.py
└── model.py

The MLproject file declares the environment and entry points:

conda_env: conda.yaml
entry_points:
  main:
    parameters:
      training_data: path
      lambda: {type: float, default: 0.1}
    command: python main.py {training_data} {lambda}

Run it from the CLI or the Python API:

$ mlflow run ml-production/mlflow-model-training/ -P data_path=airbnb-cleaned-mlflow.csv
$ mlflow run https://github.com/mlflow/mlflow-example

mlflow.run(
    uri="https://github.com/mlflow/mlflow-example",
    parameters={'alpha': 0.4}
)
  17. Demo MlFlow Project
  18. MlFlow Models. • Once a model has been trained and bundled with the environment it was trained in, the next step is to package it so that it can be used by a variety of serving tools. • Current deployment options include: • Container-based REST servers • Continuous deployment using Spark Streaming • Batch scoring • Managed cloud platforms such as Azure ML and AWS SageMaker • Packaging the final model in a platform-agnostic way offers the most flexibility in deployment options and allows for model reuse across a number of platforms. Sketches of two of these options appear below.
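Hedged sketches of two of those options: serving the packaged model as a local REST endpoint via the MLflow CLI, and batch/stream scoring it as a Spark UDF. The run ID stays a placeholder, and spark, df, and feature_cols are assumed to exist in the surrounding job:

    $ mlflow models serve -m runs:/<run_id>/model -p 5001

    # Batch scoring: expose the same packaged model as a Spark UDF.
    # "spark" is an existing SparkSession; "df" and "feature_cols" are an
    # assumed input DataFrame and list of feature column names.
    import mlflow.pyfunc

    predict = mlflow.pyfunc.spark_udf(spark, "runs:/<run_id>/model")
    scored = df.withColumn("prediction", predict(*feature_cols))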
  19. MlFlow Models: a standard format for ML models. The model format connects ML frameworks to inference code, batch & stream scoring, and serving tools. Built-in flavors: mlflow.pyfunc, mlflow.h2o, mlflow.keras, mlflow.pytorch, mlflow.sklearn, mlflow.spark, mlflow.tensorflow.
  20. Example MlFlow Model.

my_model/
├── MLmodel
└── estimator/
    ├── saved_model.pb
    └── variables/

The MLmodel file lists the flavors the saved model supports:

run_id: 769915006efd4c4bbd662461
time_created: 2018-06-28T12:34
flavors:
  tensorflow:
    saved_model_dir: estimator
    signature_def_key: predict
  python_function:
    loader_module: mlflow.tensorflow

The tensorflow flavor is usable by tools that understand the TensorFlow model format; the python_function flavor is usable by any tool that can run Python (Docker, Spark, etc.!). Created with:

>>> mlflow.tensorflow.log_model(...)

A sketch of loading such a model back through the generic flavor follows below.
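A minimal sketch of loading a logged model through the generic python_function flavor, independent of the framework that produced it; the run ID is the one from the example above, while the artifact path "model" and the pandas DataFrame test_df are assumptions:

    import mlflow.pyfunc

    # "runs:/<run_id>/<artifact_path>" resolves against the tracking server.
    model = mlflow.pyfunc.load_model("runs:/769915006efd4c4bbd662461/model")
    predictions = model.predict(test_df)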
  21. Demo MlFlow Model
  22. MlFlow + Azure Machine Learning + Azure Databricks. https://www.zdnet.com/article/microsoft-to-join-mlflow-project-add-native-support-to-azure-machine-learning/
  23. What is the Azure Machine Learning service? A set of Azure cloud services, plus a Python SDK, that enables you to: prepare data, build models, train models, manage models, track experiments, and deploy models.
  24. Machine Learning on Azure: from the Intelligent Cloud to the Intelligent Edge. • Domain-specific pretrained models to reduce time to market: Vision, Speech, Language, Search, … • Popular frameworks to build advanced deep learning solutions: TensorFlow, PyTorch, ONNX, Scikit-Learn • Productive services to empower data science and development teams: Azure Databricks, Azure Machine Learning, Machine Learning VMs • Familiar data science tools to simplify model development: PyCharm, Jupyter, Visual Studio Code, command line • Powerful infrastructure to accelerate deep learning: CPU, GPU, FPGA
  25. Azure ML service key artifacts: the Workspace (a hedged sketch of connecting MLflow to a workspace follows below).
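A hedged sketch of the MLflow + Azure ML integration announced above, assuming the azureml-sdk and azureml-mlflow packages and a local config.json describing the workspace; this is one plausible wiring, not the deck's exact demo code:

    import mlflow
    from azureml.core import Workspace

    # Load workspace details from a local config.json (assumed present).
    ws = Workspace.from_config()

    # Route MLflow tracking calls to the Azure ML workspace
    # (get_mlflow_tracking_uri comes from the azureml-mlflow plugin).
    mlflow.set_tracking_uri(ws.get_mlflow_tracking_uri())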
  26. Azure ML: How to deploy models at scale
  27. Demo MlFlow + AML + Azure Databricks. https://eastus2.azuredatabricks.net/?o=3336252523001260#notebook/2623556200920093/command/2623556200920094
  28. Conclusion + Q & A • MlFlow can greatly simplify the ML lifecycle • Simplifies lifecycle development • Lightweight, open platform that integrates easily • Available APIs: Python, Java & R • Easy to install and use • Develop locally and track locally or remotely • Deploy locally, in the cloud, or on premise… • Visualize experiments
  29. Learning More About MlFlow • pip install mlflow to get started • Find docs & examples at mlflow.org • https://github.com/mlflow/mlflow • tinyurl.com/mlflow-slack • https://docs.azuredatabricks.net/applications/mlflow/quick-start.html

Editor's Notes

  1. The Machine Learning Lifecycle Challenges. Building and deploying a machine-learning model can be difficult to accomplish. Enabling other data scientists (or even yourself) to reproduce your pipeline is equally challenging. Moreover, doing so can impact your data science team's productivity, leading to a significant waste of time and resources. How many times have you or your peers had to discard previous work because it was either not documented properly or, perhaps, too difficult to replicate? Getting models up and running in the first place is a significant enough effort that it can be easy to overlook long-term management. What does this involve in practice? In essence, we have to compare the results of different versions of ML models, track what's running where, and redeploy and roll back updated models as needed. Each of these requires its own specific tools, and it is these demands that make the ML lifecycle so challenging compared to traditional software development lifecycle (SDLC) management.

The Diversity and Number of ML Tools Involved. While the traditional software-development process leads to the rationalization and governance of tools and platforms used for developing and managing applications, the ML lifecycle relies on data scientists' ability to use multiple tools, whether for preparing data and training models, or deploying them for production use. Data scientists will seek the latest algorithms from the most up-to-date ML libraries and frameworks available to compare results and improve performance.
  2. This represents a serious shift, and its challenges differ from those of a more traditional software-development lifecycle, for the following reasons: • The diversity and number of ML tools involved, coupled with a lack of standardization across ML libraries and frameworks • The continuous nature of ML development, coupled with a lack of tracking and management tools for machine learning models and experiments • The complexity of productionizing ML models, due to the lack of integration between data pipelines, ML environments, and production services
  3. Just by adding a few lines of code in the function or script that trains their model, data scientists can log parameters, metrics, artifacts (plots, miscellaneous files, etc.) and a deployable packaging of the ML model. Every time that function or script is run, the results will be logged automatically as a byproduct of those lines of code being added, even if the party doing the training run makes no special effort to record the results. MLflow application programming interfaces (APIs) are available for the Python, R and Java programming languages, and MLflow sports a language-agnostic REST API as well. Over a relatively short time period, MLflow has garnered more than 3,300 stars on GitHub, almost 500,000 monthly downloads and 80 contributors from more than 40 companies. Most significantly, more than 200 companies are now using MLflow.
  4. Jupyter: Demo 02 - Experiment Tracking - MlFlow Demo Dataworks Summit. Demo 03 - Packaging MlFlow (CLI):
cd ~/git/mlflowdemo/
python mlflow_exp_tracking.py --n_estimators 8 --max_depth 15 --run_name 'alex 05_14'
  5. http://localhost:8888/notebooks/git/mlflowdemo/03%20Packaging%20MlFlow.ipynb
  6. https://mlflow.org/docs/latest/python_api/mlflow.pyfunc.html#module-mlflow.pyfunc
  7. http://localhost:8888/notebooks/git/mlflowdemo/03%20Packaging%20MlFlow.ipynb
  8. https://www.zdnet.com/article/microsoft-to-join-mlflow-project-add-native-support-to-azure-machine-learning/
  9. https://eastus2.azuredatabricks.net/?o=3336252523001260#notebook/2623556200920093/command/2623556200920094