
Shift Remote AI: Build and deploy PyTorch Models with Azure Machine Learning - Henk Boelman (Microsoft)



Take a deep look into Azure Machine Learning, a cloud service that helps you build, train, deploy, and manage models. Walk through the data science process and then have some fun creating a ML recognition model based on the Simpsons cartoon with PyTorch. You'll leave this session with a better grasp of the technological components of Azure Machine Learning services.



  1. Henk Boelman, Cloud Advocate @ Microsoft. Build and Deploy PyTorch Models with Azure Machine Learning. HenkBoelman.com @hboelman
  2. Azure Machine Learning studio: a fully managed cloud service that enables you to easily build, deploy, and share predictive analytics solutions.
  3. What is Azure Machine Learning? A set of Azure cloud services (Python SDK, Azure CLI, web interface) that enables you to: ✓ Prepare data ✓ Build models ✓ Train models ✓ Manage models ✓ Track experiments ✓ Deploy models
  4. Is it Marge or Homer or .. or ..?
  5. Simpson Lego dataset: https://github.com/hnky/dataset-lego-figures
  6. Machine Learning on Azure: sophisticated pretrained models to simplify solution development (Cognitive Services: Vision, Speech, Language, Azure Search, …); popular frameworks to build advanced deep learning solutions (TensorFlow, Keras, PyTorch, ONNX); productive services to empower data science and development teams (Azure Machine Learning, Azure Databricks, Machine Learning VMs); powerful infrastructure to accelerate deep learning; flexible deployment to deploy and manage models on intelligent cloud and edge (cloud, on-premises, edge)
  7. Prepare your environment → Experiment with your model & data → Deploy your model into production
  8. Step 1: Prepare your environment
  9. Set up your environment: VS Code, Python SDK, Azure Portal
  10. Create a workspace: ws = Workspace.create(name='<NAME>', subscription_id='<SUBSCRIPTION ID>', resource_group='<RESOURCE GROUP>', location='westeurope') ws.write_config() ws = Workspace.from_config()
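For reference, ws.write_config() persists the connection details so a later script can reconnect with Workspace.from_config(); the file it drops (by default .azureml/config.json) looks roughly like the following, with the placeholders standing in for your own values:

```json
{
  "subscription_id": "<SUBSCRIPTION ID>",
  "resource_group": "<RESOURCE GROUP>",
  "workspace_name": "<NAME>"
}
```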
  11. Azure Machine Learning service concepts: Datasets (registered, known data sets); Experiments (training runs); Models (registered, versioned models); Endpoints: real-time endpoints (deployed model endpoints) and pipeline endpoints (training workflows); Compute (managed compute); Datastores (connections to data)
  12. Create compute: cfg = AmlCompute.provisioning_configuration(vm_size='STANDARD_NC6', min_nodes=1, max_nodes=6) cc = ComputeTarget.create(ws, '<NAME>', cfg)
  13. Demo: Set up your workspace (create a workspace, create compute, set up storage)
  14. Step 1: Prepare your environment (create a workspace, create compute, set up storage)
  15. Step 2: Experiment with your model & data
  16. Create an experiment: exp = Experiment(workspace=ws, name='<ExperimentName>')
  17. Create a training file
  18. Create an estimator: params = {'--data-folder': ws.get_default_datastore().as_mount()} estimator = TensorFlow(source_directory=script_folder, script_params=params, compute_target=computeCluster, entry_script='train.py', use_gpu=True, conda_packages=['scikit-learn','keras','opencv'], framework_version='1.10')
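The estimator's script_params are handed to train.py as ordinary command-line arguments. A minimal, stdlib-only sketch of how the training script might pick up '--data-folder' (the model training and azureml Run metric-logging parts of the real script are left out so this runs anywhere; the sample path is a placeholder):

```python
import argparse

def parse_args(argv=None):
    # '--data-folder' matches the key used in script_params;
    # on the compute target it resolves to the mounted datastore path.
    parser = argparse.ArgumentParser()
    parser.add_argument('--data-folder', dest='data_folder', type=str)
    return parser.parse_args(argv)

if __name__ == '__main__':
    # placeholder argv for illustration; the real run passes sys.argv
    args = parse_args(['--data-folder', '/mnt/simpsons-dataset'])
    print('training data lives in:', args.data_folder)
```

From here the script would load images from args.data_folder, train, and write the finished model under ./outputs so it can be registered afterwards.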
  19. Submit the experiment to the cluster: run = exp.submit(estimator) RunDetails(run).show()
  20. Demo: Create and run an experiment
  21. What happens on submit (Azure Notebook, Compute Target, Experiment, Docker Image, Data store): 1. Snapshot folder and send to experiment 2. Create docker image 3. Deploy docker and snapshot to compute 4. Mount datastore to compute 5. Launch the script 6. Stream stdout, logs, metrics 7. Copy over outputs
  22. Register the model: model = run.register_model(model_name='SimpsonsAI', model_path='outputs')
  23. Demo: Register and test the model
  24. Step 2: Experiment with your model & data
  25. Step 3: Deploy your model into production
  26. AMLS to deploy: the model, score.py, environment file, Docker image
  27. Score.py: %%writefile score.py from azureml.core.model import Model def init(): model_root = Model.get_model_path('MyModel') loaded_model = model_from_json(loaded_model_json) loaded_model.load_weights(model_file_h5) def run(raw_data): url = json.loads(raw_data)['url'] image_data = cv2.resize(image_data,(96,96)) predicted_labels = loaded_model.predict(data1) return json.dumps(predicted_labels)
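The slide's score.py is abbreviated (loaded_model_json, image_data, and data1 come from lines elided on the slide). A stub-level sketch of the init()/run() contract the deployed web service expects; the model loading and image preprocessing are replaced with a dummy predictor so the sketch runs without Azure or a trained model:

```python
import json

loaded_model = None  # populated once by init()

def init():
    # assumption: the real script loads the registered model here,
    # e.g. via Model.get_model_path('MyModel') plus the framework's
    # own load calls, as on the slide.
    global loaded_model
    loaded_model = lambda image_url: {'Homer': 0.9, 'Marge': 0.1}

def run(raw_data):
    # The service passes the HTTP request body in as a JSON string.
    url = json.loads(raw_data)['url']
    # real code: download the image at `url`, resize to 96x96, predict
    predicted_labels = loaded_model(url)
    return json.dumps(predicted_labels)

init()
response = run(json.dumps({'url': 'https://example.com/homer.png'}))
print(response)
```

The service calls init() once at container start and run() per request, so expensive model loading belongs in init().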
  28. Environment file: from azureml.core.runconfig import CondaDependencies cd = CondaDependencies.create() cd.add_conda_package('keras==2.2.2') cd.add_conda_package('opencv') cd.add_tensorflow_conda_package() cd.save_to_file(base_directory='./', conda_file_path='myenv.yml')
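The myenv.yml this writes is an ordinary conda environment file; its approximate shape is sketched below (the exact Python pin and the azureml pip defaults that CondaDependencies adds vary by SDK version):

```yaml
# approximate shape only; CondaDependencies also pins a python
# version and adds azureml pip packages by default
name: project_environment
dependencies:
  - keras==2.2.2
  - opencv
  - tensorflow
  - pip:
      - azureml-defaults
```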
  29. Inference config: inference_config = InferenceConfig(runtime='python', entry_script='score.py', conda_file='myenv.yml')
  30. Deployment using AMLS
  31. Deploy to ACI: aciconfig = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=2) service = Model.deploy(workspace=ws, name='simpsons-aci', models=[model], inference_config=inference_config, deployment_config=aciconfig)
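Once Model.deploy() finishes, the service exposes a scoring URI, and a client simply POSTs the JSON body that score.py's run() expects. A stdlib sketch that only builds the request; the endpoint below is a placeholder, not a live service:

```python
import json
import urllib.request

def build_scoring_request(scoring_uri, image_url):
    # body matches what run() in score.py parses: {'url': ...}
    body = json.dumps({'url': image_url}).encode('utf-8')
    return urllib.request.Request(
        scoring_uri,
        data=body,
        headers={'Content-Type': 'application/json'},
        method='POST')

req = build_scoring_request('http://<aci-endpoint>/score',
                            'https://example.com/bart-lego.png')
print(req.data.decode())
```

Against a real deployment you would take the URI from service.scoring_uri and send the request with urllib.request.urlopen(req).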
  32. Demo: Deploy to ACI
  33. Thank you! @hboelman, github.com/hnky. Read more on: henkboelman.com
