Today we will work with an Azure Machine Learning workspace. We will start by training, testing and registering our own model and deploying it to Azure Container Instances and Azure Kubernetes Service using Jupyter Notebooks, a popular prototyping tool among ML experts. In this scenario our goal as an ML expert is to provide a REST API, backed by a trained model, that an application can consume.
Once we have seen the ML expert's view, we will switch to the Developer or DevOps side of things. Operationalizing the model requires DevOps, in this context called Machine Learning Operations (MLOps). For this challenge we have prepared a .yaml file which automatically deploys a workspace, trains and registers a model, and deploys it to Azure Container Instances and Azure Kubernetes Service. In short, everything we do manually now will be automated for us later today.
Here are the top two resources you'll need today: Microsoft Learn - Machine Learning and the Azure Machine Learning Notebook Samples (both marked with ⭐ in the resource list further below).
The challenges can be grouped into two paths:
- Challenges 1, 2 and 3 focus on running custom code on Azure Machine Learning, including experiment tracking, model deployment and setting up an MLOps pipeline using GitHub Actions.
  - These challenges are recommended for developers and data scientists with experience using Python, Jupyter Notebooks and GitHub.
- Challenges 4, 5, 6 and 7 focus on using AutoML and the Designer to quickly develop and deploy machine learning models with as little code as possible.
  - These challenges are aimed at machine learning beginners and developers who want to learn the basics of machine learning, develop their first model and publish it as an API for further use.
⚡ Let's go to challenge 1!
In this first challenge, you'll be training a basic machine learning model on Azure. We'll be using the popular MNIST dataset, as it allows us to focus on getting familiar with the mechanics of Azure Machine Learning. MNIST is a data set containing:
- 60000 hand-written digits as training data
- 10000 hand-written digits as testing data
Here are some examples:
The goal is to build a machine learning model that
- takes an unseen 28x28 pixel image as input and
- outputs which of the digits 0 through 9 is shown in the image
Guidance:
- Deploy a `Machine Learning service workspace` from the Azure Portal
- Write your code in a Jupyter Notebook on a Compute VM and use the new Azure ML UI
- Use `Python 3.8 - AzureML` as the Notebook type in Jupyter
- We'll be using scikit-learn to train a simple `LogisticRegression` classifier (see the sketch after this list)
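The details are covered in the notebook samples, but a minimal sketch of what the training code might look like is shown below. It assumes the azureml-core SDK (v1) and scikit-learn, which are pre-installed on the `Python 3.8 - AzureML` kernel; the experiment and model names are only illustrative.

```python
# Minimal sketch, not the official solution: train and register a
# LogisticRegression classifier on MNIST with Azure ML experiment tracking.
from azureml.core import Workspace, Experiment
from sklearn.datasets import fetch_openml
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
import joblib

# config.json is pre-populated on an Azure ML Compute Instance.
ws = Workspace.from_config()
experiment = Experiment(workspace=ws, name="mnist-logreg")  # illustrative name

# MNIST: 70,000 hand-written digits, each a flattened 28x28 = 784 pixel vector.
X, y = fetch_openml("mnist_784", version=1, return_X_y=True, as_frame=False)
X = X / 255.0  # scale pixel values to [0, 1]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, train_size=60_000, test_size=10_000, random_state=42
)

run = experiment.start_logging()
clf = LogisticRegression(solver="saga", tol=0.1, max_iter=100)
clf.fit(X_train, y_train)
accuracy = accuracy_score(y_test, clf.predict(X_test))
run.log("accuracy", accuracy)  # shows up in the Azure ML studio UI

# Upload the serialized model and register it for the deployment challenges.
joblib.dump(clf, "mnist_model.pkl")
run.upload_file("outputs/mnist_model.pkl", "mnist_model.pkl")
run.register_model(model_name="mnist-logreg", model_path="outputs/mnist_model.pkl")
run.complete()
```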
⚡ Let's go to challenge 2!
In this challenge, you'll be deploying the model from challenge 1 to an Azure Container Instance, making it available as a REST API, and querying it.
Guidance:
- Take the model from challenge 1 and containerize it (Azure ML will do most of that for us)
- Deploy it on ACI as a RESTful API and query it (see the sketch after this list)
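As a rough orientation, a deployment could look like the sketch below. It assumes the azureml-core SDK (v1), that the model was registered as `mnist-logreg` in challenge 1, and that a `score.py` entry script with `init()`/`run()` functions exists in the working directory; all names and the request payload format are assumptions.

```python
# Sketch only: containerize the registered model, deploy it to ACI and query it.
import json
import requests
from azureml.core import Environment, Workspace
from azureml.core.model import InferenceConfig, Model
from azureml.core.webservice import AciWebservice

ws = Workspace.from_config()
model = Model(ws, name="mnist-logreg")  # registered in challenge 1 (assumed name)

# Environment holding the packages that score.py needs at inference time.
env = Environment("mnist-aci-env")  # illustrative name
env.python.conda_dependencies.add_pip_package("scikit-learn")
env.python.conda_dependencies.add_pip_package("joblib")

inference_config = InferenceConfig(entry_script="score.py", environment=env)
aci_config = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=1)

service = Model.deploy(
    workspace=ws,
    name="mnist-aci-service",  # illustrative name
    models=[model],
    inference_config=inference_config,
    deployment_config=aci_config,
)
service.wait_for_deployment(show_output=True)
print(service.scoring_uri)

# Query the REST endpoint; the payload format depends on your score.py.
payload = json.dumps({"data": [[0.0] * 784]})  # replace with a real 28x28 image vector
response = requests.post(
    service.scoring_uri, data=payload, headers={"Content-Type": "application/json"}
)
print(response.json())
```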
⚡ Let's go to challenge 3!
In this third challenge, you will use GitHub Actions together with the Azure Machine Learning service to create and run a workflow that automatically prepares data, trains and tests a model, and runs a scoring script against it.
Guidance:
- Create a Service Principal (used for the non-interactive authentication sketched after this list)
- Update setup.sh
- Run the GitHub Actions workflow
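For the workflow to talk to the workspace without an interactive login, the Service Principal credentials are used for authentication. Below is a minimal Python sketch of that step, assuming the azureml-core SDK (v1) and credentials exposed as environment variables; the variable names are illustrative and would typically be populated from GitHub Actions secrets.

```python
# Sketch: authenticate to the workspace with a Service Principal instead of an
# interactive login, e.g. from a GitHub Actions job. Environment variable names
# are illustrative and would be populated from repository secrets.
import os
from azureml.core import Workspace
from azureml.core.authentication import ServicePrincipalAuthentication

sp_auth = ServicePrincipalAuthentication(
    tenant_id=os.environ["AZURE_TENANT_ID"],
    service_principal_id=os.environ["AZURE_CLIENT_ID"],
    service_principal_password=os.environ["AZURE_CLIENT_SECRET"],
)

ws = Workspace.get(
    name=os.environ["AML_WORKSPACE_NAME"],
    subscription_id=os.environ["AZURE_SUBSCRIPTION_ID"],
    resource_group=os.environ["AML_RESOURCE_GROUP"],
    auth=sp_auth,
)
print("Connected to workspace:", ws.name)
```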
⚡ Let's go to challenge 4!
In this challenge, you'll be using Automated Machine Learning to let Azure figure out which Machine Learning algorithm performs best on our dataset. We'll fully leverage the Azure Portal for that, hence no coding needed!
Guidance:
- Create an `Automated Machine Learning` experiment in your Azure Machine Learning Workspace
- Take the `pima-indians-diabetes.csv` dataset as the input
- Let it figure out the best performing model
- Bonus points: Deploy the model to ACI as a scoring endpoint (takes just a few clicks)
⚡ Let's go to challenge 5!
In this challenge, you'll be training and deploying a model to showcase how AutoML can be used for a simple classification problem: predicting whether a credit card transaction is fraudulent. This time we'll deploy the model to Azure Kubernetes Service (AKS). Since this mimics a production deployment, we want to make sure authentication and telemetry monitoring (using Application Insights) are enabled for our model!
Guidance:
- Create an `Automated Machine Learning` experiment, then train, register and deploy the model via Python
- Retrieve the scoring script and the environment file from the `Automated Machine Learning` experiment
- Take the `creditcard.csv` dataset as the input
- Create an AKS cluster
- Deploy the model with authentication and monitoring enabled (see the sketch after this list)
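Putting the pieces together, a possible end-to-end sketch is shown below. It assumes the azureml-core and azureml-train-automl SDK (v1), that `creditcard.csv` is registered as a tabular dataset named `creditcard` with a `Class` label column, and that a compute cluster exists; all names, the timeout and the scoring-file path are assumptions and may differ in your run.

```python
# Sketch only: AutoML classification on the credit card dataset, then deployment
# to AKS with authentication and Application Insights enabled. Names, timeouts
# and the scoring-file path are assumptions and may differ in your run.
from azureml.core import Dataset, Experiment, Workspace
from azureml.core.compute import AksCompute, ComputeTarget
from azureml.core.model import InferenceConfig, Model
from azureml.core.webservice import AksWebservice
from azureml.train.automl import AutoMLConfig

ws = Workspace.from_config()
dataset = Dataset.get_by_name(ws, name="creditcard")  # registered creditcard.csv

automl_config = AutoMLConfig(
    task="classification",
    training_data=dataset,
    label_column_name="Class",          # assumed label column
    primary_metric="AUC_weighted",
    experiment_timeout_hours=0.5,
    compute_target="cpu-cluster",       # illustrative compute cluster name
)
automl_run = Experiment(ws, "creditcard-automl").submit(automl_config, show_output=True)
best_run, fitted_model = automl_run.get_output()

# Register the best model and fetch the auto-generated scoring script/environment.
model = automl_run.register_model(model_name="creditcard-automl")
best_run.download_file("outputs/scoring_file_v_1_0_0.py", "score.py")  # path may vary
env = best_run.get_environment()

# Provision the AKS cluster (one-time) and deploy with auth + App Insights.
aks_target = ComputeTarget.create(ws, "aks-cluster", AksCompute.provisioning_configuration())
aks_target.wait_for_completion(show_output=True)

aks_config = AksWebservice.deploy_configuration(auth_enabled=True, enable_app_insights=True)
service = Model.deploy(
    workspace=ws,
    name="creditcard-aks-service",
    models=[model],
    inference_config=InferenceConfig(entry_script="score.py", environment=env),
    deployment_config=aks_config,
    deployment_target=aks_target,
)
service.wait_for_deployment(show_output=True)
print(service.scoring_uri, service.get_keys())
```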
⚡ Let's go to challenge 6!
In this challenge, you'll be using Azure Machine Learning Designer to define a machine learning pipeline using drag-and-drop functionality. No coding required, but this challenge will introduce you to some common concepts that are used to create machine learning pipelines.
The goal of this challenge is to create a machine learning model based on regression that will predict the price of a car based on its technical properties.
⚡ Let's go to challenge 7!
In this last challenge, you'll be using Azure Machine Learning Designer to deploy the model you built in challenge 6. Similarly to challenge 5, we'll deploy the model to Azure Kubernetes Service (AKS).
- Exam AI-102: Designing and Implementing a Microsoft Azure AI Solution - mostly focused on AI and Machine Learning, and in some cases also IoT as a use case
- Exam DP-100: Designing and Implementing a Data Science Solution on Azure - very Data Science focused, requires general Machine Learning knowledge (methodologies, algorithms, etc.)
- ⭐ Microsoft Learn - Machine Learning - great selection of short training units and exercises! ⭐
- ⭐ Azure Machine Learning Notebook Samples - this should solve 99% of your problems ⭐
- Automated Machine Learning Overview
- Hyperparameter Tuning
- Understand automated machine learning results
- Distributed Training with TensorFlow or Keras and PyTorch
- AI Tools for VS Code
- PyTorch Support for Azure ML
- Azure Machine Learning Pipelines
- MLOps with Azure ML
⚠ Lastly, we will use our Compute Instance again tomorrow on Day 2, so do not delete it today. To save costs, you should stop the Compute Instance overnight. 😁
⚡ Let's go to AI Developer College Day 1 - Challenge 1!