MLOps Engineering on AWS

 

Could your machine learning (ML) workflow use some DevOps agility? MLOps Engineering on AWS helps you bring DevOps-style practices into the building, training, and deployment of ML models. You will learn to address the challenges of handoffs between data engineers, data scientists, software developers, and operations through tools, automation, processes, and teamwork. By the end of the course, you will go from learning to doing by building an MLOps action plan for your organization.

Duration: 3 days

Who is it for: This course is for ML data platform engineers, DevOps engineers, and developers and operations staff responsible for operationalizing ML models.

Format: This course includes presentations, labs, demonstrations, workbooks, and group exercises.

Objectives

  • Describe machine learning operations
  • Understand the key differences between DevOps and MLOps
  • Describe the machine learning workflow
  • Discuss the importance of communications in MLOps
  • Explain end-to-end options for automation of ML workflows
  • List key Amazon SageMaker features for MLOps automation
  • Build an automated ML process that builds, trains, tests, and deploys models
  • Build an automated ML process that retrains the model when the model code changes
  • Identify elements and important steps in the deployment process
  • Describe items that might be included in a model package, and their use in training or inference
  • Recognize Amazon SageMaker options for selecting models for deployment, including support for ML frameworks, built-in algorithms, and bring-your-own models
  • Differentiate scaling in machine learning from scaling in other applications
  • Determine when to use different approaches to inference
  • Discuss deployment strategies, benefits, challenges, and typical use cases
  • Describe the challenges when deploying machine learning to edge devices
  • Recognize important Amazon SageMaker features that are relevant to deployment and inference
  • Describe why monitoring is important
  • Detect data drift in the underlying input data
  • Demonstrate how to monitor ML models for bias
  • Explain how to monitor model resource consumption and latency
  • Discuss how to integrate human-in-the-loop reviews of model results in production

Modules

Introduction to MLOps

  • Machine learning operations
  • Goals of MLOps
  • Communication
  • From DevOps to MLOps
  • ML workflow
  • Scope
  • MLOps view of ML workflow
  • MLOps cases

MLOps Development

  • Introduction to building, training, and evaluating machine learning models
  • MLOps security
  • Automating
  • Apache Airflow
  • Kubernetes integration for MLOps
  • Amazon SageMaker for MLOps (see the pipeline sketch after this list)
  • Lab: Bring your own algorithm to an MLOps pipeline
  • Demonstration: Amazon SageMaker
  • Lab: Code and serve your ML model with AWS CodeBuild
  • Activity: MLOps Action Plan Workbook
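
To give a feel for the SageMaker automation topics above, here is a minimal sketch of a SageMaker Pipelines definition with a single training step, written with the SageMaker Python SDK. The IAM role ARN, container image URI, and S3 bucket are placeholders, and a real pipeline such as the one built in the labs would add processing, evaluation, and deployment steps.

    import sagemaker
    from sagemaker.estimator import Estimator
    from sagemaker.inputs import TrainingInput
    from sagemaker.workflow.pipeline import Pipeline
    from sagemaker.workflow.steps import TrainingStep

    session = sagemaker.Session()
    role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # placeholder role ARN

    # Estimator for a bring-your-own-algorithm container (image URI and bucket are placeholders).
    estimator = Estimator(
        image_uri="123456789012.dkr.ecr.us-east-1.amazonaws.com/my-algorithm:latest",
        role=role,
        instance_count=1,
        instance_type="ml.m5.xlarge",
        output_path="s3://my-mlops-bucket/models/",
        sagemaker_session=session,
    )

    # A single training step; real pipelines add processing, evaluation,
    # and conditional model-registration/deployment steps.
    train_step = TrainingStep(
        name="TrainModel",
        estimator=estimator,
        inputs={"train": TrainingInput(s3_data="s3://my-mlops-bucket/data/train/")},
    )

    pipeline = Pipeline(name="mlops-demo-pipeline", steps=[train_step])
    pipeline.upsert(role_arn=role)  # create or update the pipeline definition
    execution = pipeline.start()    # start an execution of the pipeline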

MLOps Deployment

  • Introduction to deployment operations
  • Model packaging
  • Inference
  • Lab: Deploy your model to production
  • SageMaker production variants (see the A/B testing sketch after this list)
  • Deployment strategies
  • Deploying to the edge
  • Lab: Conduct A/B testing
  • Activity: MLOps Action Plan Workbook
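
As a companion to the production variants and A/B testing items above, the following sketch uses boto3 to create an endpoint that splits traffic between two model variants and later shifts the traffic weights. The model names, endpoint names, and instance types are placeholders, and both models are assumed to exist already.

    import boto3

    sm = boto3.client("sagemaker")

    # Endpoint configuration that splits traffic 80/20 between two existing models.
    sm.create_endpoint_config(
        EndpointConfigName="ab-test-config",
        ProductionVariants=[
            {
                "VariantName": "VariantA",
                "ModelName": "model-a",  # placeholder model name
                "InitialInstanceCount": 1,
                "InstanceType": "ml.m5.large",
                "InitialVariantWeight": 0.8,
            },
            {
                "VariantName": "VariantB",
                "ModelName": "model-b",  # placeholder model name
                "InitialInstanceCount": 1,
                "InstanceType": "ml.m5.large",
                "InitialVariantWeight": 0.2,
            },
        ],
    )

    # Create the endpoint from that configuration.
    sm.create_endpoint(
        EndpointName="ab-test-endpoint",
        EndpointConfigName="ab-test-config",
    )

    # Once the endpoint is InService, traffic can be shifted between
    # variants without recreating the endpoint.
    sm.update_endpoint_weights_and_capacities(
        EndpointName="ab-test-endpoint",
        DesiredWeightsAndCapacities=[
            {"VariantName": "VariantA", "DesiredWeight": 0.5},
            {"VariantName": "VariantB", "DesiredWeight": 0.5},
        ],
    )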

Model Monitoring and Operations

  • Lab: Troubleshoot your pipeline
  • The importance of monitoring
  • Monitoring by design
  • Lab: Monitor your ML model
  • Human-in-the-loop
  • Amazon SageMaker Model Monitor (see the monitoring sketch after this list)
  • Demonstration: Amazon SageMaker Pipelines, Model Monitor, model registry, and Feature Store
  • Solving the Problem(s)
  • Activity: MLOps Action Plan Workbook
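
To illustrate the Model Monitor and data drift topics above, here is a minimal sketch that baselines a training dataset and schedules hourly data quality checks against a live endpoint, using the SageMaker Python SDK. The role, bucket, and endpoint names are placeholders, and the endpoint is assumed to already have data capture enabled.

    from sagemaker.model_monitor import CronExpressionGenerator, DefaultModelMonitor
    from sagemaker.model_monitor.dataset_format import DatasetFormat

    role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # placeholder role ARN

    monitor = DefaultModelMonitor(
        role=role,
        instance_count=1,
        instance_type="ml.m5.xlarge",
        volume_size_in_gb=20,
        max_runtime_in_seconds=3600,
    )

    # Profile the training data to produce baseline statistics and constraints.
    monitor.suggest_baseline(
        baseline_dataset="s3://my-mlops-bucket/data/train/train.csv",
        dataset_format=DatasetFormat.csv(header=True),
        output_s3_uri="s3://my-mlops-bucket/monitoring/baseline/",
    )

    # Compare captured endpoint traffic against the baseline every hour;
    # violations (for example, data drift) appear in the generated reports.
    monitor.create_monitoring_schedule(
        monitor_schedule_name="drift-monitor",
        endpoint_input="my-production-endpoint",  # endpoint must have data capture enabled
        output_s3_uri="s3://my-mlops-bucket/monitoring/reports/",
        statistics=monitor.baseline_statistics(),
        constraints=monitor.suggested_constraints(),
        schedule_cron_expression=CronExpressionGenerator.hourly(),
    )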

Wrap-up

  • Course review
  • Activity: MLOps Action Plan Workbook
  • Wrap-up
