Description
Ever wondered how the machine learning models you experiment with on your laptop can go live for the world to use? This hands-on workshop takes you from a simple notebook to a fully deployed model in just one hour. We'll start small, training a small ML model right on your Ubuntu machine, and then scale out, containerizing the model and deploying it on MicroK8s. Finally, we'll reflect on how these techniques fit into real-world MLOps workflows, giving you practical strategies for scaling, monitoring, and integrating with open-source tools. By the end, you'll have taken your model from code to cloud and seen how Ubuntu makes it all possible.
Learning Outcomes:
Learn to set up a reproducible ML development environment on Ubuntu (see the environment sketch after this list).
Train a small ML model (image classifier or sentiment analysis) from scratch (see the training sketch below).
Containerize your model with Docker or Podman, and deploy it on a lightweight Kubernetes cluster such as MicroK8s (see the build-and-deploy sketch below).
Expose a REST API endpoint to interact with your model (see the serving sketch below).
Gain insight into real-world MLOps practices, including scaling strategies, monitoring, and workflow integration (see the monitoring sketch below).
Understand how the same containerized model can be extended to edge devices or multi-cloud deployments.
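The sketches below illustrate several of these outcomes in Python. File names, package versions, and tool choices are assumptions made for illustration, not the workshop's prescribed stack. First, a minimal reproducible-environment setup: an isolated virtual environment with pinned dependencies, recorded so the same setup can be recreated later.

```python
# env_setup.py - a minimal sketch of a reproducible setup on Ubuntu; the file
# names and pinned versions are illustrative assumptions (requires the
# python3-venv package to be installed).
import subprocess
import venv

# Create an isolated virtual environment with pip available.
venv.create(".venv", with_pip=True)

# Install exact, pinned versions so every attendee gets the same stack.
pinned = ["scikit-learn==1.4.2", "flask==3.0.3", "joblib==1.4.2"]
subprocess.run([".venv/bin/pip", "install", *pinned], check=True)

# Record the fully resolved environment so it can be recreated elsewhere.
with open("requirements.lock", "w") as lock:
    subprocess.run([".venv/bin/pip", "freeze"], stdout=lock, check=True)
```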
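Next, a training sketch for the sentiment-analysis option: a scikit-learn pipeline fitted on a tiny placeholder dataset (the workshop's actual data is not specified here) and serialized so it can later be copied into a container image.

```python
# train.py - a minimal sketch of the "small model" step, using placeholder data.
import joblib
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "I loved this film", "Fantastic experience", "Great acting and story",
    "Terrible plot", "I hated every minute", "Boring and far too long",
]
labels = [1, 1, 1, 0, 0, 0]  # 1 = positive, 0 = negative

# TF-IDF features feeding a logistic-regression classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["What a great workshop"]))  # most likely [1]

# Serialize the fitted pipeline so it can be copied into a container image.
joblib.dump(model, "model.joblib")
```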
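A serving sketch wraps the saved model in a REST endpoint. Flask and the /predict route are illustrative assumptions; the workshop does not prescribe a specific web framework.

```python
# serve.py - a minimal sketch of the REST endpoint around model.joblib.
import joblib
from flask import Flask, jsonify, request

app = Flask(__name__)
model = joblib.load("model.joblib")  # pipeline saved by the training sketch

@app.route("/predict", methods=["POST"])
def predict():
    payload = request.get_json(force=True)
    text = payload.get("text", "")
    label = int(model.predict([text])[0])
    return jsonify({"text": text, "sentiment": "positive" if label else "negative"})

if __name__ == "__main__":
    # Bind to all interfaces so the endpoint is reachable from outside a container.
    app.run(host="0.0.0.0", port=5000)
```

Run locally, the endpoint can be exercised with a simple client call (inside the cluster, the Service address would replace localhost:5000):

```python
import requests

resp = requests.post("http://localhost:5000/predict", json={"text": "I loved this film"})
print(resp.json())
```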
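A build-and-deploy sketch, driven from Python for consistency with the other examples. It assumes a Dockerfile in the current directory that copies model.joblib and the serving script above, the MicroK8s registry addon listening on localhost:32000, and cluster credentials in ~/.kube/config (for example exported with microk8s config). The docker and kubernetes Python packages stand in here for the docker/podman and kubectl commands the workshop may actually use.

```python
# deploy.py - a minimal sketch: build and push the image, then create a Deployment.
import docker
from kubernetes import client as k8s, config

IMAGE = "localhost:32000/sentiment-api:latest"  # assumed MicroK8s registry address

# Build the image from the local Dockerfile and push it to the registry.
docker_client = docker.from_env()
docker_client.images.build(path=".", tag=IMAGE)
docker_client.images.push(IMAGE)

# Describe a one-replica Deployment running that image on port 5000.
config.load_kube_config()
deployment = k8s.V1Deployment(
    metadata=k8s.V1ObjectMeta(name="sentiment-api"),
    spec=k8s.V1DeploymentSpec(
        replicas=1,
        selector=k8s.V1LabelSelector(match_labels={"app": "sentiment-api"}),
        template=k8s.V1PodTemplateSpec(
            metadata=k8s.V1ObjectMeta(labels={"app": "sentiment-api"}),
            spec=k8s.V1PodSpec(containers=[
                k8s.V1Container(
                    name="sentiment-api",
                    image=IMAGE,
                    ports=[k8s.V1ContainerPort(container_port=5000)],
                )
            ]),
        ),
    ),
)
k8s.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```

Exposing the Deployment through a NodePort or LoadBalancer Service then gives the REST endpoint a stable address inside and outside the cluster.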
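Finally, a small monitoring sketch: a Prometheus counter the /predict handler can increment, exposed for scraping on a separate port. prometheus_client is an assumption; the workshop does not name a specific monitoring stack.

```python
# metrics.py - a minimal monitoring sketch using the prometheus_client library.
from prometheus_client import Counter, start_http_server

PREDICTIONS = Counter("predictions_total", "Prediction requests served")

def record_prediction() -> None:
    # Call this from the /predict handler after each request.
    PREDICTIONS.inc()

def start_metrics_endpoint(port: int = 8000) -> None:
    # Exposes /metrics on the given port for Prometheus to scrape.
    start_http_server(port)
```

Calling record_prediction() inside the serving handler and scraping port 8000 gives per-replica request counts that track with the Deployment as it scales.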
Audience Takeaways:
Participants will leave with a practical, end-to-end understanding of ML model deployment, from local training to cloud-ready infrastructure, plus the confidence to replicate this workflow for their own projects. Ubuntu becomes their trusted launchpad for turning notebooks into deployable services.