
Serving a model

Model deployment is another important step of the ML lifecycle. The MLflow command line interface includes a command for serving models. Models can be served with MLflow from the local filesystem, from MLflow Tracking, or from cloud storage services such as Amazon S3.
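As a sketch of the serving workflow described above, assuming MLflow is installed: a model logged to MLflow Tracking can be referenced with a `runs:/` URI and served locally. The `RUN_ID` value and the `model` artifact path below are placeholders, not values from this exercise.

```shell
# Hypothetical run ID, as copied from the MLflow Tracking UI
RUN_ID="abc123"

# Build the model URI for a model logged under the run's "model" artifact path
MODEL_URI="runs:/${RUN_ID}/model"

# Start a local REST scoring server for that model on port 5000
mlflow models serve -m "$MODEL_URI" --port 5000
```

Once the server is running, predictions can be requested by POSTing data to its `/invocations` endpoint.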

Which of the following commands serves a model from MLflow Tracking using its run_id?

This exercise is part of the course Introduction to MLflow.

