Mohitverma

Summary

This article, the third part of a four-part series on MLOps, focuses on using the MLflow server UI for tracking experiment parameters, code versions, metrics, and output files, as well as model registration, following the setup and execution of a Kubeflow pipeline.

Abstract

The article is the third installment in a comprehensive MLOps series that guides readers through the process of implementing MLOps practices using Kubeflow Pipeline V2, MLflow, and Seldon Core. It specifically delves into the functionality of the MLflow server UI, emphasizing how users can log and review various aspects of their machine learning experiments, including parameters, code versions, metrics, and output files. The article also explains how to register models with their version details and how to use Minio S3 for artifact storage, which was set up in the first part of the series. The upcoming fourth part promises to cover the deployment of these models using Seldon Core CRD.

Opinions

  • The MLflow server UI is presented as a user-friendly and effective tool for experiment tracking and model registration within the MLOps workflow.
  • The use of Minio S3 as an artifact root storage solution is highlighted as a practical approach for storing and accessing registered artifacts.
  • The article implies that the integration of MLflow with Kubeflow Pipeline V2 and Seldon Core provides a robust framework for end-to-end MLOps processes.
  • The ability to compare different runs within a single experiment is showcased as a valuable feature of the MLflow server UI for analyzing and improving model performance.

MLOps with Kubeflow Pipeline V2, MLflow, Seldon Core: Part 3

This is the third part of the four-part MLOps series.

Part 1: Introduction to the basic concepts and installation on a local system.

Part 2: Understanding the Kubeflow pipeline and components.

Part 3: Understanding the MLflow server UI for logging parameters, code versions, metrics, and output files.

Part 4: Deploying the model with the Seldon Core server over Kubernetes.

After the pipeline has run successfully, you can visit the MLflow server URL to check the home screen for the experiment tracking and model registration details.

mlflow server experiments

The details of an experiment can be checked by clicking the run name in the experiment list. The MLflow server has successfully logged the model parameters, metrics, and model artifacts that were specified during the Kubeflow pipeline run.

mlflow server experiment details

The MLflow server also provides an option to compare two runs, as shown in the picture below.

comparing 2 runs in single experiment

For each successful run of the pipeline, a model is registered with its version details in the Models tab.

model version logged by mlflow server

Since we configured Minio S3 as the artifact root storage in the MLflow server deployment in Part 1 of this series, we can log in to Minio S3 to check the registered artifacts in the bucket.

mlflow experiments artifact store
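For a client to read or write those artifacts directly, MLflow's S3 artifact client (boto3) must be pointed at Minio instead of AWS. A small configuration sketch; the endpoint and credentials are illustrative placeholders for the values from the Minio deployment in Part 1:

```python
import os

# Illustrative endpoint and credentials; substitute the values from the
# Minio deployment in Part 1. MLflow's S3 artifact client reads these
# environment variables to reach the Minio bucket.
os.environ["MLFLOW_S3_ENDPOINT_URL"] = "http://minio-service:9000"
os.environ["AWS_ACCESS_KEY_ID"] = "minio"
os.environ["AWS_SECRET_ACCESS_KEY"] = "minio123"
```

With these set, calls such as `mlflow.log_artifact` transparently upload to the Minio bucket that the server was configured with as its artifact root.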

In the next article, we will see how to deploy this model using the Seldon Core CRD.

MLflow
MLOps