MLOps with Kubeflow Pipelines v2, MLflow, and Seldon Core: Part 3
This is the third part of a four-part MLOps series.
Part 1: Introduction to the basic concepts and installation on a local system.
Part 2: Understanding the Kubeflow pipeline and its components.
Part 3: Understanding the MLflow server UI for logging parameters, code versions, metrics, and output files.
Part 4: Deploying the model with a Seldon Core server on Kubernetes.
After a successful run of the pipeline, you can visit the MLflow server URL, where the home screen shows the experiment tracking and model registration details.
The details of an experiment can be inspected by clicking its run name on the experiment page. The MLflow server has logged the model parameters, metrics, and model artifacts that were specified during the Kubeflow pipeline run.
The MLflow server also provides an option to compare two runs, as shown in the picture below.
For each successful run of the pipeline, a model is registered with its version details under the Models tab.
Since we configured the artifact root storage to be MinIO S3 in the MLflow server deployment in Part 1 of this series, we can log in to MinIO to check the registered artifacts in the bucket.
In the next article, we will see how to deploy this model using the Seldon Core CRD.