Comparing Models

The central focus of an MLOps platform - and the reason for much of the rest of the development process - is comparing models: understanding how different inputs affect model quality, and determining which models best achieve the program goals based on the chosen metrics.

DKube provides this fundamental decision-making capability in a powerful, flexible, and intuitive manner through its combination of Kubeflow and MLflow.

The process starts with how the models are saved and organized. When submitting a training run, the resulting model can be saved as a new version of an existing model, or designated as an entirely new model. This lets the data scientist impose structure on a process that can involve hundreds of runs by grouping them in a manageable way.
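
As a concrete illustration of the underlying mechanism, the following minimal sketch shows how a run's parameters, metrics, and model version are typically recorded with the MLflow tracking API that DKube builds on. The experiment, run, metric, and model names ("churn-prediction", "baseline-lr", "accuracy", "churn-model") are illustrative placeholders, not DKube defaults.

    import mlflow
    import mlflow.sklearn
    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression

    # Illustrative names; DKube manages its own experiment and model structure.
    mlflow.set_experiment("churn-prediction")

    X, y = load_iris(return_X_y=True)

    with mlflow.start_run(run_name="baseline-lr"):
        model = LogisticRegression(max_iter=200).fit(X, y)
        mlflow.log_param("max_iter", 200)
        mlflow.log_metric("accuracy", model.score(X, y))
        # Reusing an existing registered-model name adds a new version;
        # a new name creates a new registered model.
        mlflow.sklearn.log_model(model, "model", registered_model_name="churn-model")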

Learn more about Versioning and Tracking

Once several training runs have completed, the resulting models can be compared by selecting them from a list. The comparison can span versions of the same model or versions of different models; it is completely flexible. You can even compare models created by other users in the same group.
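
For readers who prefer to script it, a comparable side-by-side view can be assembled directly against the MLflow tracking store. This is a sketch under the assumption that the runs logged the illustrative "accuracy" metric and "max_iter" parameter used above.

    import mlflow

    # Fetch every run from the (illustrative) experiment as a pandas DataFrame.
    runs = mlflow.search_runs(experiment_names=["churn-prediction"])

    # Side-by-side view of the metrics and parameters of interest, best accuracy first.
    comparison = runs[["run_id", "params.max_iter", "metrics.accuracy"]]
    print(comparison.sort_values("metrics.accuracy", ascending=False))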

The comparison is presented in a tabular display and in a number of flexible graphical displays. You can choose the metrics to compare and the timeline to plot them against. The results can be shown in a simple X/Y format or in a more sophisticated graph such as a scatter plot, a contour plot, or a parallel coordinates plot.
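
Outside the DKube UI, a similar parallel coordinates view can be sketched with pandas and matplotlib; the column names below assume the same illustrative metric and parameter as the earlier snippets.

    import matplotlib.pyplot as plt
    import mlflow
    from pandas.plotting import parallel_coordinates

    runs = mlflow.search_runs(experiment_names=["churn-prediction"])

    # One line per run across the chosen hyperparameter and metric dimensions.
    df = runs[["params.max_iter", "metrics.accuracy"]].astype(float)
    df["run"] = runs["run_id"].str[:8]  # short label for the legend

    parallel_coordinates(df, class_column="run", colormap="viridis")
    plt.title("Run comparison: max_iter vs. accuracy")
    plt.show()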


Once the comparison is complete, decisions about next steps can be made entirely within the integrated MLOps interface and workflow. One or more models can be chosen for possible deployment, or a new run can be cloned from an existing run based on the metric analysis.
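
As a sketch of the hand-off toward deployment (not DKube's specific deployment API), the best run from the comparison could be registered as a new model version for a serving step to pick up; the names again follow the earlier illustrative examples.

    import mlflow

    runs = mlflow.search_runs(experiment_names=["churn-prediction"])
    best = runs.sort_values("metrics.accuracy", ascending=False).iloc[0]

    # Register the winning run's model artifact as the next version
    # of the (illustrative) "churn-model" registered model.
    model_uri = f"runs:/{best.run_id}/model"
    mlflow.register_model(model_uri, "churn-model")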


Learn more about the DKube MLOps platform