Model metrics allow the data scientist to determine whether the program goals have been achieved. DKube supports MLflow-based metric collection through an SDK. A few simple lines of code allow the metrics to be saved for display and comparison.
The chosen metrics can be visualized in real time in both tabular and flexible graph-based views. Once the runs have completed, the model metrics can be compared to understand the impact of different inputs on the trained model.