Collecting metrics is important only to the extent that they can be used to determine how well the trained model does its job. DKube provides a rich set of MLflow-based display options, invoked automatically from a list of runs or models.
Metrics are available in real time as the run progresses, allowing the data scientist to follow along. This matters most for complex runs or runs operating on large datasets, which can take days or weeks and consume significant time and resources.
Once the run is complete, the stored metrics are available for display from both the run and model screens, in both tabular and graphical formats, giving you the flexibility to view them in the most appropriate form.
The graphical display is flexible and intuitive: you can choose the metrics you want to view and the timeline of interest, all from within the full point-and-click MLOps UI and workflow. Because metrics are saved as part of DKube's versioning capability, accessing the metrics for a training run is as simple as choosing it from a list of completed runs.