Reproducibility is a foundational principle of the scientific method: if an experiment cannot be repeated, its results cannot be trusted. DKube, an end-to-end Kubeflow-based MLOps platform, builds complete reproducibility into an integrated workflow. Without the ability to trace and repeat your work, it is not science.
The ability to track and reproduce your data, code, and models is critical throughout the ML/DL process.
Table to be inserted
Reproducibility has some important aspects:
The overall workflow for developing a model can be summarized by the following general phases:
These phases can be combined in different ways depending upon the size and formality of the organization, but the basic approach is similar for most data science projects.
The ML Engineer phase is where reproducibility is most valuable. The basic training code has been developed, and the entire environment now needs to be optimized for inference on real data.
With all those variables in play, the number of training runs and models can become large, and it is somewhere between challenging and impossible to analyze which option caused which outcome without some assistance from the platform.
DKube, an end-to-end MLOps platform, provides all this assistance automatically, and it is fully integrated into your workflow. DKube is based on Kubeflow, a standards-based platform that brings together best-in-class frameworks and systems. DKube extends this baseline to provide an integrated & supported DL/ML platform.
The first step in bringing order to this chaos is to use versioning when creating new models. When a training run is executed, the output can be either a new version of an existing model, or it can be an entirely new model.
Versioning is a way to group models that share a common heritage but differ in a limited number of inputs. For example, you might want to see how different hyperparameters impact your selected metrics. You can then compare the metrics across versions to determine the best fit.
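To make the idea concrete, here is a minimal sketch of model versioning in plain Python. This is an illustration of the concept, not DKube's actual API; the `Model` and `ModelVersion` names are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class ModelVersion:
    version: int
    hyperparameters: dict
    metrics: dict

@dataclass
class Model:
    name: str
    versions: list = field(default_factory=list)

    def add_version(self, hyperparameters, metrics):
        # Each training run with a shared heritage produces a new version.
        v = ModelVersion(len(self.versions) + 1, hyperparameters, metrics)
        self.versions.append(v)
        return v

    def best_version(self, metric, higher_is_better=True):
        # Compare versions on a selected metric to find the best fit.
        key = lambda v: v.metrics[metric]
        return max(self.versions, key=key) if higher_is_better else min(self.versions, key=key)

model = Model("churn-classifier")
model.add_version({"lr": 0.01, "batch_size": 32}, {"accuracy": 0.91})
model.add_version({"lr": 0.001, "batch_size": 64}, {"accuracy": 0.94})
best = model.best_version("accuracy")
print(best.version, best.hyperparameters)  # → 2 {'lr': 0.001, 'batch_size': 64}
```

A platform like DKube maintains this bookkeeping automatically for every run, so you never have to track versions by hand.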
In this example based on DKube, the model lineage is shown after a training run. The input code and datasets are provided, along with any additional hyperparameters, and the training run is identified.
Navigating to the associated code, dataset, or run is accomplished by selecting it directly from the lineage box. From this screen, you can:
Creating a new training run with different data or hyperparameters is direct and simple. You can access the run from the lineage screen and clone it right from there.
By tracing the lineage back to the program or dataset, you can also see where else that code or dataset was used. This provides insight into how broadly your inputs are drawn for training. For example, you want to ensure that you're not using the same dataset over and over, which might overfit your model to that data.
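The reverse lookup described above can be sketched in a few lines of plain Python. The lineage records and artifact names here are hypothetical, purely to show the idea of finding every run that consumed a given code or dataset version.

```python
# Hypothetical lineage records: each training run lists the code and
# dataset versions it consumed.
runs = [
    {"run": "run-001", "code": "train.py@v1", "dataset": "images@v3"},
    {"run": "run-002", "code": "train.py@v2", "dataset": "images@v3"},
    {"run": "run-003", "code": "train.py@v2", "dataset": "images@v4"},
]

def used_by(artifact, key, lineage):
    """Return the runs whose lineage references the given code or dataset version."""
    return [r["run"] for r in lineage if r[key] == artifact]

# Which runs trained on the same dataset version?
print(used_by("images@v3", "dataset", runs))  # → ['run-001', 'run-002']
```

If many runs point back to the same dataset version, that is a signal to diversify your training inputs.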
Finally, once you have a workflow established, DKube enables flexible and powerful automation through Kubeflow Pipelines or CI/CD.
DKube enables best-in-class components to be brought together for your experiments and training. And it allows data scientists to focus on the science.
Want to learn how to monitor your models in production? The DKube platform integrates model monitoring into the overall system with DKube Monitor. It includes everything necessary for engineers and executives to identify how well your models are achieving their business goals - and facilitates a smooth workflow to improve them when necessary.
DKube also lets you launch hyperparameter tuning runs that take advantage of the Katib-based tuning available in Kubeflow, then pick the best model from the multiple training runs as the winner based on pre-set criteria.
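Conceptually, a tuning experiment sweeps a search space of hyperparameters, runs one trial per combination, and selects a winner by a pre-set objective. The following grid-search sketch in plain Python illustrates that loop; the `train` function is a deterministic stand-in for a real training job, not Katib's API.

```python
import itertools

def train(lr, batch_size):
    # Stand-in for a real training job; returns a validation accuracy.
    # (Deterministic toy formula so the example is self-contained.)
    return round(0.90 + 0.04 * (lr == 0.001) + 0.02 * (batch_size == 64), 3)

search_space = {"lr": [0.01, 0.001], "batch_size": [32, 64]}

# One trial per combination in the grid.
trials = [dict(zip(search_space, combo))
          for combo in itertools.product(*search_space.values())]

results = [(params, train(**params)) for params in trials]

# Pre-set criterion: highest validation accuracy wins.
winner, score = max(results, key=lambda r: r[1])
print(winner, score)  # → {'lr': 0.001, 'batch_size': 64} 0.96
```

Katib automates exactly this pattern at scale, with smarter search algorithms than an exhaustive grid, while DKube records each trial's lineage.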
The Duchess of Windsor famously said that you could not be too rich or too thin. And whether or not that is correct, a similar observation is definitely true when trying to match deep learning applications and compute resources: you cannot have enough horsepower.