Learn how to integrate DKube with HPC/LSF clusters, including configuring the initial setup, scheduling preprocessing or training jobs (including Kubeflow Pipelines jobs), and analyzing results with MLflow-based model comparison metrics.
Learn how to launch a single model training or data preprocessing run through the DKube UI, passing new parameters to the training program via the UI and designated environment variables. Learn how to track the performance characteristics and lineage of the run, including the dataset and model versions used or created in the process.
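Inside the training program, parameters passed this way typically arrive as environment variables that the code reads at startup. The sketch below illustrates the general pattern; the variable names (EPOCHS, LEARNING_RATE, BATCH_SIZE) and defaults are illustrative assumptions, not names defined by DKube.

```python
import os

def get_hyperparameters():
    """Read training parameters from environment variables,
    falling back to defaults when a variable is unset.
    Variable names here are assumed for illustration."""
    return {
        "epochs": int(os.environ.get("EPOCHS", "10")),
        "learning_rate": float(os.environ.get("LEARNING_RATE", "0.001")),
        "batch_size": int(os.environ.get("BATCH_SIZE", "32")),
    }

if __name__ == "__main__":
    # The training loop would consume these values.
    params = get_hyperparameters()
    print(params)
```

Reading parameters from the environment keeps the training code unchanged across runs: the UI (or any scheduler) only needs to set the variables before launching the job.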
The Duchess of Windsor famously said that you can never be too rich or too thin. Whether or not that is true, a similar observation certainly applies when matching deep learning applications to compute resources: you can never have enough horsepower.