Deploying and managing machine learning models can be a daunting task, but DKube makes it easier. DKube converts your trained models into a format that can be easily consumed by other systems, such as a web service or API, and handles all of the infrastructure setup so you can focus on what you do best: building models.
DKube takes care of scalability, security, and monitoring, so your served models remain responsive and reliable even under high-traffic conditions.
Once you’ve optimized your code, data, and model workflow, the next step is to automate the process. DKube provides several flexible mechanisms for automating your workflow.
DKube supports Kubeflow Pipelines natively, providing a graphical way to view and execute a predefined set of steps: setup, preprocessing, training, serving, or any other action needed to achieve the required results. A pipeline can also accept inputs at execution time.
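Conceptually, a pipeline is a directed acyclic graph of steps, each running only after its dependencies finish. The stdlib-only sketch below illustrates that ordering idea; the step names are hypothetical, and a real DKube pipeline would define each step as a Kubeflow Pipelines component rather than a Python function.

```python
from graphlib import TopologicalSorter  # Python 3.9+

def run_step(name, params):
    # Stand-in for a real pipeline step (container, job, etc.)
    print(f"running {name} with {params}")
    return f"{name}-done"

# Each step maps to the list of steps it depends on.
pipeline = {
    "setup": [],
    "preprocess": ["setup"],
    "train": ["preprocess"],
    "serve": ["train"],
}

def execute(pipeline, params):
    results = {}
    # static_order() yields steps in dependency order.
    for step in TopologicalSorter(pipeline).static_order():
        results[step] = run_step(step, params)
    return results

results = execute(pipeline, {"epochs": 5})
```

Because the graph here is a simple chain, the steps always run as setup, preprocess, train, serve.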
The automation can even be triggered when the GitHub repo is updated. When the code changes, a set of steps can run automatically to build an image, start a job, or run a pipeline.
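The trigger logic can be pictured as mapping an incoming GitHub push event to an action. The sketch below is illustrative only: the payload fields (`ref`, `repository`) follow GitHub's push-webhook schema, but the branch-to-action rules are invented, not DKube's actual configuration.

```python
import json

# Hypothetical branch-to-action rules.
TRIGGERS = {
    "refs/heads/main": "run-pipeline",
    "refs/heads/dev": "build-image",
}

def on_push(payload):
    """Return the automation action for a GitHub push payload, if any."""
    action = TRIGGERS.get(payload.get("ref"))
    if action:
        print(f"{payload['repository']['name']}: {action}")
    return action

# A minimal push event, as GitHub would deliver it.
event = json.loads('{"ref": "refs/heads/main", "repository": {"name": "demo"}}')
action = on_push(event)
```

Pushes to branches with no rule simply produce no action, so feature branches do not kick off builds.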
Once you have identified the model that best matches your target goals, publish it to the Model Catalog to mark it as a candidate for production. A Production Engineer reviews the models in the catalog and deploys the appropriate version using KFServing, the standard Kubeflow serving framework.
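Under KFServing, a deployment is described by an InferenceService resource. The helper below builds such a manifest as a plain Python dict; the field layout follows KFServing's v1beta1 schema, while the model name and storage URI are placeholders, and DKube's actual deployment flow may differ.

```python
def inference_service(name, storage_uri):
    """Build a minimal KFServing v1beta1 InferenceService manifest.

    The sklearn predictor and s3 URI are illustrative choices.
    """
    return {
        "apiVersion": "serving.kubeflow.org/v1beta1",
        "kind": "InferenceService",
        "metadata": {"name": name},
        "spec": {
            "predictor": {
                "sklearn": {"storageUri": storage_uri},
            },
        },
    }

svc = inference_service("churn-model", "s3://models/churn/v3")
print(svc["metadata"]["name"])
```

Applying such a manifest to the cluster (for example with `kubectl apply`) is what actually creates the serving endpoint.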
The serving image can optionally include preprocessing code that runs before inference, and postprocessing code that shapes the output before the results are sent to the client.
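The pre/post-processing hooks wrap the model like a sandwich around inference. The toy sketch below shows the shape of that flow; the normalization constant, the stand-in "model", and the response format are all invented for illustration.

```python
def preprocess(raw):
    # e.g. normalize raw pixel values into [0, 1] before inference
    return [x / 255.0 for x in raw]

def predict(features):
    # Stand-in for the actual model inference call.
    return sum(features)

def postprocess(score):
    # Shape the raw model output into a client-friendly response.
    return {"label": "positive" if score > 0.5 else "negative",
            "score": round(score, 3)}

# The serving path: preprocess -> predict -> postprocess.
response = postprocess(predict(preprocess([51, 102, 204])))
print(response)
```

Keeping the three stages separate means the model itself never has to know about wire formats or client conventions.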
The served image can be monitored through a dashboard to ensure efficient execution. Once a model has been deployed at an endpoint, the served model can be swapped at that endpoint, allowing an easy migration from one version of a model to another.
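Swapping the model at an endpoint amounts to updating the serving spec to point at a new model artifact while the endpoint's name stays stable for clients. The sketch below assumes a KFServing-style spec with a `storageUri` field; the URIs and resource layout are placeholders.

```python
import copy

def update_model(service, new_uri):
    """Return a copy of the serving spec pointing at a new model version."""
    updated = copy.deepcopy(service)
    updated["spec"]["predictor"]["sklearn"]["storageUri"] = new_uri
    return updated

# Current deployment, serving version v3 of a hypothetical model.
current = {
    "metadata": {"name": "churn-model"},
    "spec": {"predictor": {"sklearn": {"storageUri": "s3://models/churn/v3"}}},
}

# Roll the endpoint forward to v4; the endpoint name is unchanged.
rolled = update_model(current, "s3://models/churn/v4")
print(rolled["spec"]["predictor"]["sklearn"]["storageUri"])
```

Because the original spec is copied rather than mutated, rolling back is as simple as re-applying the previous version.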