
DKube Lite           Deep Learning Platform for Your Workstation

  • Rapid AI prototyping on your own machine, in the office or in the cloud
  • Experiment with TensorFlow, JupyterLab, Kubeflow Pipelines and produce new models quickly
  • Ingest, pre-process, transform data
  • Run your models on your own machine or on a separate cluster

Hyper-parameter Optimization in DKube


Industrialization of AI

What It Can Learn from the Industrialization of Supermarkets
Relying on GitHub or similar code and data repositories alone, together with TensorFlow, PyTorch, or similar frameworks, is not sufficient to scale AI projects and deployments.




Integrated, intuitive machine learning/deep learning workflow enables users to focus on problem-solving rather than infrastructure

The solution works out-of-the-box on supported platforms

Data scientists can be working on models within 4 hours of the start of installation

Integrated Automation

Supports Kubeflow Pipelines for GUI-based automation

Provides Katib-based hyperparameter optimization

GitOps support through GitHub Actions
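Katib drives hyperparameter optimization inside DKube. Purely as an illustration of what such a search automates, here is a minimal random-search sketch in plain Python; the objective function and search space are hypothetical stand-ins for a real training run, not DKube or Katib APIs:

```python
import random

# Hypothetical search space -- Katib would express this in an Experiment spec.
SEARCH_SPACE = {
    "learning_rate": (1e-4, 1e-1),
    "batch_size": [16, 32, 64, 128],
}

def objective(learning_rate, batch_size):
    """Stand-in for a training run; returns a validation score to maximize."""
    # Toy surrogate that peaks near learning_rate=0.01, batch_size=32.
    return -abs(learning_rate - 0.01) - abs(batch_size - 32) / 100

def random_search(trials=20, seed=0):
    """Sample the search space at random and keep the best-scoring trial."""
    rng = random.Random(seed)
    best = None
    for _ in range(trials):
        params = {
            "learning_rate": rng.uniform(*SEARCH_SPACE["learning_rate"]),
            "batch_size": rng.choice(SEARCH_SPACE["batch_size"]),
        }
        score = objective(**params)
        if best is None or score > best[0]:
            best = (score, params)
    return best

best_score, best_params = random_search()
```

In Katib the same idea is declared rather than coded: the search space, objective metric, and search algorithm (random, grid, Bayesian) go into an Experiment definition, and trials run as containers on the cluster.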

Based on Open Standards

Enables best-in-class components, including TensorFlow, Scikit-Learn, JupyterLab, RStudio, Katib, & Kubeflow

Ability to rapidly add future frameworks & algorithms

Components integrated and validated together for robust & guaranteed operation

Support for Popular Platforms

Operating system support for Ubuntu & CentOS

Cloud and on-premises operation

Wide variety of Kubernetes platforms: Open source, GKE, EKS, & Rancher

Authentication & Authorization

Secure authentication through GitHub

Access and privileges granted per user

Integrated support for most Git-based repositories, including GitHub & Bitbucket

Version Control & Compare

Integrated version control for code, data, & models

Secure storage through GitHub, Bitbucket, AWS S3, Minio, & GCS

DKube stores full metadata

Easily compare different model versions to select the one that best meets your goals
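As a sketch of the kind of comparison this version metadata enables — the records and field names below are hypothetical, not DKube's actual schema:

```python
# Hypothetical per-version metadata, of the kind a model registry records.
versions = [
    {"version": "v1", "framework": "tensorflow", "accuracy": 0.91, "loss": 0.31},
    {"version": "v2", "framework": "tensorflow", "accuracy": 0.94, "loss": 0.24},
    {"version": "v3", "framework": "tensorflow", "accuracy": 0.93, "loss": 0.22},
]

def best_version(versions, metric="accuracy", higher_is_better=True):
    """Pick the version with the best value for the given metric."""
    pick = max if higher_is_better else min
    return pick(versions, key=lambda v: v[metric])

print(best_version(versions)["version"])                  # "v2" (highest accuracy)
print(best_version(versions, "loss", False)["version"])   # "v3" (lowest loss)
```

Because each version carries its full metadata, the same comparison works across code, data, and model lineage rather than metrics alone.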

CPU or GPU Workflow

Designed for CPU or GPU operation

GPUs are seamlessly allocated to users from a flexible pool, both within a server and between servers
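GPU scheduling is handled by the platform itself; as an illustrative sketch only (server names and the greedy policy are invented for this example), a flexible pool allocator that spills a request across servers might look like:

```python
# Hypothetical cluster state: free GPU counts per server in the pool.
pool = {"server-a": 4, "server-b": 2, "server-c": 8}

def allocate(pool, requested):
    """Greedily satisfy a GPU request, starting from the server with the most
    free GPUs and spilling over to others; roll back if the pool cannot cover it."""
    grant = {}
    remaining = requested
    for server in sorted(pool, key=pool.get, reverse=True):
        if remaining == 0:
            break
        take = min(pool[server], remaining)
        if take:
            grant[server] = take
            pool[server] -= take
            remaining -= take
    if remaining:  # insufficient capacity: undo partial grants
        for server, n in grant.items():
            pool[server] += n
        return None
    return grant

print(allocate(pool, 10))  # {'server-c': 8, 'server-a': 2}
```

A real scheduler would also weigh locality (keeping a job's GPUs on one server) against spreading load, but the pool abstraction is the same: users request GPUs, not specific machines.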


Customizable containers for simple extensibility

One Convergence can tailor the solution to your specific needs

Complete Data Scientist Workflow
  • Integrates deep learning components for experimentation, training & inference
  • Dynamic administration of users, groups and resources
  • Intuitive workflow management
  • Support for the latest frameworks and algorithms
  • Simple, flexible version control
  • Collaboration among users
Secure Collaboration
  • Secure policy-based collaboration mechanism
  • Dynamic allocation of users to groups
  • Unauthorized user access prevented
  • Users in groups share models, resources, & data
  • Flexible, granular access permissions
  • Heterogeneous, distributed pooling of resources across servers
Mobility, Scalability & Affordability
  • Cloud Native solution
  • Built on open platforms
  • Range of off-the-shelf systems
  • Choose the best balance of performance, flexibility, and cost
  • Automatic configuration of new nodes and resources
  • Easy migration from on-prem usage to the cloud, and between cloud providers
Flexible GUI
  • Intuitive operation
  • Charting options
  • Operator & Data Scientist dashboards
  • Accessible through APIs
  • Drag & drop management
Support & Services
  • Tiered levels of support
  • Integration with custom authentication systems
  • Tuning & optimization for target markets or specific platforms
  • Flexible integration of custom workflow stages in the pipeline