Data Science Tutorial

This section takes the first-time user through the DKube workflow using a sample program and dataset. The MNIST model is used to provide a simple, successful initial experience. The details of the full workflow and the definitions of all of the fields are available at Data Science Dashboard & Workflow

More examples are available at DKube Examples

General Workflow

This section uses a simple workflow with a limited number of fields and options. A more detailed explanation of the entire workflow and all of the fields is available in the table below.

Workflow Action                                      Detailed Explanation
Load the program code, dataset, and model folders    Repos
Create and open a DKube JupyterLab Notebook          IDEs
Create a Training Run                                Runs
Create a model from the Training Run                 Models
Publish & deploy the model                           Model Actions

Create Code Repo

Load the MNIST code folder from a GitHub repository into DKube from the Code menu by selecting + Code

_images/Data_Scientist_Code_Tutorial_R30.png

The fields should be filled in as follows, then select Add Code

Field          Value
Name           dkube-examples
Code Source    Git
URL            https://github.com/oneconvergence/dkube-examples.git
Branch         tensorflow

_images/Data_Scientist_Code_mnist_R22.png

This will create the dkube-examples Code repo within DKube.

_images/Data_Scientist_Code_Success_R30.png

Create Dataset Repo

Load the MNIST dataset folder into DKube. This is accomplished from the Datasets menu by selecting + Dataset

_images/Data_Scientist_Datasets_Tutorial_R30.png

The fields should be filled in as follows, then select Add Dataset

Field            Value
Name             mnist
Versioning       DVS
Dataset Source   Other
URL              https://s3.amazonaws.com/img-datasets/mnist.pkl.gz

_images/Data_Scientist_Dataset_mnist.png

This will create the mnist Dataset.

_images/Data_Scientist_Dataset_Success_R30.png

Create Model Repo

A Model needs to be created that will become the basis for the output of the Training Run later in the process. This is accomplished from the Models menu by selecting + Model

_images/Data_Scientist_Models_Tutorial_R30.png

The fields should be filled in as follows, then select Add Model

Field          Value
Name           mnist
Versioning     DVS
Model Store    default
Model Source   None

_images/Data_Scientist_Models_mnist.png

Create Notebook

Create a JupyterLab Notebook from the IDEs menu to experiment with the program by selecting + JupyterLab

_images/Data_Scientist_Notebooks_Tutorial_R30.png

Fill in the fields as shown.

Basic Submission Screen

Field               Value
Name                mnist
Code                dkube-examples
Framework           Tensorflow
Framework Version   2.0.0
Image               Will be filled by default - do not change

_images/Data_Scientist_Notebook_mnist_Basic_R30.png

All the other fields should be left in their default state. Do not submit at this point. Select the Repos tab.

Repos Submission Screen

Field        Value
Dataset      mnist
Version      Select ver 1
Mount Path   /mnist

_images/Data_Scientist_Notebook_mnist_Repo_Dataset.png

The mount path is the path used within the program code to access the input dataset. This is described in more detail at Mount Path.
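As a concrete illustration of the mount path: inside the Notebook container, the Dataset repo's contents appear under /mnist, so the program can read the downloaded archive directly from that path. A minimal sketch follows; the file name and pickled layout are assumptions based on the dataset URL used earlier in this tutorial, and the tutorial code in dkube-examples already handles this itself.

```python
import gzip
import pickle

def load_mnist(path="/mnist/mnist.pkl.gz"):
    """Read the MNIST archive from the Dataset mount path.

    /mnist is the Mount Path chosen on the Repos tab. The file name
    and the gzipped-pickle layout are assumptions based on the
    dataset URL in this tutorial.
    """
    with gzip.open(path, "rb") as f:
        return pickle.load(f, encoding="latin1")
```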

All the other fields should be left in their default state. Select Submit to start the Notebook.

Note

The initial Notebook will take a few minutes to start. Follow-on Notebooks with the same framework version will start more quickly.

_images/Data_Scientist_Notebook_Success.png

Open JupyterLab Notebook

Open a JupyterLab notebook by selecting the Jupyter icon under Actions on the far right.

_images/Data_Scientist_Jupyter_mnist.png

The code is located at workspace/dkube-examples/mnist

There is no need to change any code in this tutorial. The instructions are meant to provide the details on how to use DKube to experiment with your program code. Your programs will have a different folder structure.

The next step creates a training run.

Note

The Training Run can be created directly from the Notebook, as described in Create Training Run. This fills in most of the Run fields with the information that was provided during IDE creation. This tutorial follows the more general Run creation procedure.

Create Training Run

Create a Training Run from the Runs menu to train the mnist model on the dataset and create a trained model.

_images/Data_Scientist_Run_mnist_R30.png

Fill in the fields as shown.

Basic Submission Screen

Field               Value
Name                mnist
Code                dkube-examples
Framework           tensorflow
Framework Version   2.0.0
Start-up script     python mnist/train.py

_images/Data_Scientist_Run_mnist_Basic_R30.png

All the other fields should be left in their default state. Select the Repos tab.

Repos Submission Screen

In order to submit a Training Run:

  • A Dataset needs to be selected for input

  • A Model needs to be selected for output

Input Selections

Field        Value
Dataset      mnist
Version      Select ver 1
Mount Path   /mnist

The mount path is the path used within the program code to access the input dataset. This is described in more detail at Mount Path.

_images/Data_Scientist_Run_mnist_Repo_Dataset_R22.png

Output Selection

A Model needs to be selected for the Training Run output.

Field        Value
Model        mnist
Mount Path   /model

After the fields have been completed, select Submit

_images/Data_Scientist_Run_mnist_Repo_Model_R22.png
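The two Repos selections define a simple contract for the start-up script: mnist/train.py reads its input data under /mnist and writes its trained model under /model, where DKube captures it as a new Model version when the Run completes. The actual training code lives in the dkube-examples repo; the sketch below only illustrates the output side of that contract, and the helper name and artifact file name are hypothetical.

```python
import os

DATA_PATH = "/mnist"   # input Dataset mount path (Repos tab)
MODEL_PATH = "/model"  # output Model mount path (Repos tab)

def save_model(model_dir, weights):
    """Write trained artifacts under the output mount path.

    When the Run completes, DKube snapshots everything under the
    Model mount path into a new Model version (ver 2 in this
    tutorial). The artifact file name here is illustrative.
    """
    os.makedirs(model_dir, exist_ok=True)
    artifact = os.path.join(model_dir, "model.weights")
    with open(artifact, "wb") as f:
        f.write(weights)
    return artifact
```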

Note

The initial Run will take a few minutes to start. Follow-on Runs with the same framework version will start more quickly.

The Training Run will appear in the Runs menu.

_images/Data_Scientist_Run_Success_R30.png

View Trained Model

Once the Run status shows Complete, a trained Model has been created. The trained Model will appear in the Models Repo.

_images/Data_Scientist_Models_Trained_mnist_R30.png

Selecting the trained Model will provide the details on the model, including the versions.

_images/Data_Scientist_Models_mnist_Detail_R30.png
  • Ver 1 of the model is the initial blank version that was created earlier in the tutorial in order to set up the versioning capability

  • Ver 2 is the new model that was created by the training run

Selecting a version will show more details on the model version, including the lineage. The lineage provides all of the inputs required to create this model.

_images/Data_Scientist_Models_mnist_Lineage_R30.png

Publish Model

_images/Model_Stages_Tutorial_R30.png

Publishing a Model identifies it as a deployment candidate for possible Production Serving by the Production Engineer. After being published, the stage for that Model version changes on the details screen. The details screen can filter the Models to show only Published Models by selecting this option at the top of the screen.

_images/Data_Scientist_Models_Tutorial_Publish_R30.png

A Model is published from the Model details page by selecting the button on the far right of the line that contains the Model version to be published. Selecting the button opens a popup that collects more details. Use the following inputs for the fields.

All of the fields should be left in their default state except for the following:

Field                Value
Transformer          Select
Transformer Script   mnist/transformer.py

_images/Data_Scientist_Models_Tutorial_Publish_Popup_R30.png
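The Transformer Script runs alongside the served model to convert raw inference requests into the tensors the model expects, and model outputs back into client-friendly responses. The real mnist/transformer.py in dkube-examples is written against the serving framework's transformer API; the framework-free sketch below only illustrates the two roles such a script plays, and the payload shapes are assumptions.

```python
def preprocess(instance):
    """Flatten a raw 28x28 grayscale image (0-255 integers) into the
    scaled [0, 1] float vector a digit classifier expects.
    The input shape is an assumption for illustration."""
    return [px / 255.0 for row in instance for px in row]

def postprocess(probabilities):
    """Map the model's 10-way softmax output to a predicted digit."""
    digit = max(range(len(probabilities)), key=probabilities.__getitem__)
    return {"digit": digit}
```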

Once published, the model stage changes to reflect that.

_images/Data_Scientist_Models_Publish_Stage_R30.png

Deploy Model

A Deployed Model runs on the serving cluster and exposes APIs for live inference.

The Production Engineer deploys a Model from the Model details screen after testing it to ensure that it meets the project goals. In this tutorial, we will deploy the Model that we just Published.

The Model is deployed by selecting the Deploy button on the far right of the line.

_images/Data_Scientist_Models_Tutorial_Deploy_R30.png

This will bring up a menu to provide the deployment details. All the fields should be left in their default state except for the following.

Field                Value
Name                 mnist
Deployment           Test
Transformer          Selected
Transformer Script   mnist/transformer.py
CPU/GPU              CPU

_images/Data_Scientist_Models_Tutorial_Deploy_Popup_R30.png

The deployed Model will appear in the Deployments screen. The serving endpoint is exposed for live inference.

_images/Data_Scientist_Deployments_Tutorial_R33.png
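Once the deployment is live, clients send JSON requests to the serving endpoint shown on the Deployments screen. The sketch below builds such a request; the endpoint URL, the {"instances": [...]} payload shape, and the bearer-token header are assumptions about a typical KFServing-style endpoint, so check the Deployments screen for the actual URL and whatever authentication your cluster requires.

```python
import json
from urllib import request

def build_predict_request(endpoint_url, pixels, token=None):
    """Build an inference request for a deployment endpoint.

    endpoint_url, the {"instances": [...]} body shape, and the
    Authorization header are assumptions for illustration; take the
    real endpoint from the Deployments screen.
    """
    body = json.dumps({"instances": [pixels]}).encode()
    headers = {"Content-Type": "application/json"}
    if token:
        headers["Authorization"] = "Bearer " + token
    return request.Request(endpoint_url, data=body, headers=headers)

# Usage against a live deployment (hypothetical URL):
# req = build_predict_request(url, image_pixels, token)
# with request.urlopen(req) as resp:
#     print(json.loads(resp.read()))
```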