Automating workflows with data science pipelines

In previous sections of this lab, you used a notebook to train and save your model.
If you want to automate these tasks, you can use Red Hat OpenShift AI pipelines.
Pipelines offer a way to automate the execution of multiple notebooks and Python code.
By using pipelines, you can execute long training jobs or retrain your models on a schedule without having to manually run them in a notebook.

In this section, you create a simple pipeline by using the GUI pipeline editor. The pipeline uses the notebooks that you used in previous sections to train a model and then save it to S3 storage.

Note: Your completed pipeline should look like the one in the 4 Train Save.pipeline file.

Feel free to run and use the 4 Train Save.pipeline file. If you would like to explore the pipeline editor, you can create your own pipeline by following the steps below.

Create a pipeline

  1. Open your workbench’s JupyterLab environment. If the launcher is not visible, click + to open it.

    Pipeline buttons
  2. Click Pipeline Editor.

    Pipeline Editor button

    You’ve created a blank pipeline!

  3. Set the default runtime image to use when you run your notebook or Python code.

    1. In the pipeline editor, click Open Panel.

      Open Panel
    2. Select the Pipeline Properties tab.

      Pipeline Properties Tab
    3. In the Pipeline Properties panel, scroll down to Generic Node Defaults and Runtime Image. Set the value to Tensorflow with Cuda and Python 3.9 (UBI 9).

      Pipeline Runtime Image0
  4. Save the pipeline.

Add nodes to your pipeline

Add some steps, or nodes, to your pipeline. Your two nodes will use the 1_experiment_train.ipynb and 2_save_model.ipynb notebooks.

  1. From the file-browser panel, drag the 1_experiment_train.ipynb and 2_save_model.ipynb notebooks onto the pipeline canvas.

    Drag and Drop Notebooks
  2. Click the output port of 1_experiment_train.ipynb and drag a connecting line to the input port of 2_save_model.ipynb.

    Connect Nodes
  3. Save the pipeline.

Specify the training file as a dependency

Set node properties to specify the training file as a dependency.

Note: If you don’t set this file dependency, the file won’t be included in the node when it runs, and the training job will fail.

  1. Click the 1_experiment_train.ipynb node.

    Select Node 1
  2. In the Properties panel, click the Node Properties tab.

  3. Scroll down to the File Dependencies section and then click Add.

    Add File Dependency
  4. Set the value to data/card_transdata.csv, which contains the data used to train your model.

    Set File Dependency Value
  5. Save the pipeline.
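
For context, the training notebook reads this CSV by a relative path inside the node’s container, which is why the file must be declared as a dependency. The following is a minimal sketch of that pattern, not the lab notebook itself; the label column name is an assumption:

  import pandas as pd

  # Each pipeline node runs in its own container. Declaring the CSV as a file
  # dependency is what makes this relative path resolve when the node runs.
  df = pd.read_csv("data/card_transdata.csv")

  # Placeholder split; "fraud" as the label column name is an assumption here.
  X = df.drop(columns=["fraud"])
  y = df["fraud"]
  print(X.shape, y.shape)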

Create and store the ONNX-formatted output file

In node 1, the notebook creates the models/fraud/1/model.onnx file. In node 2, the notebook uploads that file to the S3 storage bucket. You must declare model.onnx as an output file of node 1 so that it is available to node 2 when the pipeline runs.

  1. Select node 1 and then select the Node Properties tab.

  2. Scroll down to the Output Files section, and then click Add.

  3. Set the value to models/fraud/1/model.onnx.

    Set file dependency value
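
For reference, node 1 is expected to write this file at the end of training. The following is a minimal sketch of one way to do that with tf2onnx, assuming a Keras model; the tiny model below is only a placeholder, not the lab’s model:

  import os
  import tensorflow as tf
  import tf2onnx

  # Placeholder model; the lab notebook trains its own network.
  model = tf.keras.Sequential([
      tf.keras.layers.Dense(16, activation="relu", input_shape=(7,)),
      tf.keras.layers.Dense(1, activation="sigmoid"),
  ])

  # Write the ONNX file to the exact path declared under Output Files so that
  # the downstream save node can find and upload it.
  output_path = "models/fraud/1/model.onnx"
  os.makedirs(os.path.dirname(output_path), exist_ok=True)
  spec = (tf.TensorSpec((None, 7), tf.float32, name="input"),)
  tf2onnx.convert.from_keras(model, input_signature=spec, output_path=output_path)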

Configure the data connection to the S3 storage bucket

In node 2, the notebook uploads the model to the S3 storage bucket.

You must set the S3 storage bucket keys by using the secret created by the My Storage data connection that you set up in the Storage Data Connections section of this lab.

You can use this secret in your pipeline nodes without having to save the information in your pipeline code. This is important if, for example, you want to save your pipelines to source control without including any secret keys.

The secret is named aws-connection-my-storage.

If you called your data connection something other than My Storage, you can obtain the secret name in the Data Science dashboard by hovering over the resource information icon ? in the Data Connections tab.

My Storage Secret Name

The aws-connection-my-storage secret includes the following fields:

  • AWS_ACCESS_KEY_ID

  • AWS_DEFAULT_REGION

  • AWS_S3_BUCKET

  • AWS_S3_ENDPOINT

  • AWS_SECRET_ACCESS_KEY

You must set the secret name and key for each of these fields.

  1. Remove any pre-filled environment variables.

    1. Select node 2, and then select the Node Properties tab.

      Under Additional Properties, note that some environment variables have been pre-filled. The pipeline editor inferred from the notebook code that you would need them.

      Because you don’t want to save these values in your pipeline, remove all of these environment variables.

    2. Click Remove for each of the pre-filled environment variables.

      Remove Env Var
  2. Add the S3 bucket and keys by using the Kubernetes secret.

    1. Under Kubernetes Secrets, click Add.

      Add Kube Secret
    2. Enter the following values and then click Add.

      • Environment Variable: AWS_ACCESS_KEY_ID

      • Secret Name: aws-connection-my-storage

      • Secret Key: AWS_ACCESS_KEY_ID

        Secret Form
    3. Repeat the previous two substeps for each of the following sets of values:

      • Environment Variable: AWS_SECRET_ACCESS_KEY

        • Secret Name: aws-connection-my-storage

        • Secret Key: AWS_SECRET_ACCESS_KEY

      • Environment Variable: AWS_S3_ENDPOINT

        • Secret Name: aws-connection-my-storage

        • Secret Key: AWS_S3_ENDPOINT

      • Environment Variable: AWS_DEFAULT_REGION

        • Secret Name: aws-connection-my-storage

        • Secret Key: AWS_DEFAULT_REGION

      • Environment Variable: AWS_S3_BUCKET

        • Secret Name: aws-connection-my-storage

        • Secret Key: AWS_S3_BUCKET

  3. Save and rename the .pipeline file.
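
With the secrets in place, the save notebook can read the connection details from environment variables at run time, so no credentials ever appear in the notebook or the .pipeline file. The following is a minimal sketch of that pattern with boto3; the actual 2_save_model.ipynb code may differ:

  import os
  import boto3

  # These variables are injected from the aws-connection-my-storage secret
  # when the node runs, so nothing sensitive is stored in the pipeline.
  s3 = boto3.client(
      "s3",
      aws_access_key_id=os.environ["AWS_ACCESS_KEY_ID"],
      aws_secret_access_key=os.environ["AWS_SECRET_ACCESS_KEY"],
      endpoint_url=os.environ["AWS_S3_ENDPOINT"],
      region_name=os.environ["AWS_DEFAULT_REGION"],
  )

  # Upload the ONNX file produced by node 1 to the bucket named in the secret.
  s3.upload_file(
      "models/fraud/1/model.onnx",
      os.environ["AWS_S3_BUCKET"],
      "models/fraud/1/model.onnx",
  )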

Run the Pipeline

Upload the pipeline to the cluster and run it. You can do this directly from the pipeline editor, using either your own newly created pipeline or 4 Train Save.pipeline.

  1. Click the play button in the toolbar of the pipeline editor.

    Pipeline Run Button
  2. Enter a name for your pipeline.

  3. Verify that the Runtime Configuration is set to Data Science Pipeline.

  4. Click OK.

    If Data Science Pipeline is not available as a runtime configuration, you might have created your notebook before the pipeline server was available. To fix this, stop your notebook, edit its description, and then restart it. If you do this after the pipeline server has been created, the Data Science Pipeline option appears.

  5. Return to your data science project and expand the newly created pipeline.

    dsp pipeline complete
  6. Click the pipeline or the pipeline run to view the pipeline run in progress.

    pipeline run complete

The result should be a models/fraud/1/model.onnx file in your S3 bucket, which you can serve just as you did manually in the Preparing a model for deployment section.
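
If you want to confirm the upload outside of the pipeline run, you can list the bucket contents from your workbench. The following is a small sketch using boto3, assuming the My Storage data connection is attached to your workbench so the same environment variables are available there:

  import os
  import boto3

  s3 = boto3.client(
      "s3",
      aws_access_key_id=os.environ["AWS_ACCESS_KEY_ID"],
      aws_secret_access_key=os.environ["AWS_SECRET_ACCESS_KEY"],
      endpoint_url=os.environ["AWS_S3_ENDPOINT"],
      region_name=os.environ["AWS_DEFAULT_REGION"],
  )

  # List everything under the model prefix to confirm the pipeline run succeeded.
  response = s3.list_objects_v2(
      Bucket=os.environ["AWS_S3_BUCKET"], Prefix="models/fraud/"
  )
  for obj in response.get("Contents", []):
      print(obj["Key"], obj["Size"])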