Running your pipeline
Upload the pipeline to your cluster and run it. You can do so directly from the pipeline editor, using either your own newly created pipeline or the provided 6 Train Save.pipeline file.
- You set the S3 storage bucket keys, as described in Configuring the connection to storage.
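The workbench reads the bucket credentials from the data connection's environment variables. As a quick sanity check before running the pipeline, you can verify in a notebook cell that they are set. The variable names below are assumptions based on a typical data connection and may differ in your setup:

```python
import os

# Variable names assumed from a typical data connection setup;
# adjust them if your connection uses different keys.
REQUIRED_S3_VARS = [
    "AWS_ACCESS_KEY_ID",
    "AWS_SECRET_ACCESS_KEY",
    "AWS_S3_ENDPOINT",
    "AWS_S3_BUCKET",
]

def missing_s3_vars(env=None):
    """Return the names of required S3 variables that are unset or empty."""
    env = os.environ if env is None else env
    return [name for name in REQUIRED_S3_VARS if not env.get(name)]

if __name__ == "__main__":
    missing = missing_s3_vars()
    if missing:
        print("Missing S3 settings:", ", ".join(missing))
    else:
        print("All S3 settings are present.")
```

If any variable is reported missing, revisit the storage configuration step before running the pipeline.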
- Click the play button in the toolbar of the pipeline editor.
- Enter a name for your pipeline.
- Verify that the Runtime Configuration is set to Data Science Pipeline.
- Click OK.
If you see an error message stating that "no runtime configuration for Data Science Pipeline is defined", you might have created your workbench before the pipeline server was available. To address this error, verify that the pipeline server is configured, and then restart the workbench.
Follow these steps in the OpenShift AI dashboard:

- Check the status of the pipeline server:
  - In your Fraud Detection project, click the Pipelines tab.
  - If you see the Configure pipeline server option, follow the steps in Enabling data science pipelines.
  - If you see the Import a pipeline option, the pipeline server is configured. Continue to the next step.
- Restart your Fraud Detection workbench:
  - Click the Workbenches tab.
  - Click Stop, and then click Stop workbench.
  - After the workbench status is Stopped, click Start.
  - Wait until the workbench status is Running.
- Return to your workbench’s JupyterLab environment and run the pipeline.
- In the OpenShift AI dashboard, open your data science project and expand the newly created pipeline.
- Click View runs.
- Click your run, and then view the pipeline run in progress.
When the run completes, the models/fraud/1/model.onnx file is in your S3 bucket. You can serve the model, as described in Preparing a model for deployment.
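To confirm the artifact landed where expected, you can check for the object from a notebook cell. This is a sketch, assuming the boto3 package is available in the workbench image and the data connection's environment variables are set; the fallback bucket name is a placeholder:

```python
import os

def model_object_key(model_name: str, version: int = 1) -> str:
    """Build the object key the pipeline writes, e.g. models/fraud/1/model.onnx."""
    return f"models/{model_name}/{version}/model.onnx"

def model_exists(bucket: str, model_name: str, version: int = 1) -> bool:
    """Return True if the model file is present in the S3 bucket."""
    import boto3  # third-party; assumed available in the workbench image
    from botocore.exceptions import ClientError

    s3 = boto3.client(
        "s3",
        endpoint_url=os.environ["AWS_S3_ENDPOINT"],
        aws_access_key_id=os.environ["AWS_ACCESS_KEY_ID"],
        aws_secret_access_key=os.environ["AWS_SECRET_ACCESS_KEY"],
    )
    try:
        s3.head_object(Bucket=bucket, Key=model_object_key(model_name, version))
        return True
    except ClientError:
        return False

if __name__ == "__main__":
    # "my-storage" is a placeholder; substitute your own bucket name.
    bucket = os.environ.get("AWS_S3_BUCKET", "my-storage")
    print(model_exists(bucket, "fraud"))
```

If the check returns False, inspect the pipeline run logs for the training and save steps before moving on to deployment.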
(Optional) Running a data science pipeline generated from Python code