Enabling data science pipelines
Note: If you do not intend to complete the pipelines section of the workshop, you can skip this step and move on to the next section, Create a Workbench.
In this section, you prepare your workshop environment so that you can use data science pipelines.
In this workshop, you implement an example pipeline by using the JupyterLab Elyra extension. With Elyra, you can create a visual end-to-end pipeline workflow that can be executed in OpenShift AI.
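For context, Elyra's visual editor generates a pipeline definition that the OpenShift AI pipeline backend, which is based on Kubeflow Pipelines, then executes. The following is a minimal sketch of what a comparable two-step pipeline looks like when written directly against the kfp SDK; the step names and base image are illustrative assumptions, and you do not need this code for the workshop.

```python
# A minimal, illustrative kfp (Kubeflow Pipelines) sketch of a two-step
# pipeline, similar in shape to what Elyra produces from its visual editor.
# Component names and the base image are assumptions, not workshop assets.
from kfp import dsl


@dsl.component(base_image="python:3.11")
def ingest_data() -> str:
    # Placeholder step: a real pipeline would pull training data
    # from object storage here.
    return "data-ready"


@dsl.component(base_image="python:3.11")
def train_model(status: str):
    # Placeholder step: a real pipeline would train the model here.
    print(f"Training started after ingest reported: {status}")


@dsl.pipeline(name="fraud-detection-sketch")
def fraud_detection_pipeline():
    ingest_task = ingest_data()
    train_model(status=ingest_task.output)
```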
Prerequisites:

- You have installed local object storage buckets and created data connections, as described in Storing data with data connections. A quick way to sanity-check that connection is sketched below.
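If you want to confirm that the object storage behind the Pipeline Artifacts data connection is reachable, a check along the following lines can help. This is a hedged sketch: the endpoint URL, credentials, and bucket name are placeholders; substitute the values from your own data connection.

```python
# A hedged sanity check for the S3-compatible storage behind a data
# connection. All connection values below are placeholders; copy the real
# ones from the Pipeline Artifacts data connection in your project.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://<object-storage-endpoint>:9000",  # placeholder
    aws_access_key_id="<access-key>",                      # placeholder
    aws_secret_access_key="<secret-key>",                  # placeholder
)

# List the first few objects to confirm the bucket exists and is readable.
response = s3.list_objects_v2(Bucket="<pipeline-artifacts-bucket>", MaxKeys=5)
for obj in response.get("Contents", []):
    print(obj["Key"])
```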
Procedure:

- In the OpenShift AI dashboard, click Data Science Projects and then select Fraud Detection.
- Click the Pipelines tab.
- Click Configure pipeline server.
- In the Configure pipeline server form, in the Access key field next to the key icon, click the dropdown menu and then select Pipeline Artifacts. This selection populates the form with the credentials of the Pipeline Artifacts data connection.
- Leave the database configuration as the default.
- Click Configure pipeline server.
- Wait until the loading spinner disappears and Start by importing a pipeline is displayed.

  You must wait until the pipeline server configuration is complete before you create your workbench. If you create your workbench before the pipeline server is ready, the workbench cannot submit pipelines to it.

  If you have waited more than 5 minutes and the pipeline server configuration has not completed, you can delete the pipeline server and create it again. You can also ask your OpenShift AI administrator to verify that self-signed certificates are added to your cluster, as described in Working with certificates.
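If the configuration stalls, an administrator can also inspect the pipeline server pods directly. The following sketch uses the official Kubernetes Python client; the fraud-detection namespace is an assumption based on the project name used in this workshop, and equivalent oc or kubectl commands work just as well.

```python
# A hedged sketch for administrators: list pods in the project namespace to
# see whether the pipeline server components are running. The namespace name
# is an assumption; use your actual project namespace.
from kubernetes import client, config

config.load_kube_config()  # assumes you are already logged in to the cluster

v1 = client.CoreV1Api()
for pod in v1.list_namespaced_pod(namespace="fraud-detection").items:
    print(pod.metadata.name, pod.status.phase)
```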
Verification:

- Navigate to the Pipelines tab for the project.
- Next to Import pipeline, click the action menu (⋮) and then select View pipeline server configuration. An information box opens and displays the object storage connection information for the pipeline server.
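Once the pipeline server is running, the connection details shown in this box can also be used programmatically. The following is a hedged sketch using the kfp client from a workbench; the route URL and token are placeholders, not values provided by the workshop.

```python
# A hedged sketch: connect to the pipeline server with the kfp client.
# Both values below are placeholders; use your pipeline server's route and
# an OpenShift bearer token for your user.
from kfp import Client

client = Client(
    host="https://<pipeline-server-route>",     # placeholder route URL
    existing_token="<openshift-bearer-token>",  # placeholder token
)

# Listing experiments is a cheap call to confirm the connection works.
print(client.list_experiments())
```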