Confidence-Check Pipeline

What will the pipeline do?

To make sure that everything works as expected, and that the model has not been tampered with, we will create a confidence-check pipeline that tests the model through its endpoint.
We will test the response time, the response quality, and that the model hash has not changed.
To make sure the model keeps behaving as expected over time, we will schedule that pipeline to run at regular intervals.
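To give a feel for what such a check involves, the sketch below times a single request against the model endpoint. It is only a minimal illustration, assuming a hypothetical INFERENCE_SERVER_URL environment variable, payload format, and threshold; the real checks live in the Python files used by the pipeline nodes.

```python
import os
import time

import requests

# Hypothetical endpoint and payload; the real values depend on how the
# model is served and are defined in the pipeline's Python files.
INFERENCE_URL = os.environ.get("INFERENCE_SERVER_URL", "http://llm:8080/v1/completions")
MAX_RESPONSE_TIME = 5.0  # seconds; illustrative threshold

def check_response_time() -> bool:
    """Send one prompt to the model endpoint and verify it answers quickly enough."""
    payload = {"prompt": "What is the coverage limit of a standard policy?", "max_tokens": 50}
    start = time.perf_counter()
    response = requests.post(INFERENCE_URL, json=payload, timeout=30)
    elapsed = time.perf_counter() - start
    response.raise_for_status()
    print(f"Response time: {elapsed:.2f}s")
    return elapsed <= MAX_RESPONSE_TIME

if __name__ == "__main__":
    assert check_response_time(), "Model responded too slowly"
```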

Deploy a confidence-check pipeline

In the parasol-insurance/lab-materials/03/06 folder there are two pipeline files: confidence_check.pipeline and confidence_check.yaml.

The .pipeline file can be opened in Elyra to be visually modified and executed, while the .yaml file can be imported into the pipeline server through the RHOAI Dashboard.
Here we will be running the pipeline through Elyra.

What is Elyra?

Elyra is a visual interface for building pipelines. Think of it as your standard code editor, but drag-and-drop. This also means that nothing actually executes inside Elyra itself; it only produces a pipeline definition (a JSON file, in Elyra's case) that is later sent elsewhere for execution.

In Elyra, you can drag and drop Python, notebook, or R files onto the canvas and then connect them into a workflow.
There is also a button on the top-right side that expands additional settings for the pipeline and for each step.

[Image: Elyra menu]

Using Elyra, you can get started quickly with prototyping and running pipelines.

Ad-hoc execution

Running the pipeline through Elyra is an ad-hoc execution: it runs once, right away. Importing the .yaml file into the pipeline server, by contrast, only registers the pipeline and does not execute it automatically.

  1. Start by going to your running workbench

  2. Navigate to the folder parasol-insurance/lab-materials/03/06

  3. Open up the confidence_check.pipeline file

  4. Here we can see that the pipeline consists of 3 checks:

    1. response quality check

    2. response time check

    3. security check

  5. Feel free to peek into each of the Python files by double-clicking on the nodes to see what they do (a simplified sketch of the security check appears after this list).

  6. After the tests have been run, we have a final function that will summarize the results and log them.

  7. To run the pipeline, press the Play button in the menu bar.

    [Image: Elyra confidence-check pipeline]
  8. You may get a warning that the pipeline has unsaved changes. This is normal; just press Save and Submit if this happens.

    [Image: Unsaved changes warning]
  9. In the next popup, leave the name unchanged and click OK:

    [Image: Run pipeline dialog]
  10. When you get a popup that says Job submission to Data Science Pipelines succeeded, click the link Run details to see how the pipeline is progressing.
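As an illustration of the security check mentioned above, verifying that the model has not been tampered with boils down to comparing a freshly computed hash of the model files against a known-good value. The snippet below is only a sketch of that idea, assuming a hypothetical MODEL_PATH and EXPECTED_HASH; the actual node in the pipeline may fetch the files or the hash differently.

```python
import hashlib
from pathlib import Path

# Hypothetical values; the real check defines where the model files live
# and what their trusted hash is.
MODEL_PATH = Path("/models/llm")
EXPECTED_HASH = "replace-with-known-good-sha256"

def compute_model_hash(model_dir: Path) -> str:
    """Hash every file under the model directory in a stable order."""
    digest = hashlib.sha256()
    for file in sorted(model_dir.rglob("*")):
        if file.is_file():
            with file.open("rb") as f:
                # Read in 1 MiB chunks so large weight files fit in memory.
                for chunk in iter(lambda: f.read(1 << 20), b""):
                    digest.update(chunk)
    return digest.hexdigest()

def security_check() -> bool:
    actual = compute_model_hash(MODEL_PATH)
    print(f"Model hash: {actual}")
    return actual == EXPECTED_HASH
```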

Schedule execution

We can also schedule an execution so that the confidence check is executed at regular intervals.
To do that:

  1. Go back to the Red Hat OpenShift AI Data Science Project

  2. Find the pipeline you just ran

  3. Click the 3 dots at the very end of the line, and click "Create run".

    [Image: Create run menu]
  4. On the next screen, choose a name, select a Periodic run that triggers every Day, and click Create:

    [Image: Daily run configuration]
  5. We can now leave the confidence-check pipeline alone.

  6. It will run daily and inform us if anything goes wrong with our LLM.
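The dashboard is the simplest way to set up this daily schedule, but roughly the same thing can also be done programmatically with the Kubeflow Pipelines SDK. The sketch below is only an illustration of that alternative, assuming a hypothetical pipeline-server route and token, and the compiled confidence_check.yaml; attribute and parameter names can differ slightly between kfp releases.

```python
import kfp

# Hypothetical route and token; in RHOAI these come from the Data Science
# Pipelines server route and an OpenShift OAuth token.
client = kfp.Client(
    host="https://ds-pipeline-dspa-parasol-insurance.apps.example.com",
    existing_token="sha256~example-token",
)

experiment = client.create_experiment(name="confidence-checks")

# Schedule the compiled pipeline to run once a day at midnight.
client.create_recurring_run(
    experiment_id=experiment.experiment_id,  # older kfp releases expose `.id` instead
    job_name="confidence-check-daily",
    cron_expression="0 0 0 * * *",  # 6-field cron (with seconds): every day at 00:00
    pipeline_package_path="confidence_check.yaml",
)
```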