Testing the model API

After you deploy the model, you can test its API endpoints.

Procedure
  1. In the OpenShift AI dashboard, navigate to the project details page and click the Deployments tab.

  2. Take note of the model’s Inference endpoint URL. You need this information when you test the model API.

    If the Inference endpoint field includes an Internal endpoint details link, click the link to open a text box with the URL details, and then take note of the restUrl value.

    [Figure: Model inference endpoint]

    NOTE: When you test the model API from inside a workbench, you must include port 8888 as part of the endpoint. For example: http://fraud-predictor.fraud-detection.svc.cluster.local:8888

  3. Return to the JupyterLab environment and try out your new endpoint.

    Follow the directions in 3_rest_requests.ipynb to try a REST API call.
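The request that the notebook sends can be sketched as follows. This is a minimal illustration, not the exact code in 3_rest_requests.ipynb: the base URL, model name, input tensor name, and input values below are placeholder assumptions, so substitute the restUrl you noted in step 2 and the names from your own deployment. The body follows the KServe v2 REST inference protocol, which OpenShift AI model serving commonly exposes; check your model server's documentation if your endpoint uses a different format.

```python
import json

# Placeholder endpoint -- replace with the restUrl from step 2.
# When calling from inside a workbench, include port 8888 (see the note above).
base_url = "http://fraud-predictor.fraud-detection.svc.cluster.local:8888"
model_name = "fraud"  # assumed model name; use the name from your deployment


def build_infer_request(values):
    """Build a KServe v2 inference request body for a single sample."""
    return {
        "inputs": [
            {
                "name": "dense_input",        # assumed input tensor name
                "shape": [1, len(values)],    # one sample, len(values) features
                "datatype": "FP32",
                "data": values,
            }
        ]
    }


# Placeholder feature values for one transaction.
payload = build_infer_request([0.31, 1.94, 1.0, 0.0, 0.0])
url = f"{base_url}/v2/models/{model_name}/infer"

print(url)
print(json.dumps(payload, indent=2))

# To actually send the request from the workbench, uncomment:
# import requests
# response = requests.post(url, json=payload)
# print(response.json()["outputs"][0]["data"])
```

The POST itself is left commented out so you can first confirm that the URL and payload look right for your deployment before sending traffic to the endpoint.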