Testing the model API

Now that you’ve deployed the model, you can test its API endpoints.

Procedure
  1. In the OpenShift AI dashboard, navigate to the project details page and click the Models tab.

  2. Take note of the model’s Inference endpoint URL. You need this URL when you test the model API.

    Figure: Model inference endpoint

    If the Inference endpoint field contains an Internal Service link, click the link to open a text box that shows the URL.

  3. Return to the Jupyter environment and try out your new endpoint.

    If you deployed your model with multi-model serving, follow the directions in 3_rest_requests_multi_model.ipynb to try a REST API call and 4_grpc_requests_multi_model.ipynb to try a gRPC API call.

    If you deployed your model with single-model serving, follow the directions in 5_rest_requests_single_model.ipynb to try a REST API call.
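The notebooks walk through the full request flow; as an illustration of what a REST call looks like, the sketch below builds a KServe v2 inference payload and the request URL. The endpoint URL, model name, input tensor name, and input values here are placeholders (assumptions for illustration); substitute the Inference endpoint you copied from the dashboard and the names your model actually uses.

```python
import json

# Placeholder values -- replace with the inference endpoint URL copied
# from the dashboard and your deployed model's name.
INFER_ENDPOINT = "https://example.apps.cluster.local"  # assumption
MODEL_NAME = "my-model"                                # assumption


def build_v2_request(data):
    """Build a KServe v2 REST inference payload for one FP32 input tensor."""
    return {
        "inputs": [
            {
                "name": "dense_input",        # assumed input tensor name
                "shape": [1, len(data)],      # batch of one sample
                "datatype": "FP32",
                "data": data,
            }
        ]
    }


# Example input values (placeholders).
payload = build_v2_request([0.31, 1.0, -0.72, 0.0, 1.0])
url = f"{INFER_ENDPOINT}/v2/models/{MODEL_NAME}/infer"

# To send the request from inside the Jupyter environment
# (requires network access to the model server):
#   import requests
#   response = requests.post(url, json=payload)
#   print(response.json()["outputs"][0]["data"])
print(url)
print(json.dumps(payload, indent=2))
```

The gRPC notebook follows the same v2 protocol but sends the tensor over a generated gRPC stub instead of an HTTP POST.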