Testing the model API
Now that you’ve deployed the model, you can test its API endpoints.
- In the OpenShift AI dashboard, navigate to the project details page and click the Models tab.
- Take note of the model's Inference endpoint URL. You need this information when you test the model API.
  If the Inference endpoint field contains an Internal Service link, click the link to open a text box that shows the URL.
- Return to the Jupyter environment and try out your new endpoint.
  If you deployed your model with multi-model serving, follow the directions in 3_rest_requests_multi_model.ipynb to try a REST API call and in 4_grpc_requests_multi_model.ipynb to try a gRPC API call.
  If you deployed your model with single-model serving, follow the directions in 5_rest_requests_single_model.ipynb to try a REST API call.
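As a rough illustration of what such a REST call looks like, here is a minimal sketch of a KServe v2-style inference request. The endpoint URL, input tensor name, shape, datatype, and data values below are all placeholders: substitute the Inference endpoint URL you noted from the dashboard and the input format your model actually expects (the notebooks above show the exact values for the tutorial model).

```python
import json
import urllib.request

# Placeholder: replace with the Inference endpoint URL shown on the Models tab.
ENDPOINT = "https://example.apps.my-cluster.com/v2/models/my-model/infer"

# KServe v2-style request body: one float32 input tensor of shape [1, 4].
# The tensor name, shape, and data are hypothetical examples.
payload = {
    "inputs": [
        {
            "name": "input-0",
            "shape": [1, 4],
            "datatype": "FP32",
            "data": [[0.1, 0.2, 0.3, 0.4]],
        }
    ]
}

def infer(endpoint: str = ENDPOINT) -> dict:
    """POST the inference payload and return the decoded JSON response."""
    req = urllib.request.Request(
        endpoint,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Calling `infer()` from a notebook cell should return a JSON body containing an `outputs` list with the model's predictions, assuming the endpoint is reachable from the Jupyter environment.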