Testing the model API
Now that you’ve deployed the model, you can test its API endpoints.
Procedure
- In the OpenShift AI dashboard, navigate to the project details page and click the Models tab.
- Take note of the model’s Inference endpoint. You need this information when you test the model API.
- Return to the Jupyter environment and try out your new endpoint.
  - If you deployed your model with multi-model serving, follow the directions in 3_rest_requests_multi_model.ipynb to try a REST API call and 4_grpc_requests_multi_model.ipynb to try a gRPC API call.
  - If you deployed your model with single-model serving, follow the directions in 5_rest_requests_single_model.ipynb to try a REST API call.
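As a rough sketch of what a REST API call against the inference endpoint looks like, the snippet below builds a request in the KServe v2 (Open Inference Protocol) format. The endpoint URL, model name, and input tensor name are placeholders, not values from this procedure — substitute the Inference endpoint you noted in the Models tab and the tensor names your model actually expects (the notebooks above use the correct values for the example model).

```python
import json

# Placeholder values -- replace with the Inference endpoint from the Models
# tab and your deployed model's name.
INFER_ENDPOINT = "https://my-model-route.example.com"
MODEL_NAME = "my-model"

def build_infer_request(endpoint: str, model: str, features: list) -> tuple:
    """Build a KServe v2 REST inference request (URL and JSON body)."""
    url = f"{endpoint}/v2/models/{model}/infer"
    body = {
        "inputs": [
            {
                "name": "dense_input",  # input tensor name; depends on your model
                "shape": [1, len(features)],
                "datatype": "FP32",
                "data": features,
            }
        ]
    }
    return url, body

url, body = build_infer_request(INFER_ENDPOINT, MODEL_NAME, [0.31, 1.0, -0.72, 0.0, 1.0])
print(url)
print(json.dumps(body, indent=2))

# To actually send the request (requires network access to the endpoint):
#   import requests
#   response = requests.post(url, json=body, timeout=10)
#   print(response.json()["outputs"][0]["data"])
```

The response mirrors the request structure: an "outputs" list of tensors, each with a name, shape, datatype, and the predicted data values.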