## Deploying a model

Now that the model is accessible in storage and saved in the portable ONNX format, you can use an OpenShift AI model server to deploy it as an API. OpenShift AI offers two options for model serving:

- Multi-model serving platform
- Single-model serving platform

Review the descriptions of each option in the interface.

### Procedure

1. In the OpenShift AI dashboard, navigate to **Models and model servers**.
2. Under **Single-model serving platform**, click **Deploy model**.
3. In the form:
   - Fill out the **Model name** with the value `fraud`.
   - Select the **Serving runtime**: `OpenVINO Model Server`.
   - Select the **Model framework**: `onnx - 1`.
   - Set the **Model server replicas** to `1`.
   - Select the **Model server size**: `Lab Custom Small`.
   - Select the **Existing data connection**: `My Storage`.
   - Enter the path to your uploaded model: `models/fraud`. The path does not include `1/model.onnx`, because the OpenVINO Model Server expects the integer version directory as a subpath and resolves it on its own.
4. Click **Deploy**.
5. Wait for the model to deploy and for the **Status** to show a green checkmark. This might take a while if the cluster is particularly busy, but it should take less than two minutes.
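If you prefer to watch the rollout from a terminal instead of the dashboard, you can query the deployment with the `oc` CLI. The single-model serving platform is backed by KServe, so the deployed model appears as an `InferenceService` resource; the namespace below is a placeholder for your own data science project, and this sketch assumes you are already logged in to the cluster.

```shell
# Hypothetical status check; replace <my-project> with your
# data science project's namespace.
oc get inferenceservice fraud -n <my-project>
```

The READY column reporting `True` corresponds to the green checkmark in the dashboard.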
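The version-subpath convention can be sketched with a local mock of the storage layout. The file created here is only a placeholder standing in for the real ONNX model; the point is the directory shape: the data connection path points at `models/fraud`, and the OpenVINO Model Server discovers the numeric version directory (`1`) underneath it.

```shell
# Mock of the object layout the OpenVINO Model Server expects.
# The deploy form's path field gets "models/fraud"; the server
# then looks inside for <version>/model.onnx.
mkdir -p models/fraud/1
touch models/fraud/1/model.onnx   # placeholder for the real ONNX file

# List what the server would find under the configured path:
find models -type f
# prints: models/fraud/1/model.onnx
```

If the model had been uploaded as `models/fraud/model.onnx` (no version directory), the server would find no version subpath to serve, which is why the tutorial has you keep the `1/` level in storage but omit it from the form.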