# Model Serving

At this point, we need to deploy the model into RHOAI model serving. We will create another data connection with almost identical information, but we will change the bucket name from `userX` to `models`.

## Create a Data Connection

In your Data Science project, create a data connection that refers to the shared MinIO instance. Here is the information you need to enter:

- Name: `Shared Minio - model`
- Access Key: `minio`
- Secret Key: `minio123`
- Endpoint: `http://minio.ic-shared-minio.svc.cluster.local:9000/`
- Region: `none`
- Bucket: `models`

The result should look like:

## Create a Model Server

In your project, create a model server by clicking **Add model server**. Here is the information you need to enter:

- Model server name: `My first Model Server`
- Serving runtime: `OpenVINO Model Server`
- Number of model server replicas to deploy: `1`
- Model server size: `Standard`
- Accelerator: `None`
- Model route: unchecked
- Token authorization: unchecked

The result should look like:

You can click on **Add** to create the model server.

## Deploy the Model

In your project, under **Models and model servers**, click **Deploy model**. Here is the information you will need to enter:

- Model name: `My first Model`
- Model server: `My first Model Server`
- Model server - Model framework: `onnx-1`
- Existing data connection - Name: `Shared Minio - model`
- Existing data connection - Path: `accident/`

The result should look like:

Click on **Deploy**. If the model is deployed successfully, you will see its status turn green after 15 to 30 seconds.

We will now confirm that the model is indeed working by querying it!

## Querying the Served Model

Once the model is served, we can use it as an endpoint that can be queried. We'll send a request to it and get a result back. Unlike our earlier notebook-based version, this endpoint is available to anyone working within our cluster, whether colleagues or applications.

First, we need to get the URL of the model server. To do this, click on the **Internal Service** link under the **Inference endpoint** column.
In the popup, you will see a few URLs for our model server. Note or copy the **RestUrl**, which should be something like `http://modelmesh-serving.userX:8008`.

We will now use this URL to query the model. In your running workbench, navigate to the folder `insurance-claim-processing/lab-materials/04`, then look for (and open) the notebook called `04-05-model-serving.ipynb`.

Execute the cells of the notebook, and make sure you understand what is happening.
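To illustrate what the notebook does, here is a minimal sketch of how a client could query the served model from inside the cluster. ModelMesh exposes OpenVINO Model Server models over the KServe v2 REST protocol (`POST {RestUrl}/v2/models/{model-name}/infer`). The model name, input tensor name (`images`), and input shape below are hypothetical placeholders; substitute the values shown for your deployed model, and use your own RestUrl:

```python
import json
import urllib.request

# Hypothetical values -- replace with the RestUrl you copied and the
# name RHOAI shows for your deployed model:
REST_URL = "http://modelmesh-serving.userX:8008"
MODEL_NAME = "my-first-model"

# KServe v2 inference request body. The input tensor name, shape, and
# pixel data below are placeholders (assumptions); the real values
# depend on the ONNX model being served.
payload = {
    "inputs": [
        {
            "name": "images",           # input tensor name (assumption)
            "shape": [1, 3, 640, 640],  # batch, channels, height, width (assumption)
            "datatype": "FP32",
            "data": [0.0] * (3 * 640 * 640),  # placeholder image data
        }
    ]
}

request = urllib.request.Request(
    f"{REST_URL}/v2/models/{MODEL_NAME}/infer",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

# Uncomment to send the request; the URL only resolves inside the cluster:
# with urllib.request.urlopen(request) as response:
#     result = json.load(response)
#     print(result["outputs"])
```

The notebook does the same thing with a real image: it encodes the image into the `data` field, posts the request, and decodes the returned `outputs` tensors into predictions.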