Comparing two LLMs

So far in this lab, we have used the model Mistral-7B Instruct v2. Although lighter than some other models, it is still quite heavy, and we need a large GPU to run it. Would we get results just as good from a smaller model running on a CPU only? Let's try!

In this exercise, we'll pit our previous model against a much smaller LLM called flan-t5-large. We'll compare the results and see whether the smaller model is good enough for our use case.

From the parasol-insurance/lab-materials/03 folder, please open the notebook called 03-04-comparing-model-servers.ipynb and follow the instructions.

When done, you can close the notebook and head to the next page.
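The notebook drives the actual comparison, but the core idea can be sketched as a small harness that sends the same prompt to each model and records the answer and the latency. The function and the stub callables below are illustrative placeholders, not the lab's real client code; in the notebook they would be replaced by clients for the Mistral-7B Instruct v2 and flan-t5-large model servers.

```python
import time

def compare_models(prompt, generators):
    """Send the same prompt to each model and record output and latency.

    generators: dict mapping a model name to a callable that takes a
    prompt string and returns a completion string (stubs in this sketch).
    """
    results = {}
    for name, generate in generators.items():
        start = time.perf_counter()
        answer = generate(prompt)
        results[name] = {
            "answer": answer,
            "seconds": time.perf_counter() - start,
        }
    return results

# Stand-in callables; real clients for the GPU-hosted Mistral-7B
# Instruct v2 and the CPU-hosted flan-t5-large would go here.
stubs = {
    "mistral-7b-instruct-v2": lambda p: f"[mistral] {p}",
    "flan-t5-large": lambda p: f"[flan-t5] {p}",
}

report = compare_models("Summarize the claim in one sentence.", stubs)
for name, result in report.items():
    print(name, round(result["seconds"], 3), result["answer"][:40])
```

Running both models on identical prompts, side by side, makes it easy to judge whether the quality drop from the smaller model is acceptable given its much lower hardware cost.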