Execution

Start by going to your Data Science project and creating a new workbench with these settings:

  • Image: TrustyAI

  • Size: Standard

Feel free to pick any name for the workbench.

After that’s done, clone this repository into the workbench:

https://github.com/rh-aiservices-bu/ai-mazing-race.git

Go to the folder lab-materials/07, where you will find the model together with code to help you analyze it and build new ones.

Task 1 - Understand what’s wrong

Run through the code in notebook 1_model_training_and_analyzis to build a model and see what might be wrong with it. How does it perform on the evaluation? Are there any features that seem suspiciously impactful?
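If you want a quick numeric check alongside the notebook's own output, the sketch below shows the kind of inspection to run. It assumes the training cells leave you with a trained scikit-learn-style `model` and a test split in `X_test` (a pandas DataFrame) and `y_test`; those names are assumptions, so adapt them to whatever the notebook actually uses.

```python
# Minimal sketch, assuming `model`, `X_test` (pandas DataFrame) and `y_test`
# exist after running the training cells; adjust the names to the notebook.
import numpy as np
from sklearn.metrics import mean_squared_error
from sklearn.inspection import permutation_importance

preds = model.predict(X_test)

# How big is the error, and does the model ever predict a negative price?
rmse = np.sqrt(mean_squared_error(y_test, preds))
print(f"RMSE: {rmse:.2f}")
print(f"Negative predictions: {(preds < 0).sum()} of {len(preds)}")

# Which features drive the predictions? A feature that dominates here, or
# that has no plausible business relationship to price, is a red flag.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in sorted(zip(X_test.columns, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name}: {score:.3f}")
```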

Task 2 - Propose and test changes

Once you have some thoughts on what might be wrong, go back to the start of the notebook and change which features are used and how the data is processed. The goal is to get as small an error as possible while also producing no (or few) negative prices, since they are typically bad for business. You can change the datapoint you are analyzing with Counterfactual and SHAP by changing the value of DATAPOINT; do this to test both a point that predicts below 0 and the worst prediction.
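To find useful values for DATAPOINT, you can locate the rows that predict below 0 and the row with the worst prediction programmatically. A minimal sketch, again assuming the `model`, `X_test`, and `y_test` names used above:

```python
# Hedged sketch for choosing DATAPOINT; the variable names are assumptions.
import numpy as np

preds = model.predict(X_test)
actuals = np.asarray(y_test)
errors = np.abs(preds - actuals)

# Candidate rows whose predicted price falls below 0
negative_idx = np.where(preds < 0)[0]
print("Rows predicted below 0:", negative_idx[:10])

# The single worst prediction by absolute error
worst_idx = int(np.argmax(errors))
print("Worst prediction at row:", worst_idx,
      "| predicted:", preds[worst_idx], "| actual:", actuals[worst_idx])

# Set DATAPOINT to one of these before re-running the Counterfactual and SHAP cells.
DATAPOINT = worst_idx
```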

If you re-run the notebook, make sure to restart the kernel so that no results from the previous session carry over into the new one. One very easy way to do this is to click this fast-forward button:

[Image: run all button]
