Workshop: Model Training, Tuning & Evaluation With Google Teachable Machine
Background
When a designer wants to evaluate a design, they typically create a prototype and observe how people experience it. Machine learning practitioners work differently: they evaluate their work using statistical techniques and quantitative metrics.
As a designer working on an ML-enabled product, you need to understand which metrics ML practitioners care about, so that you can ensure the focus is not only on model performance in narrow terms, but also in a broader sense, with regard to responsibility.
Learning Objectives
After completing this workshop, you will be able to tune a machine learning model and will understand some basic concepts used in model training, tuning, and evaluation.
Instructions
- Combine the images each of your group’s members has collected during last week’s homework into one set
- Make sure the dataset you are working with relates to your design project
- Go to Google Teachable Machine, click “Get Started” and create a new Image Project
- Create your classes
- Upload your training data to each class
- Open the “advanced” section and click “under the hood”
- Train your model
- Watch the graphs as they are created
- Calculate “accuracy per class” and “confusion matrix”
- Use the explanations under “vocab” and the question mark icons next to the various screen elements to try and understand what is going on
- Change epoch, batch size, and learning rate to try and improve the performance of your model
- Take a screenshot each time you change the settings and retrain your model, and put the screenshot on your Miro board
- When you have at least three screenshots, go to Miro, and add notes next to each part of the “under the hood” panel
- In each note, explain what it shows, and why you think the metrics are the way they are
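The two metrics under “under the hood” can also be computed by hand, which helps when writing your notes. Here is a minimal sketch in Python; the class names and the label/prediction lists are made-up examples, not output from Teachable Machine:

```python
# Hypothetical true labels and model predictions for a 3-class image model
# (made-up data, for illustration only).
labels      = ["cat", "cat", "dog", "dog", "bird", "bird", "cat", "dog"]
predictions = ["cat", "dog", "dog", "dog", "bird", "cat",  "cat", "bird"]

classes = sorted(set(labels))

# Accuracy per class: of all samples whose true class is C,
# what fraction did the model predict as C?
accuracy_per_class = {}
for c in classes:
    total = sum(1 for t in labels if t == c)
    correct = sum(1 for t, p in zip(labels, predictions) if t == c and p == c)
    accuracy_per_class[c] = correct / total

# Confusion matrix: rows are true classes, columns are predicted classes.
# Off-diagonal counts show which classes the model mixes up.
confusion = {t: {p: 0 for p in classes} for t in classes}
for t, p in zip(labels, predictions):
    confusion[t][p] += 1

print(accuracy_per_class)
for t in classes:
    print(t, [confusion[t][p] for p in classes])
```

In this toy example, one “cat” image is predicted as “dog”, so the confusion matrix has a 1 in the cat-row/dog-column cell, and the per-class accuracy for “cat” drops to 2/3. The same reading applies to the matrix Teachable Machine shows you.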
Product
Upon completion of this activity you will have produced the following:
- A trained model in Google Teachable Machine, uploaded to the cloud, that pertains to your design assignment.
- On your Miro board – three screenshots of Teachable Machine, each with a different set of training settings:
- For each screenshot make sure the advanced training settings are visible (epochs, batch size, etc.)
- Make sure the “under the hood” panel is visible, and the accuracy per class and confusion matrix are shown
- Add a note next to each item of the “under the hood” panel explaining why you think the metrics shown are the way they are – four notes in total: (1) accuracy per class; (2) confusion matrix; (3) accuracy per epoch; (4) loss per epoch
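To build intuition for the loss-per-epoch graph and the effect of the learning rate, here is a toy training loop, a sketch only, not Teachable Machine’s actual code. It fits a single-parameter model by gradient descent and records the loss after each epoch, so you can see the curve shrink the way the “under the hood” graph does:

```python
# Toy illustration of epochs and learning rate (not Teachable Machine's code).
# We fit y = w * x to data generated with w = 2, using mean squared error.
data = [(x, 2.0 * x) for x in [1.0, 2.0, 3.0, 4.0]]

def train(epochs, learning_rate):
    w = 0.0  # initial guess for the single model parameter
    losses = []
    for _ in range(epochs):
        # Full-batch gradient descent: one update per epoch over all samples.
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= learning_rate * grad
        # Mean squared error after this epoch's update.
        loss = sum((w * x - y) ** 2 for x, y in data) / len(data)
        losses.append(loss)
    return w, losses

w, losses = train(epochs=50, learning_rate=0.05)
print(w)          # approaches 2.0
print(losses[:3]) # loss shrinks epoch by epoch
```

Try `learning_rate=0.2` on this toy problem and the loss grows each epoch instead of shrinking, the same kind of divergence you may see in Teachable Machine if the learning rate is set too high. Teachable Machine additionally splits each epoch into mini-batches (the “batch size” setting), so it makes several smaller updates per epoch rather than one full-batch update as here.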
Follow-up
We will discuss your findings during the plenary later in the day.