Workshop: Model Training, Tuning & Evaluation With Google Teachable Machine
Background
When designers want to evaluate a design, they typically create a prototype and observe how people experience it. Machine learning practitioners work differently: they assess their work using statistical techniques and quantitative metrics.
As a designer working on an ML-enabled product, you need to understand which metrics ML practitioners care about, so that you can ensure the focus is not only on model performance in a narrow sense but also on broader concerns such as responsibility.
Learning Objectives
After completing this workshop, you will be able to tune a machine-learning model and understand some of the basic concepts used in model training, tuning, and evaluation.
Instructions
- Combine the images each of your group’s members has collected during last week’s homework into one set.
- Ensure the dataset you are working with relates to your design project.
- Go to Google Teachable Machine, click “Get Started,” and create a new Image Project.
- Create your classes.
- Upload your training data to each class.
- Expand the “Advanced” panel and click “Under the hood.”
- Train your model.
- Watch the graphs as they are created.
- Click “Calculate accuracy per class” and “Calculate confusion matrix.”
- Understand what is happening using the explanations under “vocab” and the question mark icons next to the various screen elements.
- Change the number of epochs, the batch size, and the learning rate to try to improve your model’s performance.
- Each time you change the settings and retrain your model, take a screenshot and put it on your board.
- When you have at least three screenshots, go to your online workspace board and add notes next to each part of the “under the hood” panel.
- In each note, explain what the panel shows and why you think the metrics turned out the way they did.
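To make sense of the two metrics you calculate above, it can help to see how they are computed. The sketch below uses plain Python and made-up labels for a hypothetical two-class project; it is an illustration of the general definitions, not Teachable Machine’s actual implementation. The confusion matrix counts, for each true class, how often each class was predicted; accuracy per class is the count on the matrix’s diagonal divided by the row total.

```python
def confusion_matrix(true_labels, predicted_labels, classes):
    """Rows are the true class, columns the predicted class."""
    matrix = {t: {p: 0 for p in classes} for t in classes}
    for t, p in zip(true_labels, predicted_labels):
        matrix[t][p] += 1
    return matrix

def accuracy_per_class(matrix):
    """Correct predictions (the diagonal) divided by the row total."""
    return {c: matrix[c][c] / sum(matrix[c].values()) for c in matrix}

# Hypothetical test samples for a two-class image project.
classes = ["cat", "dog"]
true_labels      = ["cat", "cat", "cat", "dog", "dog"]
predicted_labels = ["cat", "cat", "dog", "dog", "dog"]

cm = confusion_matrix(true_labels, predicted_labels, classes)
print(cm)                      # one "cat" image was mistaken for "dog"
print(accuracy_per_class(cm))  # cat: 2/3, dog: 2/2
```

Reading the matrix this way also shows why accuracy alone can hide problems: a model can score well overall while consistently confusing one particular class, which is exactly what the confusion matrix makes visible.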
Product
Upon completion of this activity, you will have produced the following:
- A trained model in Google Teachable Machine, uploaded to the cloud, related to your concept design project.
- On your board – three screenshots of Teachable Machine, each with a different set of training settings:
- For each screenshot, make sure the advanced training settings (epochs, batch size, learning rate) are visible.
- Ensure the under-the-hood panel is visible and the ‘accuracy per class’ and ‘confusion matrix’ are shown.
- Add a note next to each item of the under-the-hood panel explaining why you think the metrics shown are the way they are – four notes in total: (1) accuracy per class, (2) confusion matrix, (3) accuracy per epoch, and (4) loss per epoch.
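When writing your notes on accuracy per epoch and loss per epoch, it can help to see what the three training settings actually control. The sketch below trains a toy one-weight model with plain gradient descent; it is an assumption-laden stand-in, not Teachable Machine’s actual network. An epoch is one full pass over the training data, the batch size is how many samples are averaged into each weight update, and the learning rate scales the size of each update. The printed loss per epoch is the kind of curve the under-the-hood panel graphs.

```python
import random

# Toy data: y = 2x plus a little noise; the model learns a single weight w.
random.seed(0)
data = [(x, 2.0 * x + random.uniform(-0.1, 0.1)) for x in [i / 10 for i in range(20)]]

def train(epochs, batch_size, learning_rate):
    w = 0.0
    for epoch in range(epochs):
        random.shuffle(data)
        # One epoch = one full pass over the data; each batch = one weight update.
        for i in range(0, len(data), batch_size):
            batch = data[i:i + batch_size]
            # Gradient of the mean squared error with respect to w.
            grad = sum(2 * (w * x - y) * x for x, y in batch) / len(batch)
            w -= learning_rate * grad
        loss = sum((w * x - y) ** 2 for x, y in data) / len(data)
        print(f"epoch {epoch + 1}: loss = {loss:.4f}")
    return w

w = train(epochs=10, batch_size=4, learning_rate=0.1)
```

Re-running `train` with a much larger learning rate, a tiny batch size, or very few epochs reproduces the kinds of behaviour you may see in your screenshots: loss that diverges, jumps around, or has not finished falling.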
Follow-up
We will discuss your findings during the plenary later in the day.