
Workshop: Model Training, Tuning & Evaluation With Google Teachable Machine

Background

When a designer wants to evaluate a design, they typically create a prototype and observe how people experience it. Not so with machine learning practitioners: they evaluate their work using statistical techniques and quantitative metrics.

As a designer working on an ML-enabled product, you need to understand which metrics ML practitioners care about, so that you can help ensure the focus is not only on model performance in a narrow sense, but also on broader concerns such as responsibility.
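
To make this concrete, the sketch below shows the two metrics you will meet most often in this workshop – overall accuracy and the confusion matrix – computed from a handful of hypothetical predictions with scikit-learn. The labels and values are made up for illustration; this is not how Teachable Machine computes them internally.

```python
# Minimal sketch with hypothetical labels: overall accuracy and a
# confusion matrix for a three-class image classifier, via scikit-learn.
from sklearn.metrics import accuracy_score, confusion_matrix

y_true = ["cat", "cat", "dog", "dog", "dog", "bird"]   # ground truth
y_pred = ["cat", "dog", "dog", "dog", "cat", "bird"]   # model predictions

print(accuracy_score(y_true, y_pred))                  # fraction correct: 4/6
cm = confusion_matrix(y_true, y_pred, labels=["cat", "dog", "bird"])
print(cm)                                              # rows = true class, columns = predicted class
```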

Learning Objectives

After completing this workshop, you will be able to tune a machine learning model and will understand some basic concepts used in model training, tuning, and evaluation.

Instructions

  1. Combine the images each of your group’s members has collected during last week’s homework into one set
  2. Make sure the dataset you are working with relates to your design project
  3. Go to Google Teachable Machine, click “Get Started” and create a new Image Project
  4. Create your classes
  5. Upload your training data to each class
  6. Expand the “advanced” section and click “under the hood”
  7. Train your model
  8. Watch the graphs as they are created
  9. Calculate “accuracy per class” and “confusion matrix”
  10. Use the explanations under “vocab” and the question mark icons next to the various screen elements to try to understand what is going on
  11. Change the number of epochs, the batch size, and the learning rate to try to improve your model’s performance (see the sketch after this list for what these settings control)
    • Each time you change the settings and retrain your model, take a screenshot and put it on your Miro board
  12. When you have at least three screenshots, go to Miro, and add notes next to each part of the “under the hood” panel
  13. In each note, explain what it shows, and why you think the metrics are the way they are
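
The three settings in step 11 are standard training hyperparameters. As a rough, hypothetical illustration – not Teachable Machine’s actual training code – the Keras sketch below shows where epochs, batch size, and learning rate appear in an ordinary training run, and where the accuracy and loss curves you will watch come from. The tiny random dataset and the one-layer model are purely illustrative.

```python
# Hypothetical Keras sketch: where the "advanced" settings show up
# in an ordinary training run.
import numpy as np
import tensorflow as tf

num_classes = 3
x = np.random.rand(90, 32, 32, 3).astype("float32")    # 90 fake "images"
y = np.random.randint(0, num_classes, size=90)          # fake class labels

model = tf.keras.Sequential([
    tf.keras.Input(shape=(32, 32, 3)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])

# Learning rate: how large each weight update is.
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)

# Epochs: how many full passes over the training set.
# Batch size: how many samples are used per weight update.
history = model.fit(x, y, epochs=50, batch_size=16, validation_split=0.15)

# history.history holds accuracy and loss per epoch -- the same kind of
# curves plotted in the "under the hood" panel.
```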

Product

Upon completion of this activity you will have produced the following:

  1. A trained model in Google Teachable Machine, uploaded to the cloud, that pertains to your design assignment.
  2. On your Miro board – three screenshots of Teachable Machine, each with a different set of training settings:
    • For each screenshot make sure the advanced training settings are visible (epochs, batch size, etc.)
    • Make sure the “under the hood” panel is visible, and that accuracy per class and the confusion matrix are shown
    • Add a note next to each item of the “under the hood” panel explaining why you think the metrics shown are the way they are – four notes in total: (1) accuracy per class, (2) confusion matrix, (3) accuracy per epoch, (4) loss per epoch (the sketch after this list shows how per-class accuracy relates to the confusion matrix)
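
When writing your notes, it may help to see how the “accuracy per class” table follows from the confusion matrix: per-class accuracy is the diagonal of the matrix divided by the number of true samples of each class. A minimal sketch with hypothetical counts:

```python
# Hypothetical confusion matrix for a three-class model:
# rows = true class, columns = predicted class.
import numpy as np

cm = np.array([[18,  2,  0],
               [ 3, 15,  2],
               [ 1,  1, 18]])

# Per-class accuracy: correct predictions for a class (the diagonal)
# divided by how many samples of that class there really are (the row sum).
per_class_accuracy = cm.diagonal() / cm.sum(axis=1)
print(per_class_accuracy)   # [0.90, 0.75, 0.90]
```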

Follow-up

We will discuss your findings during the plenary later in the day.