Teacher's Resources

Lesson Plan: The AI Safety Inspector
Topic: Responsible AI & Ethics
Duration: 60 Minutes
Target Group: Years 7–13 (Adaptable)
Learning Objectives
By the end of this lesson, students will be able to:
- Break down any AI system into 4 stages: Data, Weights, Input, and Output.
- Identify legal and ethical risks at each of these stages.
- Apply this "Safety Check" to their own Schools Challenge project.
Resources Needed
- Whiteboard/Projector.
- Printed copies of the "4 Stages of Responsible AI" one-pager (from the Student Pack).
- Sticky notes.
1. The Hook: The "Magic" Machine (10 Mins)
Goal: Demystify AI and introduce the concept of "Hidden Costs."
Teacher Action: Draw a box on the board labeled "MAGIC BOX."
Discussion: Ask the class: "If I had a box that could paint any picture you asked for in 3 seconds, how does it do it?"
- Likely student answers: "It searches Google," "It draws it," "Magic."
The Reveal: Explain that AI isn't magic; it's a production line. It doesn't "know" anything; it mimics what it has seen.
- Ask: "If the box draws a picture of Mickey Mouse, who owns the picture? Disney? The Box? Or you?" (Let them debate; there is no single right answer, which highlights the problem).
2. Core Concept: The "Chef" Analogy (15 Mins)
Goal: Explain the 4 Stages using a non-tech metaphor.
Draw the following stages on the board and use the Chef Analogy to explain each technical term.
| AI Stage | The Chef Analogy | The Risk (Ask Class) |
|---|---|---|
| 1. Training Data | The Library of Cookbooks. The chef reads millions of recipes to learn what "food" is. | Did the Chef steal the recipes? Did they only read books about cake (Bias)? |
| 2. Model Weights | The Chef’s Brain. The chef doesn't remember every book, but they remember the patterns (e.g., "Sugar makes things sweet"). This is the "Product." | Who owns the Chef's skills? (Intellectual Property). |
| 3. Input | Your Order. You walk in and say, "Make me a sandwich with my name written in ham." | Did you give the Chef your personal info? Is it safe? |
| 4. Output | The Meal. The sandwich appears. | What if the sandwich is poisonous? (Hallucinations). Who pays for the hospital? |
3. Activity: The "Safety Inspector" (25 Mins)
Goal: Practical application of the 4-part model.
Instructions:
Split the class into small groups and give each group one Scenario Card (below). Acting as "Safety Inspectors," they must identify which of the 4 stages has failed.
Scenario A: The "Copycat" Art Generator
An app allows users to type "Paint me a hero," and it generates art that looks exactly like the style of a famous living artist, "Artist X." The app developers scraped Artist X's website to train the model.
- The Flaw: Training Data. They used copyrighted work without permission/license.
Scenario B: The "Medical" Chatbot
A school installs a mental health chatbot. To use it, students must type in their full name and list their symptoms. The chatbot stores this data on a public server to "learn" for next time.
- The Flaw: Input. Serious privacy violation. Students are inputting sensitive medical data that is not being protected.
Scenario C: The "Hiring" Manager
A company uses AI to scan CVs. It was trained on 10 years of past hiring data. It keeps rejecting female applicants because, in the past, the company mostly hired men.
- The Flaw: Training Data / Model Weights. The data has historical bias, so the model "Weights" have learned that "Men = Good Employees."
Report Back: Ask each group to present their "Inspection Report" and explain how they would fix the flaw.
4. Plenary: Applying it to the Challenge (10 Mins)
Goal: Link back to their project deliverables.
Teacher Action: Hand out the System Card Template (or point to it on screen).
Task: Look at Section 4 ("Risks & Limitations").
Closing Question:
"For your Council Challenge project, I want you to answer one question before you leave:"
- If you are building an app for the Council, where does your Training Data come from?
- Option A: We are training it ourselves (Safe, but hard).
- Option B: We are using ChatGPT/Google (Easy, but you must admit you don't know the sources).
Homework: Teams must draft the "Responsible AI" section of their System Card using the 4-part model.
Teacher’s Cheat Sheet: Key Definitions
- Scraping: Downloading data from the web automatically (often a legal grey area).
- Hallucination: When an AI confidently states something that is completely false.
- Black Box: When we don't know how the AI made a decision (common in "Closed" models).
TVAI Support
TVAI Volunteers may be available to run a 30-minute workshop with your teams, but this is dependent on availability and timing; please reach out using the contact email address.