Teaching AI Critical Thinking.
How to raise kids who question what AI tells them.
After this lesson you'll know:
- Why AI can be confidently wrong — and how to spot it
- Age-appropriate exercises for evaluating AI outputs
- How to teach source verification as a habit
- The difference between AI confidence and accuracy
AI sounds right even when it's wrong.
This is the single most important thing to teach your child about AI: it generates text that sounds authoritative, polished, and certain — even when it's completely made up. AI doesn't know what's true. It knows what sounds true based on patterns. Those are very different things.
AI "hallucinations" — when AI invents facts, citations, or events — don't come with warning labels. There's no red text or alarm bell. The made-up fact sits right next to real facts in the same confident tone. Adults fall for this constantly. Kids are even more vulnerable.
Three games that build critical thinking.
1. Fact or Fabrication (ages 8+): Ask AI a question you already know the answer to. Have your child identify which parts of the answer are correct and which might be wrong. Start with topics they know well — their favorite sport, a book they've read, their hometown.
2. The Source Challenge (ages 10+): Give your child an AI-generated paragraph about a historical event. Their job: find a real source (book, encyclopedia, reputable website) that confirms or contradicts each claim. Keep score — how accurate was AI?
3. Bias Detective (ages 12+): Ask AI the same question three different ways and compare the answers. "Was Columbus a hero?" vs. "Was Columbus a villain?" vs. "What did Columbus do?" Show your child how framing changes the output — and discuss why.