Bias in AI.
AI doesn't create bias. It amplifies the bias already in our data, our systems, and our language.
After this lesson, you'll know:
- Where AI bias actually comes from
- The 4 types of bias you'll encounter most often
- How to spot bias in AI output
- Practical techniques to reduce bias in your own AI use
Bias doesn't come from the algorithm. It comes from the data.
AI learns from text written by humans. Humans have biases. Therefore, AI has biases. This isn't a bug — it's an inevitable consequence of how these systems are built.
If the training data contains more articles about male CEOs than female ones, the AI will associate leadership with men. If historical loan data shows fewer approvals for certain zip codes (a proxy for race), an AI trained on that data will replicate the pattern.
The AI isn't making a moral judgment. It's doing math on patterns. But patterns from an unequal world produce unequal outputs.
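The point above can be made concrete with a toy sketch (illustrative only, not a real training pipeline): a pattern learner is just statistics over its corpus, so if the corpus skews, the learned association skews with it. The tiny corpus below is hypothetical, mirroring the CEO example.

```python
from collections import Counter

# Hypothetical toy corpus: articles pair "CEO" with male pronouns
# four times as often as female ones, mirroring the skew described above.
corpus = (
    ["he is the CEO"] * 8 +
    ["she is the CEO"] * 2
)

# "Training" here is nothing more than counting which pronoun
# co-occurs with "CEO" in the corpus.
counts = Counter(line.split()[0] for line in corpus)
total = sum(counts.values())

# The model's learned "belief" is just the corpus frequency --
# math on patterns, no moral judgment anywhere.
p_he = counts["he"] / total
p_she = counts["she"] / total
print(f"P(he | CEO) = {p_he:.1f}, P(she | CEO) = {p_she:.1f}")
```

Nothing in the counting step is unfair on its own; the 80/20 split in the output exists only because it was already in the data.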
4 types of bias you'll encounter.
1. Representation bias. Some groups are over- or under-represented in training data, so AI knows more about some cultures, languages, and experiences than others.
2. Confirmation bias. AI tends to agree with you. If you phrase a question with an implied answer, it will often reinforce your existing belief rather than challenge it.
3. Cultural bias. Most AI training data is English-language and Western-centric, so advice, examples, and frameworks skew toward North American and European perspectives.
4. Recency bias. AI has a knowledge cutoff: it doesn't know about events, research, or cultural shifts after its training data ends.
How to spot bias in AI output.
Bias is often subtle. Watch for red flags like default assumptions (the CEO is "he", the nurse is "she"), one-sided framing of debatable questions, examples drawn from a single culture or region, and confident claims about anything after the model's knowledge cutoff.
5 techniques to reduce bias in your AI use.
1. Phrase questions neutrally. Ask "what does the evidence say about X?" instead of "isn't X true?"
2. Ask for the other side. Request the strongest counterarguments to whatever the AI (or you) just claimed.
3. Specify your context. Name the region, culture, or audience you care about instead of accepting Western defaults.
4. Verify anything time-sensitive. The model's knowledge stops at its cutoff; check recent facts against current sources.
5. Push back on stereotyped defaults. If every example leans one way, say so and ask for revised examples.
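The neutral-phrasing habit can be sketched in code. The `reframe` helper below is hypothetical (plain string templating, not part of any library or model API); it shows one way to turn a claim you already believe into a prompt that asks for evidence on both sides.

```python
def reframe(claim: str) -> str:
    """Turn a claim the user already believes into a neutral prompt
    that asks a model for evidence on both sides, rather than a
    leading question that invites agreement."""
    return (
        f"What is the strongest evidence for and against the claim: "
        f"'{claim}'? Summarize both sides before drawing any conclusion."
    )

# A leading phrasing like "Don't you think open-plan offices reduce
# productivity?" invites agreement; the reframed version does not.
print(reframe("open-plan offices reduce productivity"))
```

The habit matters more than the helper: the reframed prompt works against confirmation bias by making the model surface counterevidence you didn't ask for.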
Review the 4 types of AI bias.
Match each bias type to its description.