Why AI Ethics Matters.
AI doesn't have morals. You do. That makes you responsible for how you use it.
After this lesson, you'll know:
- Why AI ethics isn't just for researchers and lawmakers
- The 3 real-world harms that happen when ethics are ignored
- Your personal responsibility as an AI user
- The ethical framework we'll use throughout this course
AI is already making decisions that affect people's lives.
This isn't hypothetical. Right now, AI systems are screening resumes, approving loans, recommending medical treatments, moderating what billions of people see online, and helping write the news. These aren't abstract problems for academics to debate — they affect real people, today.
And here's what most people miss: you don't need to build AI to have ethical responsibilities around it. If you use AI to write a job description, draft a policy, analyze customer data, or create content — the ethical implications land on you.
AI is a power tool. Like every power tool, it can build or destroy depending on who's holding it and what they understand about it.
3 harms that happen when ethics are ignored.
Biased decisions. AI trained on historical data inherits historical biases. A hiring tool trained on 20 years of resumes from a male-dominated industry will penalize women's resumes — not because it's sexist, but because the data was. The AI doesn't know the difference between a pattern and a prejudice.
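You can see the mechanism in miniature. The sketch below uses entirely invented numbers: a naive "model" that scores resumes by the historical hire rate of similar ones will penalize a keyword simply because past (biased) decisions did.

```python
# Toy illustration with synthetic, invented data: a model trained on
# biased historical hiring decisions reproduces the bias.
# Each record is (keyword_present, hired) from a fictional industry
# where resumes mentioning "women's" were rarely hired.
history = [("women's", False)] * 8 + [("women's", True)] * 2 \
        + [("", True)] * 60 + [("", False)] * 40

def hire_rate(keyword):
    """Fraction of historical resumes with this keyword that were hired."""
    outcomes = [hired for kw, hired in history if kw == keyword]
    return sum(outcomes) / len(outcomes)

# The "model" scores new resumes by how similar past resumes fared.
score_with_keyword = hire_rate("women's")  # 0.2
score_without = hire_rate("")              # 0.6

# A historical pattern (fewer past hires) becomes a learned prejudice
# (a lower score), though the keyword says nothing about qualifications.
print(score_with_keyword < score_without)  # True
```

Real systems use far more features and data, but the failure mode is the same: the model optimizes for matching past decisions, including the unfair ones.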
Misinformation at scale. AI can generate convincing but false content faster than humans can fact-check it. A single person with AI can produce thousands of fake articles, reviews, or social media posts. When AI-generated content is published without verification, misinformation spreads at machine speed.
Privacy leaks. Every prompt you send to an AI model is data. Pasting customer information, private conversations, proprietary code, or personal details into AI tools raises serious questions about who can access that data, how it's stored, and whether it's used to train future models.
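One practical habit: scrub obvious personal data from text before it ever reaches a prompt. The sketch below is illustrative only — the two regex patterns are a bare minimum, and real redaction needs a dedicated PII-detection tool.

```python
import re

# Minimal, assumption-laden sketch: strip obvious personal data from text
# before pasting it into an AI tool. These patterns catch only simple
# email addresses and US-style phone numbers; they are NOT exhaustive.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}

def redact(text):
    """Replace matched personal data with labeled placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text

prompt = "Summarize this ticket from jane.doe@example.com, phone 555-867-5309."
print(redact(prompt))
# → Summarize this ticket from [EMAIL REDACTED], phone [PHONE REDACTED].
```

The point isn't the regexes — it's the workflow: a deliberate redaction step between your data and the model, rather than pasting raw text and hoping.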
You are the ethics layer.
AI doesn't evaluate whether its output is ethical. It doesn't know if a job description it wrote subtly discourages women from applying. It doesn't know that the "fun fact" it generated is actually false. It doesn't know that the email it drafted crosses a professional boundary.
You do. And that makes you the last line of defense between AI output and real-world impact.
This course isn't about making you feel guilty for using AI. It's about making you effective at using AI without causing harm — to others or to yourself.
When AI ethics failures made headlines.
These aren't hypothetical scenarios. These are real events that happened because ethical guardrails were missing or ignored. Each one illustrates why AI ethics is a practical concern, not an academic exercise.
Amazon built an AI recruiting tool trained on 10 years of resumes. The system learned to penalize resumes containing the word "women's" (as in "women's chess club") and downgraded graduates of all-women's colleges. It reflected the bias in the historical data: a male-dominated tech industry. Amazon scrapped the tool.
The COMPAS algorithm, used in U.S. courts to predict recidivism risk, was found to be nearly twice as likely to falsely flag Black defendants as future criminals compared to white defendants. Judges used these scores in sentencing decisions, affecting real people's freedom.
A New York attorney used ChatGPT to research a legal brief and submitted it to court with six case citations — none of which existed. The AI had hallucinated plausible-sounding cases complete with docket numbers. The lawyer was sanctioned. Trust AI for brainstorming, not for facts.
A healthcare algorithm used by hospitals to prioritize patients for extra care was found to systematically deprioritize Black patients. It used healthcare costs as a proxy for health needs — but because Black patients historically had less access to healthcare, their costs were lower, making them appear "healthier" to the algorithm.
Ethical frameworks beyond TRUST.
TRUST is the practical framework we'll use throughout this course, but it doesn't exist in a vacuum. The major philosophical traditions — consequentialism (judge actions by their outcomes), deontology (judge actions by rules and duties), and virtue ethics (judge actions by the character they express) — each offer a different lens on the same question: was this use of AI right? Understanding these broader traditions helps you reason through situations the framework doesn't explicitly cover.
Stakeholder impact analysis: who is affected by your AI use?
Every time you use AI, multiple stakeholders are affected — the person reading the output, the people described or decided about in it, the people whose data trained the model — many of whom you've probably never considered. Running a quick stakeholder impact analysis before high-stakes AI use helps you anticipate problems before they become harms.
The TRUST framework for ethical AI use.
- Transparency — disclose when and how AI helped.
- Review — check every output before it reaches anyone else.
- Understand limits — know where AI gets things wrong or misses nuance.
- Safeguard privacy — keep sensitive data out of your prompts.
- Take responsibility — own the consequences of what you publish.
We'll explore each of these principles in depth throughout this course. By the end, they'll be second nature.
Run a TRUST audit on your own AI use.
Copy this prompt into any AI tool and fill in the brackets. It will walk you through the TRUST framework on a real task you've done recently.
I used AI to help with this task: [describe the task, e.g. "draft a job posting for a marketing manager"].
Walk me through the TRUST framework for this specific use case:
1. TRANSPARENCY — Should I disclose that AI helped? Who needs to know?
2. REVIEW — What specific things should I check in the output before using it?
3. UNDERSTAND LIMITS — Where might AI have gotten this wrong or missed nuance?
4. SAFEGUARD PRIVACY — Did I share any data I shouldn't have? What should I anonymize next time?
5. TAKE RESPONSIBILITY — If this output causes harm, what's my exposure?
Be specific to my task. Give me a checklist I can use right now.