Why AI Ethics Matters.

AI doesn't have morals. You do. That makes you responsible for how you use it.

After this lesson, you'll know:

  • Why AI ethics isn't just for researchers and lawmakers
  • The 3 real-world harms that happen when ethics are ignored
  • Your personal responsibility as an AI user
  • The ethical framework we'll use throughout this course

AI is already making decisions that affect people's lives.

This isn't hypothetical. Right now, AI systems are screening resumes, approving loans, recommending medical treatments, moderating what billions of people see online, and helping write the news. These aren't abstract problems for academics to debate — they affect real people, today.

And here's what most people miss: you don't need to build AI to have ethical responsibilities around it. If you use AI to write a job description, draft a policy, analyze customer data, or create content — the ethical implications land on you.

AI is a power tool. Like every power tool, it can build or destroy depending on who's holding it and what they understand about it.

3 harms that happen when ethics are ignored.

1
Bias Amplification

AI trained on historical data inherits historical biases. A hiring tool trained on 20 years of resumes from a male-dominated industry will penalize women's resumes — not because it's sexist, but because the data was. The AI doesn't know the difference between a pattern and a prejudice.
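How a model "learns a prejudice" can be sketched in a few lines. This is an illustrative toy, not a real hiring system: the records, group labels, and hire rates are all invented, and a simple frequency lookup stands in for the trained model. The point is that the model never decides to discriminate; it just reproduces the rates it was shown.

```python
from collections import defaultdict

# Hypothetical historical hiring records: (resume_keyword_group, was_hired).
# The keyword correlates with gender, not with job performance.
history = [
    ("neutral", True), ("neutral", True), ("neutral", False),
    ("womens_club", False), ("womens_club", False), ("womens_club", True),
]

def train(records):
    """Learn the historical hire rate for each keyword group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [hires, total]
    for group, hired in records:
        counts[group][0] += int(hired)
        counts[group][1] += 1
    return {g: hires / total for g, (hires, total) in counts.items()}

scores = train(history)
# The "model" scores resumes mentioning "women's" lower,
# purely because past decisions did.
print(scores["neutral"] > scores["womens_club"])  # True
```

Swap in a real classifier and twenty years of real resumes, and the mechanism is the same: patterns in, patterns out, prejudice included.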

2
Misinformation at Scale

AI can generate convincing but false content faster than humans can fact-check it. A single person with AI can produce thousands of fake articles, reviews, or social media posts. When AI-generated content is published without verification, misinformation spreads at machine speed.

3
Privacy Erosion

Every prompt you send to an AI model is data. Pasting customer information, private conversations, proprietary code, or personal details into AI tools raises serious questions about who can access that data, how it's stored, and whether it's used to train future models.
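One practical habit is to redact obvious identifiers before a prompt leaves your machine. Below is a minimal sketch using two toy patterns (emails and US-style phone numbers); real redaction tooling covers far more categories of personal data, so treat this as the habit, not a complete solution.

```python
import re

# Illustrative patterns only; production PII detection is much broader.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b")

def redact(prompt: str) -> str:
    """Replace emails and US-style phone numbers with placeholders."""
    prompt = EMAIL.sub("[EMAIL]", prompt)
    prompt = PHONE.sub("[PHONE]", prompt)
    return prompt

print(redact("Contact jane.doe@example.com or 555-867-5309 about the refund."))
# -> Contact [EMAIL] or [PHONE] about the refund.
```

Even a rough filter like this forces the right question at the right moment: does the AI tool actually need this detail to do the job?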

You are the ethics layer.

AI doesn't evaluate whether its output is ethical. It doesn't know if a job description it wrote subtly discourages women from applying. It doesn't know that the "fun fact" it generated is actually false. It doesn't know that the email it drafted crosses a professional boundary.

You do. And that makes you the last line of defense between AI output and real-world impact.

This course isn't about making you feel guilty for using AI. It's about making you effective at using AI without causing harm — to others or to yourself.

When AI ethics failures made headlines.

These aren't hypothetical scenarios. These are real events that happened because ethical guardrails were missing or ignored. Each one illustrates why AI ethics is a practical concern, not an academic exercise.

Amazon's Hiring Tool (2018)

Amazon built an AI recruiting tool trained on 10 years of resumes. The system learned to penalize resumes containing the word "women's" (as in "women's chess club") and downgraded graduates of all-women's colleges. It reflected the bias in the historical data: a male-dominated tech industry. Amazon scrapped the tool.

COMPAS Recidivism Algorithm

The COMPAS algorithm, used in U.S. courts to predict recidivism risk, was found to be nearly twice as likely to falsely flag Black defendants as future criminals compared to white defendants. Judges used these scores in sentencing decisions, affecting real people's freedom.

Lawyer's Fake Citations (2023)

A New York attorney used ChatGPT to research a legal brief and submitted it to court with six case citations — none of which existed. The AI had hallucinated plausible-sounding cases complete with docket numbers. The lawyer was sanctioned. Trust AI for brainstorming, not for facts.

Healthcare Algorithm Racial Bias (2019)

A healthcare algorithm used by hospitals to prioritize patients for extra care was found to systematically deprioritize Black patients. It used healthcare costs as a proxy for health needs — but because Black patients historically had less access to healthcare, their costs were lower, making them appear "healthier" to the algorithm.
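The proxy failure reduces to simple arithmetic. In this hypothetical two-patient example (the names, need scores, and costs are invented), ranking by past cost instead of actual need deprioritizes the patient whose access to care, and therefore spending, was historically lower.

```python
patients = [
    # (name, true_need_score, past_healthcare_cost) -- hypothetical values
    ("Patient A", 8, 12000),  # same need, historically good access to care
    ("Patient B", 8, 4000),   # same need, historically poor access to care
]

# The flawed algorithm: treat past cost as a proxy for health need.
by_cost = sorted(patients, key=lambda p: p[2], reverse=True)

# A correct algorithm would rank by the need itself.
by_need = sorted(patients, key=lambda p: p[1], reverse=True)

print(by_cost[0][0])  # Patient A gets priority despite identical need
```

The algorithm isn't "wrong" about the numbers; it's answering the wrong question, and the gap between "who spent the most" and "who needs the most" falls hardest on the people with the least access.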

Ethical frameworks beyond TRUST.

TRUST is the practical framework we'll use throughout this course, but it doesn't exist in a vacuum. Understanding the broader philosophical traditions behind AI ethics helps you reason through situations the framework doesn't explicitly cover.

Consequentialism — Judge AI use by its outcomes. Does this AI application create more good than harm? Who benefits, who is harmed, and are the benefits distributed fairly?
Deontology — Some actions are inherently right or wrong regardless of outcomes. Using AI to deceive people is wrong even if no one gets hurt. Respecting privacy is right even when violating it would be efficient.
Virtue Ethics — What kind of person do you want to be? Using AI ethically isn't just about following rules — it's about developing the character traits (honesty, responsibility, care for others) that lead to consistently good decisions.
Care Ethics — Prioritize relationships and the impact on vulnerable people. When making AI decisions, ask: who is most vulnerable in this situation? How does this affect people with less power, fewer resources, or greater risk?

Stakeholder impact analysis: who is affected by your AI use?

Every time you use AI, multiple stakeholders are affected — many of whom you may never have considered. Running a quick stakeholder impact analysis before high-stakes AI use helps you anticipate problems.

Direct
People who directly receive AI output
Clients who read AI-assisted reports, candidates screened by AI-generated criteria, customers who interact with AI chatbots, students who learn from AI-created materials. These people bear the most immediate impact of quality and bias.
Indirect
People affected by decisions based on AI output
If AI-assisted analysis leads to a business decision that affects employees, communities, or markets — those people are stakeholders even though they never saw the AI output directly.
Silent
People whose data trained the AI
The millions of people whose writing, art, code, and conversations were used to train AI models. They had no say in how their work would be used, and they receive no compensation when AI produces value from their collective contributions.
Future
Future users and society
How we use AI today shapes the norms, regulations, and expectations for tomorrow. Responsible use now creates a healthier AI ecosystem for everyone who comes after us. Irresponsible use creates the case studies future ethics courses will teach from.

The TRUST framework for ethical AI use.

T
Transparency
Be honest about when and how you're using AI. Don't pass AI work off as purely human.
R
Review
Always review AI output before using it. Check for accuracy, bias, and appropriateness.
U
Understand Limits
Know what AI can and cannot do. Don't trust it for medical, legal, or financial decisions without expert verification.
S
Safeguard Privacy
Never share sensitive personal data, credentials, or confidential information with AI without understanding data policies.
T
Take Responsibility
You own the output. If AI generates something harmful, inaccurate, or biased — and you publish it — that's on you.

We'll explore each of these principles in depth throughout this course. By the end, they'll be second nature.
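If you script your AI workflows, the five principles can live as a literal checklist so no step gets skipped out of habit. The field names and question wording below are our paraphrase of TRUST, not an official artifact of the framework.

```python
# Paraphrased TRUST questions; adapt the wording to your own context.
TRUST_CHECKLIST = {
    "transparency": "Have I disclosed AI involvement to everyone who needs to know?",
    "review": "Have I checked the output for accuracy, bias, and appropriateness?",
    "understand_limits": "Does this touch medical, legal, or financial advice needing expert review?",
    "safeguard_privacy": "Did the prompt avoid personal, confidential, or proprietary data?",
    "take_responsibility": "Am I prepared to own this output if it causes harm?",
}

def audit(answers: dict) -> list:
    """Return the TRUST questions still unanswered or answered 'no'."""
    return [q for key, q in TRUST_CHECKLIST.items() if not answers.get(key, False)]

# Example: two checks done, three still open before publishing.
open_items = audit({"review": True, "safeguard_privacy": True})
print(len(open_items))  # 3
```

Run it before anything AI-assisted goes out the door; an empty list is your green light.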

Run a TRUST audit on your own AI use.

Copy this prompt into any AI tool and fill in the brackets. It will walk you through the TRUST framework on a real task you've done recently.

Prompt — TRUST Framework Ethics Audit
I used AI to help with this task: [describe the task, e.g. "draft a job posting for a marketing manager"].

Walk me through the TRUST framework for this specific use case:

1. TRANSPARENCY — Should I disclose that AI helped? Who needs to know?
2. REVIEW — What specific things should I check in the output before using it?
3. UNDERSTAND LIMITS — Where might AI have gotten this wrong or missed nuance?
4. SAFEGUARD PRIVACY — Did I share any data I shouldn't have? What should I anonymize next time?
5. TAKE RESPONSIBILITY — If this output causes harm, what's my exposure?

Be specific to my task. Give me a checklist I can use right now.


Academy
Built with soul — likeone.ai