Debugging Bad Outputs
When the AI gets it wrong, the problem is almost always in the prompt. Here's how to find it.
What You'll Learn
- The 5 most common reasons AI output goes wrong
- A systematic debugging framework for prompts
- How to diagnose vague, wrong, or off-tone responses
- Iterative refinement: making prompts better fast
Bad Output Is Feedback, Not Failure
When the AI gives you something wrong, it's telling you something about your prompt. Maybe the instructions were ambiguous. Maybe the context was missing. Maybe you assumed the AI knew something it didn't. Every bad output is a clue pointing to a specific fix.
What Went Wrong and Why
1. Too vague: The output is generic, surface-level, could apply to anything. Fix: Add specifics. Name the audience, the context, the constraints. Show an example of what "good" looks like.
2. Wrong format: You wanted bullet points, you got paragraphs. You wanted JSON, you got prose with a JSON block buried in it. Fix: Be explicit about format. Use the "output first" technique from Lesson 5. Say what you DON'T want.
3. Wrong tone: Too formal, too casual, too verbose, too terse. Fix: Describe tone with specific comparisons ("write like a Slack message to a colleague, not a formal email"). Provide a style example.
4. Hallucination: The AI confidently stated something that's factually wrong. Fix: Ask it to cite sources. Add "if you're not sure, say so." For critical facts, ask it to flag its confidence level.
5. Ignored instructions: You gave clear rules and the AI broke them. Fix: Move critical instructions to the top. Repeat key constraints. Use emphasis: "IMPORTANT:" or "NEVER:" for non-negotiable rules.
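Of these, the format failure is the easiest to catch in code. Here's a minimal sketch in Python, assuming your output should be JSON; the `extract_json` helper name is ours, not a standard API:

```python
import json
import re

FENCE = "`" * 3  # a literal triple-backtick, built this way to keep the sketch tidy

def extract_json(output: str):
    """Parse model output as JSON, salvaging a fenced JSON block that
    got buried in prose (failure mode 2). Returns None on failure."""
    try:
        return json.loads(output)          # best case: the whole output is JSON
    except json.JSONDecodeError:
        pass
    # Fall back: look for a fenced JSON block buried inside the prose.
    pattern = FENCE + r"(?:json)?\s*(\{.*\}|\[.*\])\s*" + FENCE
    m = re.search(pattern, output, re.DOTALL)
    if m:
        try:
            return json.loads(m.group(1))
        except json.JSONDecodeError:
            return None
    return None
```

If `extract_json` returns None, you have a diagnosed format failure and can apply the fix above: state the format explicitly and say what you don't want.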
The Debug Loop
4-Step Debug Process
1. IDENTIFY: What specifically is wrong? Name the gap between expected and actual output.
2. DIAGNOSE: Which failure mode is it? (vague, format, tone, hallucination, ignored instruction)
3. HYPOTHESIZE: What in the prompt caused this? (missing context, ambiguous instruction, wrong placement)
4. FIX: Make ONE targeted change to the prompt. Test again. Repeat.
The critical rule: change one thing at a time. If you rewrite the entire prompt, you won't know what fixed it (or what broke something else).
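The loop above can be sketched in code. This is a Python sketch with a stubbed-out model call — `run_prompt` and the fix labels are hypothetical names, not a real API; swap the stub for your provider's client:

```python
# Minimal sketch of the 4-step debug loop, one targeted change per round.

def run_prompt(prompt: str) -> str:
    """Hypothetical model call (stubbed so the sketch is runnable)."""
    return f"(model output for: {prompt})"

def debug_loop(prompt, is_good, fixes):
    """Apply ONE fix per round until the output passes.

    fixes: ordered (label, fn) pairs; each fn takes the current prompt
    and returns it with exactly one change applied. The ordering is
    your DIAGNOSE/HYPOTHESIZE step, encoded up front.
    """
    for label, fix in fixes:
        output = run_prompt(prompt)
        if is_good(output):       # IDENTIFY: no gap left, we're done
            break
        prompt = fix(prompt)      # FIX: one change, then test again
    return prompt

# Two single-change hypotheses, tried one at a time.
fixes = [
    ("be explicit about format", lambda p: p + "\nFormat: 3 bullet points."),
    ("name the audience",        lambda p: p + "\nAudience: junior engineers."),
]
revised = debug_loop("Summarize this design doc.", lambda out: False, fixes)
```

Because each round changes exactly one thing, you can attribute any improvement (or regression) to that specific fix.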
Ask the AI to Debug Itself
This is a powerful meta-technique. When output is wrong, ask the AI to explain its reasoning.
Self-Debug Prompt
"Your previous response didn't match what I needed. Here's what was wrong: [specific issue]. Before trying again, explain: what did you interpret my instructions to mean? Where did you make assumptions? Then give me a revised response addressing those gaps."
This surfaces misinterpretations you didn't know existed. The AI might reveal that it understood "brief" to mean 50 words when you meant 200, or that it focused on the wrong part of a multi-part instruction.
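If you reuse this technique often, the template is worth wrapping in a small helper. The function name is ours; the wording is the lesson's:

```python
def self_debug_prompt(issue: str) -> str:
    """Fill the self-debug template with a specific, named issue."""
    return (
        "Your previous response didn't match what I needed. "
        f"Here's what was wrong: {issue}. "
        "Before trying again, explain: what did you interpret my "
        "instructions to mean? Where did you make assumptions? "
        "Then give me a revised response addressing those gaps."
    )

# The [specific issue] slot should name the gap, not just say "it was wrong".
msg = self_debug_prompt("the summary was 500 words; I asked for 200")
```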
Keep a Failure Log
When a prompt fails and you fix it, write down what went wrong and what fixed it. Over time, you'll build an intuition for writing good prompts the first time. You'll also spot your personal patterns — maybe you consistently forget to specify format, or you tend to write prompts that are too short on context.
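A failure log can be as simple as a JSON-lines file. Here's a sketch, assuming Python; the filename and field names are illustrative, not a prescribed schema:

```python
import datetime
import json
import pathlib

def log_failure(prompt, failure_mode, fix,
                path=pathlib.Path("prompt_failures.jsonl")):
    """Append one entry to the failure log (one JSON object per line).

    failure_mode: one of vague / format / tone / hallucination /
    ignored_instruction -- the taxonomy from this lesson.
    """
    entry = {
        "date": datetime.date.today().isoformat(),
        "failure_mode": failure_mode,
        "prompt": prompt,
        "fix": fix,
    }
    with path.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
```

Reviewing the log periodically is what turns it into intuition: a quick count of the `failure_mode` field shows which mistake you make most.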