📚Academy

Debugging Bad Outputs

When AI gets it wrong, the problem is almost always in the prompt. Here's how to find it.

What You'll Learn

  • The 5 most common reasons AI output goes wrong
  • A systematic debugging framework for prompts
  • How to diagnose vague, wrong, or off-tone responses
  • Iterative refinement: making prompts better fast

Bad Output Is Feedback, Not Failure

When the AI gives you something wrong, it's telling you something about your prompt. Maybe the instructions were ambiguous. Maybe the context was missing. Maybe you assumed the AI knew something it didn't. Every bad output is a clue pointing to a specific fix.

What Went Wrong and Why

1. Too vague: The output is generic and surface-level; it could apply to anything. Fix: Add specifics. Name the audience, the context, and the constraints. Show an example of what "good" looks like.
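As a sketch, here's what that fix looks like side by side. The task, audience, and constraints below are invented for illustration, not taken from a real project:

```python
# Before: a vague prompt that could apply to anything.
vague_prompt = "Write something about onboarding."

# After: the same request with audience, context, constraints,
# and an example of what "good" looks like.
specific_prompt = (
    "Write a five-step onboarding checklist for a new backend engineer "
    "at a 10-person startup.\n"
    "Audience: the engineer's manager.\n"
    "Constraint: each step is one sentence and starts with a day number.\n"
    "Example of a good step: 'Day 1: pair with a teammate to ship one "
    "small bug fix.'"
)
```

The second prompt is longer, but every added line closes off a way the model could go generic.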

2. Wrong format: You wanted bullet points, you got paragraphs. You wanted JSON, you got prose with a JSON block buried in it. Fix: Be explicit about format. Use the "output first" technique from Lesson 5. Say what you DON'T want.
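A minimal way to apply both halves of that fix: state the format before the task, say what you don't want, and then verify you actually got it. The task text is a placeholder, and `parse_json_reply` is a hypothetical helper, not part of any SDK:

```python
import json

# Format first, then the task, then what you DON'T want.
format_prompt = (
    "Output ONLY a JSON object with keys 'title' and 'tags'.\n"
    "Do NOT wrap it in markdown fences or add any prose.\n\n"
    "Task: summarize the release notes below.\n"
    "<release notes go here>"
)

def parse_json_reply(reply: str):
    """Return the parsed object if the reply is bare JSON, else None.

    A None result is your debugging signal: the format instruction
    was ignored and the prompt needs to be stricter.
    """
    try:
        return json.loads(reply)
    except json.JSONDecodeError:
        return None
```

Checking the reply mechanically turns "wrong format" from something you notice by eye into something you can catch every time.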

3. Wrong tone: Too formal, too casual, too verbose, too terse. Fix: Describe tone with specific comparisons ("write like a Slack message to a colleague, not a formal email"). Provide a style example.
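A before/after sketch of that fix, describing tone by comparison instead of with a lone adjective. The migration scenario is invented for illustration:

```python
# Before: "friendly" is an adjective the model can interpret many ways.
weak_tone = "Write a friendly update about the server migration."

# After: a concrete comparison plus a style example pins the tone down.
strong_tone = (
    "Write an update about the server migration.\n"
    "Tone: like a Slack message to a colleague, not a formal email.\n"
    "Style example: 'Heads up -- we're moving the API servers tonight. "
    "Expect ~5 min of downtime around 11pm UTC.'"
)
```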

4. Hallucination: The AI stated something confidently that's factually wrong. Fix: Ask it to cite sources. Add "if you're not sure, say so." For critical facts, ask it to flag confidence levels.
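Those three anti-hallucination instructions can be bundled so they're appended to every fact-heavy prompt. The wording is one possible phrasing, not an official recipe, and `with_guardrails` is a hypothetical helper:

```python
# Guardrails: cite sources, admit uncertainty, flag confidence.
guardrails = (
    "For every factual claim, cite a source.\n"
    "If you're not sure about something, say so explicitly.\n"
    "Flag each claim as [high], [medium], or [low] confidence."
)

def with_guardrails(task: str) -> str:
    """Append the guardrail instructions to a fact-heavy prompt."""
    return f"{task}\n\n{guardrails}"
```

This doesn't make hallucination impossible, but it gives the model an explicit alternative to confident guessing, and gives you confidence labels to check.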

5. Ignored instructions: You gave clear rules and the AI broke them. Fix: Move critical instructions to the top. Repeat key constraints. Use emphasis: "IMPORTANT:" or "NEVER:" for non-negotiable rules.
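All three parts of that fix (rules at the top, emphasis, repetition) can be combined in one small template. The helper name and the sample rule are illustrative:

```python
def build_prompt(task: str, critical_rules: list[str]) -> str:
    """Put non-negotiable rules first, emphasized, then repeat them
    after the task so they're the last thing the model reads."""
    rules = "\n".join(f"IMPORTANT: {rule}" for rule in critical_rules)
    return (
        f"{rules}\n\n"
        f"{task}\n\n"
        f"Reminder -- the rules above are non-negotiable:\n{rules}"
    )
```

For example, `build_prompt("Summarize the report.", ["NEVER exceed 100 words."])` emits the rule before the task and once more at the end.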

