Context Window Mastery

The context window is your workspace. Learn to use every square inch of it.

What You'll Learn

  • What the context window is and why it has limits
  • How to prioritize what goes in (and what stays out)
  • Compression techniques for fitting more signal in less space
  • Long-context strategies for big documents and codebases

Your Context Window Is Not Infinite

Every AI model has a context window — the total amount of text it can "see" at once (your prompt + its response). Claude's can handle up to 200K tokens, GPT-4 up to 128K. That sounds like a lot, but it fills fast when you're working with documents, code, or long conversations.
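To get a feel for how fast a window fills, you can ballpark token counts with the common rule of thumb of roughly four characters per token for English text. The exact count depends on each model's tokenizer, so treat this sketch as an estimate only:

```python
# Rough token estimate: ~4 characters per token is a common rule of
# thumb for English text. The real count depends on the model's
# tokenizer, so treat this as a ballpark, not an exact figure.
def estimate_tokens(text: str) -> int:
    return max(1, len(text) // 4)

doc = "word " * 50_000  # a ~250K-character document
print(estimate_tokens(doc))  # roughly 62K tokens: nearly a third of a 200K window
```

A single long report can consume a large slice of the window before you have written a word of instructions.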

More importantly: just because a model can see 200K tokens doesn't mean it pays equal attention to all of them. Information at the beginning and end of the context gets more attention than the middle. This is called the "lost in the middle" effect, and it matters for how you structure your prompts.

Front-Load What Matters

Put your most important instructions and context at the very beginning of your prompt. Critical rules, key constraints, and the primary task should come first. Supporting details, reference material, and nice-to-haves go later.

Good Structure

1. System instructions (who you are, critical rules)
2. The specific task (what to do right now)
3. Key constraints (must-haves and must-nots)
4. Reference material (supporting context)
5. Examples (if needed)
6. The actual input to process
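The ordering above can be sketched as a small assembly function. The section names and sample strings here are illustrative, not part of any API:

```python
# Assemble a prompt in priority order: critical material first,
# raw input last. Empty sections are skipped.
def build_prompt(system: str, task: str, constraints: str,
                 reference: str = "", examples: str = "",
                 input_text: str = "") -> str:
    sections = [system, task, constraints, reference, examples, input_text]
    return "\n\n".join(s for s in sections if s)

prompt = build_prompt(
    system="You are a careful technical editor.",
    task="Summarize the report below in five bullet points.",
    constraints="Keep every bullet under 20 words.",
    input_text="(report text here)",
)
print(prompt.splitlines()[0])  # the critical rule comes first
```

Because the function always emits sections in the same order, the most important instructions land at the top of the window no matter how much reference material you append.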

Compress Without Losing Signal

When you need to fit a lot of context, compression is your friend. But bad compression loses the important parts. Here's how to compress well.

Summarize first: Before pasting a long document, ask AI to summarize it. Then use the summary as context for your actual task.

Extract relevant sections: Don't paste an entire 50-page document. Pull out the 3 sections that actually matter for your question.
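One way to do this mechanically is to filter a document by its headings before pasting it. This sketch assumes markdown-style "## " headings; adapt the matching to whatever format your documents use:

```python
# Keep only the sections whose heading matches a keyword, instead of
# pasting the whole document. Assumes "## " marks a section heading.
def extract_sections(doc: str, keyword: str) -> str:
    sections, current, keep = [], [], False
    for line in doc.splitlines():
        if line.startswith("## "):
            if keep:
                sections.append("\n".join(current))
            current, keep = [line], keyword.lower() in line.lower()
        elif keep:
            current.append(line)
    if keep:
        sections.append("\n".join(current))
    return "\n\n".join(sections)

doc = "## Pricing\n$10/mo\n## Security\nAES-256\n## History\nFounded 2010"
print(extract_sections(doc, "security"))  # only the Security section survives
```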

Use structured references: Instead of pasting full files, paste function signatures, class names, and key logic blocks. The AI can reason about architecture without seeing every line.
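For Python code, the standard library's ast module can produce that kind of skeleton automatically. A minimal sketch, covering plain function definitions only:

```python
# Extract function signatures from source code with the standard
# library's ast module, so the model sees the architecture without
# seeing every implementation line.
import ast

def signatures(source: str) -> list[str]:
    sigs = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            args = ", ".join(a.arg for a in node.args.args)
            sigs.append(f"def {node.name}({args}): ...")
    return sigs

code = "def add(a, b):\n    return a + b\n\ndef scale(x, factor=2):\n    return x * factor\n"
print(signatures(code))  # ['def add(a, b): ...', 'def scale(x, factor): ...']
```

Pasting the resulting stubs instead of full files can shrink a codebase's footprint dramatically while keeping its shape visible.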

The Rolling Context Technique

For long tasks that span many turns in a conversation, context accumulates. Old messages eat up space. The rolling context technique keeps things fresh.

Rolling Context Prompt

"Before we continue, summarize our work so far in 200 words: what we've decided, what's been built, and what's left to do. I'll use this summary to continue in a fresh context."

This is essentially creating a checkpoint. You capture the essential state, start fresh, and lose nothing important.
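The checkpoint loop can be sketched in a few lines. Here `summarize` is a stand-in for whatever chat API you use (the stub below is purely illustrative); the point is the shape of the cycle: checkpoint, reset, continue.

```python
# Rolling-context checkpoint: ask for a summary of the conversation so
# far, then start a fresh history seeded with only that summary.
CHECKPOINT_PROMPT = (
    "Before we continue, summarize our work so far in 200 words: "
    "what we've decided, what's been built, and what's left to do."
)

def roll_context(history: list[str], summarize) -> list[str]:
    summary = summarize(history + [CHECKPOINT_PROMPT])
    # The new history carries only the distilled state forward.
    return [f"Summary of prior work: {summary}"]

# A stub model for illustration; a real API call goes here.
fake_model = lambda msgs: "Decided on SQLite; schema built; API routes remain."
history = ["msg 1", "msg 2", "..."]  # long accumulated conversation
history = roll_context(history, fake_model)
print(len(history))  # 1: the context is compact again
```

Run the checkpoint whenever the conversation starts feeling sluggish or the model begins forgetting earlier decisions, rather than waiting until the window is actually full.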
