Few-Shot Mastery
Teach Claude any pattern with examples — from classification to structured extraction, with production code
What Is Few-Shot Prompting?
Few-shot prompting means giving Claude a few examples of the input-output pattern you want, then letting it generalize to new inputs. It is teaching by showing rather than explaining — and it works remarkably well because Claude can infer complex patterns from just 2-3 examples.
This is distinct from zero-shot prompting (no examples, just instructions) and one-shot prompting (a single example). For most tasks, 3-5 examples hits the sweet spot: enough to disambiguate the pattern without wasting context tokens.
Zero-Shot vs. Few-Shot — Side by Side
Here is the same task done both ways. Notice how few-shot produces more consistent, predictable output:
**Zero-shot:**

```
Classify this review's sentiment
as positive, negative, or neutral.

Review: "Decent food but the service was painfully slow."
```

```
# Claude might say:
# "Negative" or "Mixed" or
# "The sentiment is primarily negative
# with a positive element..." (verbose)
```

**Few-shot:**

```
Review: "Loved every minute!"
Sentiment: Positive

Review: "Worst meal I've ever had."
Sentiment: Negative

Review: "It was fine, nothing special."
Sentiment: Neutral

Review: "Decent food but the service was painfully slow."
Sentiment:
```

```
# Claude answers: Negative
# Consistent one-word answer
# matching the example format
```
The few-shot version produces exactly the format you showed — a single word. The zero-shot version might give a paragraph of analysis. Few-shot teaches both the logic and the format.
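If you prefer to template examples into a single prompt string rather than separate conversation turns, the pattern above can be assembled programmatically. A minimal sketch (the helper and variable names here are illustrative, not part of any library):

```python
# Few-shot example pairs: (review text, expected label)
EXAMPLES = [
    ("Loved every minute!", "Positive"),
    ("Worst meal I've ever had.", "Negative"),
    ("It was fine, nothing special.", "Neutral"),
]

def render_few_shot_prompt(review: str) -> str:
    """Render example pairs plus the new input into one prompt string."""
    lines = []
    for text, label in EXAMPLES:
        lines.append(f'Review: "{text}"')
        lines.append(f"Sentiment: {label}")
        lines.append("")  # blank line between pairs
    lines.append(f'Review: "{review}"')
    lines.append("Sentiment:")  # trailing cue so the model completes the label
    return "\n".join(lines)
```

The trailing `Sentiment:` line is the key detail: it cues the model to continue the established pattern with a single label.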
Few-Shot in the API
In the Claude API, few-shot examples go in the messages array as alternating user/assistant turns. Claude sees them as a conversation history and continues the pattern:
```python
import anthropic

client = anthropic.Anthropic()

def classify_sentiment(review: str) -> str:
    """Classify a review as Positive, Negative, or Neutral."""
    response = client.messages.create(
        model="claude-haiku-4-5-20251001",  # Haiku is enough for classification
        max_tokens=10,   # we only need one word
        temperature=0,   # deterministic
        system="Classify the sentiment of each review as exactly one word: Positive, Negative, or Neutral.",
        messages=[
            # Few-shot examples
            {"role": "user", "content": "The movie was absolutely fantastic!"},
            {"role": "assistant", "content": "Positive"},
            {"role": "user", "content": "I wasted two hours on this terrible film."},
            {"role": "assistant", "content": "Negative"},
            {"role": "user", "content": "It was okay, nothing special."},
            {"role": "assistant", "content": "Neutral"},
            # The real input
            {"role": "user", "content": review},
        ],
    )
    return response.content[0].text.strip()

# Use it
print(classify_sentiment("Great acting but terrible plot"))  # → Negative
print(classify_sentiment("A masterpiece of modern cinema"))  # → Positive
print(classify_sentiment("Meh"))                             # → Neutral
```
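The same alternating-turns pattern extends to structured extraction: make each assistant turn a JSON object, and Claude learns the schema from the examples. A hedged sketch — the example orders, field names, and the `build_extraction_messages` helper are all hypothetical illustrations, not a fixed API:

```python
import json

# Hypothetical few-shot pairs: raw text in, JSON schema out.
EXTRACTION_EXAMPLES = [
    ("Order #1234: 2x blue mugs, ship to Berlin",
     {"order_id": "1234", "quantity": 2, "item": "blue mug", "destination": "Berlin"}),
    ("Order #987: 1x desk lamp, ship to Austin",
     {"order_id": "987", "quantity": 1, "item": "desk lamp", "destination": "Austin"}),
]

def build_extraction_messages(text: str) -> list[dict]:
    """Build alternating user/assistant turns teaching the JSON schema."""
    messages = []
    for raw, parsed in EXTRACTION_EXAMPLES:
        messages.append({"role": "user", "content": raw})
        # Serialized JSON in the assistant turn shows the exact output format
        messages.append({"role": "assistant", "content": json.dumps(parsed)})
    # The real input goes last, continuing the pattern
    messages.append({"role": "user", "content": text})
    return messages
```

Pass the result as `messages=build_extraction_messages(...)` to `client.messages.create(...)`, exactly as in the sentiment classifier above, then `json.loads` the response text.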