Meet Claude
Who Claude is, how it thinks, the model lineup, pricing, and your first API call
Who Is Claude?
Claude is an AI assistant made by Anthropic — a company founded in 2021 by former OpenAI researchers who believed AI safety should be a first-class engineering priority, not an afterthought. Where most AI labs optimize for impressive demos, Anthropic optimizes for models you can actually trust.
The philosophy behind Claude is called Constitutional AI. Instead of training a model purely on human preference ratings (where annotators might reward confident-sounding wrong answers), Anthropic gave Claude a set of explicit principles — like a constitution — and trained it to reason against those principles. The model evaluates its own outputs: "Does this response follow the principles? If not, how should I revise it?" This self-evaluation loop runs thousands of times during training, shaping how Claude thinks — not just what it says.
The practical result is an AI that behaves differently from its competitors in three specific ways: it admits uncertainty instead of fabricating answers, it pushes back when asked to do something harmful instead of finding loopholes, and it maintains consistent reasoning across long, complex conversations instead of losing coherence. These aren't marketing claims — they're measurable properties that show up in benchmarks and real-world use.
Constitutional AI — How It Actually Works
Most AI models are trained with RLHF (Reinforcement Learning from Human Feedback) — humans rate outputs as good or bad, and the model learns to produce highly-rated responses. This works, but it has a flaw: it optimizes for what sounds right to a human rater, which is not the same as what is actually right. A confident wrong answer often scores higher than an honest "I'm not sure."
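The flaw can be sketched with a toy scoring function. This is purely illustrative — the numbers and the heuristic are invented for this sketch, not taken from any real rater data:

```python
def rater_score(response: str) -> float:
    """Toy stand-in for a human preference rater.

    Illustrates the RLHF flaw: confident-sounding phrasing tends to be
    rated higher than honest hedging, regardless of correctness.
    """
    if "I'm not sure" in response:
        return 0.4  # honest uncertainty often scores lower
    return 0.9      # confident phrasing scores higher, even if wrong

# A confident wrong answer outscores an honest hedge:
rater_score("The answer is definitely X.")        # 0.9
rater_score("I'm not sure, but it could be X.")   # 0.4
```

A model optimized against a rater like this learns confidence, not truth — which is exactly the gap Constitutional AI tries to close.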
Constitutional AI adds a layer. After the initial training, Claude is given a set of principles (the "constitution") and asked to critique its own outputs against those principles. Here is a simplified version of the loop:
# Simplified Constitutional AI training loop
# (This is conceptual — actual training uses billions of examples)

principles = [
    "Choose the response that is most helpful to the human.",
    "Choose the response that is most honest and truthful.",
    "Choose the response that is least likely to cause harm.",
    "If uncertain, acknowledge the uncertainty rather than guessing.",
    "Do not help with illegal or dangerous activities.",
]

for prompt in training_data:
    # Step 1: Generate initial response
    response = model.generate(prompt)

    # Step 2: Self-critique against principles
    critique = model.evaluate(response, principles)
    # "This response sounds confident but I'm not actually sure
    #  about the third claim. Principle 4 says I should
    #  acknowledge uncertainty."

    # Step 3: Revise based on critique
    revised = model.revise(response, critique)
    # The revised response now says "I believe X and Y,
    # but I'm not certain about Z — you may want to verify."

    # Step 4: Train on the revised (better) response
    model.learn(prompt, revised)
This self-critique loop runs during training, not during your conversations. By the time you use Claude, these principles are baked into the model's weights. The result: Claude's default behavior is to be honest, helpful, and harmless — without needing to be told.
Claude vs. ChatGPT vs. Gemini
All three are powerful. But they are built with different priorities, trained with different methods, and optimized for different outcomes. Understanding the differences helps you choose the right tool for each job — or use them in combination.
None of this makes Claude universally "better." The honest answer: use the right model for the right job. Claude's edge is trustworthiness under pressure, long-context fidelity, and agentic coding. ChatGPT has ecosystem breadth. Gemini has Google integration. Many teams use all three.
The Claude Model Lineup (2025)
Anthropic offers Claude in three tiers. Think of it like choosing tools — more power isn't always better if you're paying for it on every request. The key is matching model capability to task complexity.
In Claude.ai (the web interface), Sonnet is the default. You can switch to Opus for harder tasks. Via the API, you specify exactly which model to use — which is what we'll learn next.
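As a quick reference, each tier maps to a specific model ID string in the API. The IDs below are the ones used in this lesson's code examples; model IDs change as new versions ship, so verify the current strings in Anthropic's documentation:

```python
# Tier → API model ID, as used in this lesson's examples.
# Check Anthropic's docs for the current IDs before using in production.
CLAUDE_MODELS = {
    "haiku":  "claude-haiku-4-5-20251001",  # fastest, cheapest tier
    "sonnet": "claude-sonnet-4-6",          # balanced default
    "opus":   "claude-opus-4-6",            # maximum capability
}
```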
The HHH Framework
Everything about Claude traces back to three words: helpful, honest, harmless. Anthropic calls this the HHH framework — and it is not just marketing. These properties are baked into how the model is trained and evaluated. Every response Claude generates is implicitly measured against all three.
These three properties are in tension with each other. A maximally helpful model might give dangerous advice. A maximally harmless model might refuse everything. A maximally honest model might be blunt to the point of unhelpfulness. The art of Constitutional AI is training Claude to balance all three simultaneously — which is why it sometimes says "I can help with that, but here's an important caveat" instead of blindly complying or blindly refusing.
Your First Claude API Call
You do not need the API to use Claude — Claude.ai works with no setup. But the API is where the real power is: you control the model, the system prompt, the temperature, the tools, and every parameter. Here is the simplest possible API call using curl and then Python:
curl https://api.anthropic.com/v1/messages \
  -H "x-api-key: $ANTHROPIC_API_KEY" \
  -H "anthropic-version: 2023-06-01" \
  -H "content-type: application/json" \
  -d '{
    "model": "claude-sonnet-4-6",
    "max_tokens": 1024,
    "messages": [
      {"role": "user", "content": "Explain what an API is in one paragraph."}
    ]
  }'
# pip install anthropic
# export ANTHROPIC_API_KEY="sk-ant-..."
import anthropic

client = anthropic.Anthropic()  # reads API key from environment

message = client.messages.create(
    model="claude-sonnet-4-6",
    max_tokens=1024,
    messages=[
        {"role": "user", "content": "Explain what an API is in one paragraph."}
    ]
)
print(message.content[0].text)
# "An API (Application Programming Interface) is a set of rules
#  and protocols that allows different software applications to
#  communicate with each other..."
model — which Claude model to use (see tier table above)
max_tokens — maximum output length (required)
messages — the conversation (user and assistant turns)
system — system prompt (Lesson 4)
temperature — creativity dial (Lesson 3)
tools — function calling (Lesson 8)
This is the foundation. Every feature in this course — system prompts, temperature, tool use, agents — builds on this exact API call. You add parameters, but the shape stays the same: choose a model, send messages, get a response.
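To make that shape concrete, here is the same request written as a plain dict, with two of the optional parameters from the list above added. The parameter names (system, temperature) are real API parameters; the example conversation content is invented for illustration:

```python
# The same call shape, with optional parameters added. Sending it with the
# SDK is identical to before: client.messages.create(**request)
request = {
    "model": "claude-sonnet-4-6",
    "max_tokens": 1024,
    "system": "You are a concise technical writer.",  # system prompt (Lesson 4)
    "temperature": 0.2,  # lower = more focused output (Lesson 3)
    "messages": [
        # A multi-turn conversation: alternating user and assistant turns,
        # ending with the user turn you want answered.
        {"role": "user", "content": "What is an API?"},
        {"role": "assistant", "content": "An API is a contract between programs..."},
        {"role": "user", "content": "Give me a one-line analogy."},
    ],
}
```

Notice that adding parameters never changes the structure — the same three required fields, plus optional keys alongside them.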
How to Get Your API Key
To make API calls, you need a key from the Anthropic Console. Here is the process:
1. Sign in to the Anthropic Console (console.anthropic.com) and open the API Keys page.
2. Create a new key and copy it immediately (it starts with sk-ant-) — you will only see it once.
3. Store it in your shell environment: export ANTHROPIC_API_KEY="sk-ant-..."

Choosing the Right Model — A Decision Framework
The most common beginner mistake is defaulting to the most powerful model for everything. That is like driving a semi truck to get groceries. Here is a practical framework:
Choose Opus when the task requires sustained reasoning over long inputs (100K+ tokens), the cost of a wrong answer is high (legal analysis, medical triage, financial modeling), you need an AI agent that runs autonomously for hours, or the problem genuinely requires the smartest model available.

Choose Sonnet when you need a balance of quality and speed (most coding, writing, analysis, conversation). Sonnet handles 90% of real-world tasks at 1/5 the cost of Opus. Start here unless you have a reason not to.

Choose Haiku when speed and cost dominate: classification (is this email spam?), extraction (pull the date from this invoice), summarization (condense this to 3 bullets), or any pipeline processing thousands of items per hour.
def choose_model(task_type: str, input_length: int) -> str:
    """Route to the right Claude model based on the task."""
    if task_type in ("classification", "extraction", "summarization"):
        return "claude-haiku-4-5-20251001"  # fast + cheap
    elif input_length > 100_000 or task_type in ("legal", "agent", "research"):
        return "claude-opus-4-6"  # maximum capability
    else:
        return "claude-sonnet-4-6"  # default sweet spot

# Examples:
choose_model("classification", 50)   # → haiku (fast, cheap)
choose_model("coding", 2000)         # → sonnet (balanced)
choose_model("legal", 150000)        # → opus (maximum quality)