The Agent Loop

Every agent runs the same fundamental cycle: Perceive, Think, Act, Observe, Learn. This is the pattern behind every autonomous AI system — from Claude Code to self-driving cars. Here is how it works, in theory and in code.

The Five Steps

This is the heartbeat of every AI agent. Unlike a chatbot that responds once and stops, an agent cycles through these five steps continuously until its goal is achieved.

The Loop in Code

Here is a minimal but complete agent loop in Python. Every agent framework (LangChain, CrewAI, Claude Agent SDK) implements this same pattern under the hood:

# agent_loop.py — The fundamental agent pattern
import anthropic

client = anthropic.Anthropic()

def agent_loop(goal, tools, max_turns=10):
  # STEP 1: PERCEIVE — the goal seeds the agent's memory
  memory = [{"role": "user", "content": goal}]

  for turn in range(max_turns):
    # STEP 2: THINK — LLM reasons about what to do next,
    # seeing the goal plus everything that has happened so far
    response = client.messages.create(
      model="claude-sonnet-4-6",
      max_tokens=1024,
      system=f"You are an agent. Goal: {goal}",
      tools=tools,
      messages=memory
    )

    # No tool call = the agent decided the goal is achieved
    if response.stop_reason != "tool_use":
      return response.content[0].text

    # Record the assistant turn (reasoning + tool calls) exactly once
    memory.append({"role": "assistant", "content": response.content})

    results = []
    for block in response.content:
      if block.type == "tool_use":
        # STEP 3: ACT — run the tool for real
        # STEP 4: OBSERVE — capture what actually happened
        result = execute_tool(block.name, block.input)
        results.append({
          "type": "tool_result",
          "tool_use_id": block.id,
          "content": result
        })

    # STEP 5: LEARN — feed every result back into memory,
    # so the next turn builds on it
    memory.append({"role": "user", "content": results})

  return "Max turns reached without completion"
PERCEIVE — The agent gathers its current state: the goal plus its memory of every past action and result.
THINK — Claude receives everything (goal + memory + tools) and reasons about what to do next. This is the intelligence.
ACT — If Claude decided to use a tool, execute_tool() runs it for real — reading files, calling APIs, querying databases.
OBSERVE — The tool result is captured. Did the API return data? Did the file write succeed? The agent sees what happened.
LEARN — The tool result is appended to memory. On the next loop, Claude sees everything that happened and can build on it.
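The loop calls an execute_tool() helper that is never defined. A minimal sketch of what it might look like, assuming a plain dict of Python functions as the tool registry (TOOLS and get_weather are hypothetical stand-ins, not part of any SDK):

```python
import json

# Hypothetical tool registry: map tool names to plain Python functions
TOOLS = {
  "get_weather": lambda city: json.dumps({"city": city, "temp_f": 72}),
}

def execute_tool(name, tool_input):
  """Run a named tool; always return a string for the tool_result block."""
  try:
    return TOOLS[name](**tool_input)
  except KeyError:
    return f"Error: unknown tool '{name}'"
  except Exception as err:
    # Return the failure as text instead of raising, so the agent
    # can observe the error and decide what to do next
    return f"Error: {err}"
```

Returning error text rather than crashing is deliberate: the model sees the failure on the next turn and can retry or work around it.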

Why the Loop Matters

A chatbot calls the LLM once and returns the result. An agent calls the LLM in a loop, feeding each result back as context for the next decision. This is the difference between "answer a question" and "solve a problem."

Chatbot (1 call)
User: "What is the weather?"
AI: "I cannot check the weather."
Done. No tools. No loop.
Agent (3 loops)
Loop 1: Call weather API → get forecast
Loop 2: Check calendar → find outdoor meeting
Loop 3: Send email → "Bring an umbrella"
Problem solved autonomously.
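The contrast can be run end to end with a stubbed model, so no API key is needed (fake_llm, the weather lambda, and the "72F" check are toy stand-ins for a real LLM and tool):

```python
def fake_llm(history):
  # Pretend model: keeps asking for a tool until a forecast is in history
  if any("72F" in str(m) for m in history):
    return {"done": True, "text": "Bring sunglasses."}
  return {"done": False, "tool": "weather", "args": {"city": "Austin"}}

def chatbot(question):
  # One call, no tools, no loop: whatever comes back is the answer
  return fake_llm([question])

def agent(goal, tools, max_turns=5):
  # Loop: each tool result is fed back as context for the next decision
  history = [goal]
  for _ in range(max_turns):
    step = fake_llm(history)
    if step["done"]:
      return step["text"]
    history.append(tools[step["tool"]](**step["args"]))
  return "gave up"

tools = {"weather": lambda city: f"Forecast for {city}: 72F, sunny"}
print(agent("Plan my day", tools))  # → Bring sunglasses.
```

The chatbot gets one shot and stops; the agent keeps cycling until the feedback in its history lets it finish.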

The Stop Condition

Every loop needs a way to stop. Without a stop condition, your agent runs forever. There are three ways agents decide to stop:

1. Goal achieved — The LLM decides the task is complete and responds with text instead of a tool call. In the code above, this is the check on stop_reason: when it is not "tool_use", the loop exits and the final text is returned.
2. Max turns reached — Safety limit. The max_turns=10 parameter prevents runaway agents. If the agent cannot solve the problem in 10 loops, something is wrong — stop and report.
3. Unrecoverable error — A tool fails and there is no fallback. A good agent catches the error, logs what happened, and returns a useful message instead of crashing silently.
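All three stop conditions can be seen in isolation with a small wrapper around a single Think/Act/Observe step (run_with_stops and step_fn are illustrative names, not part of any SDK):

```python
def run_with_stops(step_fn, max_turns=10):
  """step_fn(turn) returns final text when done, or None to keep looping."""
  for turn in range(max_turns):
    try:
      result = step_fn(turn)                      # one cycle of the loop
    except RuntimeError as err:
      return f"Stopped on tool error: {err}"      # 3. unrecoverable error
    if result is not None:
      return result                               # 1. goal achieved
  return "Max turns reached without completion"   # 2. safety limit

# A fake step that "finishes" on the third turn:
print(run_with_stops(lambda turn: "done" if turn == 2 else None))  # → done
```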

Real-World Agent Loops

The same loop pattern powers vastly different systems. The only thing that changes is what tools are available and what the goal is:

CLAUDE CODE Perceive: read user request + codebase. Think: plan changes. Act: edit files, run tests. Observe: did tests pass? Learn: remember what worked. Loop until all tests green.
CUSTOMER SUPPORT Perceive: read ticket. Think: classify intent. Act: search knowledge base. Observe: is the answer relevant? Learn: draft response. Loop until resolution or escalation.
DATA PIPELINE Perceive: new data arrives. Think: what transformations needed? Act: query database, clean data. Observe: are results valid? Learn: log metrics. Loop until pipeline complete.
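One way to picture this: the loop function stays fixed and only a small configuration varies per system (the tool names below are hypothetical labels, not real tool definitions):

```python
# Same loop, different wiring: each system is just a goal plus a tool set
AGENT_CONFIGS = {
  "claude_code":      {"goal": "make all tests pass",
                       "tools": ["read_file", "edit_file", "run_tests"]},
  "customer_support": {"goal": "resolve the ticket",
                       "tools": ["read_ticket", "search_kb", "draft_reply"]},
  "data_pipeline":    {"goal": "publish clean data",
                       "tools": ["fetch_rows", "transform", "validate"]},
}

for name, cfg in AGENT_CONFIGS.items():
  print(f"{name}: {cfg['goal']} via {len(cfg['tools'])} tools")
```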

Common Loop Failures

Understanding how loops break makes you a better agent builder:

Infinite loop — Agent keeps calling tools but never makes progress toward the goal. Fix: max_turns limit + progress detection. If the last 3 tool results are identical, stop.
Context overflow — Memory grows so large the LLM cannot process it. Fix: Summarize old memory. Keep recent results full, compress older ones. Production agents use sliding windows.
Wrong tool selection — Agent calls the database when it needs web search, or vice versa. Fix: Clear tool descriptions in the system prompt. Each tool should say exactly what it does and when to use it.
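The progress-detection fix for infinite loops can be sketched in a few lines (no_progress is an illustrative helper, not a framework API):

```python
def no_progress(results, window=3):
  """True if the last `window` tool results are identical: likely stuck."""
  tail = results[-window:]
  return len(tail) == window and len(set(tail)) == 1

history = ["rows: 0", "rows: 0", "rows: 0"]
print(no_progress(history))  # → True, time to stop

history.append("rows: 120")  # new information = progress
print(no_progress(history))  # → False, keep looping
```

Checked once per turn inside the loop, this turns a silent infinite loop into an explicit early stop.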
Built with soul — likeone.ai