Throughout this book, we’ve examined individual patterns in isolation. In the real world, effective agent systems combine multiple patterns into composite architectures tailored to specific use cases. This chapter presents case studies showing how the patterns come together.
**The task:** Given a GitHub issue description, automatically navigate a codebase, understand the problem, and produce a working fix.
This is one of the most successful applications of agentic AI. Anthropic’s coding agent, which achieves state-of-the-art results on the SWE-bench Verified benchmark, uses a relatively simple architecture:
┌─────────────────────────────────────────┐
│ Coding Agent Architecture │
│ │
│ ┌─────────┐ ┌───────────────────┐ │
│ │ Issue │───►│ Agent Loop │ │
│ │ Input │ │ (ReAct Pattern) │ │
│ └─────────┘ └───────┬───────────┘ │
│ │ │
│ ┌──────────┼──────────┐ │
│ │ │ │ │
│ ┌────▼───┐ ┌───▼────┐ ┌───▼──┐ │
│ │ Search │ │ Read │ │ Edit │ │
│ │ Code │ │ File │ │ File │ │
│ └────────┘ └────────┘ └──────┘ │
│ │ │ │ │
│ ┌────▼───┐ ┌───▼────┐ ┌───▼──┐ │
│ │ List │ │ Run │ │ Bash │ │
│ │ Dir │ │ Tests │ │ Cmd │ │
│ └────────┘ └────────┘ └──────┘ │
│ │
└─────────────────────────────────────────┘
| Pattern | How It’s Used |
|---|---|
| Agent Loop | Core execution: reason about the issue, explore code, make changes, test |
| Tool Use | File operations, code search, test execution, bash commands |
| Planning | Implicit — the agent decides what files to examine and in what order |
| Reflection | Agent runs tests and uses failures to revise its changes |
| ACI Design | Critical — tools use absolute paths, return focused results |
“We actually spent more time optimizing our tools than the overall prompt.” — Anthropic
The tool interface design (Chapter 13) was the single most impactful investment. When they switched from relative to absolute file paths, tool-use errors dropped to near zero.
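This principle can also be enforced mechanically, before a bad path ever reaches the filesystem. A minimal sketch of one such tool, along the lines of the `read_file` used below (the validation logic and error format are assumptions, not Anthropic's implementation):

```python
import os

def read_file(path: str, max_bytes: int = 50_000) -> str:
    """Read a file, insisting on an absolute path -- a hedge against
    the ambiguous relative paths that caused most tool-use errors."""
    if not os.path.isabs(path):
        # Return an instructive error instead of raising, so the agent
        # loop can feed it back to the model and let it recover.
        return f"Error: '{path}' is a relative path. Use an absolute path."
    with open(path, "r", encoding="utf-8", errors="replace") as f:
        return f.read(max_bytes)
```

Returning errors as strings, rather than raising, keeps failures inside the conversation where the model can see and correct them.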
```python
# See code/case_study_coding_agent.py for the full implementation

def coding_agent(issue_description, repo_path, llm):
    """A simplified coding agent."""
    tools = [
        search_codebase,  # Find relevant files
        read_file,        # Read file contents
        edit_file,        # Make changes
        run_tests,        # Execute test suite
        run_command,      # Run bash commands
        list_directory,   # Explore file structure
    ]
    system = (
        f"You are a senior software engineer. Fix the following issue "
        f"in the repository at {repo_path}.\n\n"
        "Process:\n"
        "1. Understand the issue\n"
        "2. Find the relevant code\n"
        "3. Make the fix\n"
        "4. Run tests to verify\n"
        "5. If tests fail, iterate\n\n"
        "Always use absolute file paths."
    )
    return agent_loop(
        goal=issue_description,
        tools=tools,
        llm=llm,
        system=system,
        max_iterations=30,
    )
```
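The `agent_loop` called above is left undefined. A minimal sketch of the ReAct-style loop it assumes, using a toy text protocol (`call:` / `final:`) in place of a real structured tool-call API:

```python
def agent_loop(goal, tools, llm, system="", max_iterations=30):
    """Minimal ReAct loop: ask the model for an action, run the
    matching tool, append the observation, stop on 'final:'."""
    tool_map = {t.__name__: t for t in tools}
    transcript = [f"Goal: {goal}"]
    for _ in range(max_iterations):
        reply = llm(system, "\n".join(transcript))
        transcript.append(reply)
        if reply.startswith("final:"):
            return reply[len("final:"):].strip()
        if reply.startswith("call:"):
            # Toy protocol: 'call: tool_name argument'. Real systems
            # use the provider's structured tool-calling API instead.
            name, _, arg = reply[len("call:"):].strip().partition(" ")
            tool = tool_map.get(name)
            obs = tool(arg) if tool else f"Unknown tool: {name}"
            transcript.append(f"Observation: {obs}")
    return "Stopped: iteration limit reached."
```

Note the hard iteration cap: the loop always terminates, whether or not the model ever emits `final:`.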
**The task:** Handle customer inquiries with access to order history, the product catalog, and a knowledge base, plus the ability to take actions (issue refunds, update tickets).
┌────────────────────────────────────────────┐
│ Customer Support Agent Architecture │
│ │
│ ┌──────────┐ ┌───────────────────┐ │
│ │ Customer │───►│ Router │ │
│ │ Message │ │ (Classification) │ │
│ └──────────┘ └──┬──────┬──────┬──┘ │
│ │ │ │ │
│ ┌──────▼┐ ┌──▼───┐ ┌▼──────┐ │
│ │General│ │Refund│ │ Tech │ │
│ │ Q&A │ │Agent │ │Support│ │
│ └───┬───┘ └──┬───┘ └───┬───┘ │
│ │ │ │ │
│ ┌───▼─────────▼─────────▼───┐ │
│ │ Shared Tools │ │
│ │ (KB, Orders, Actions) │ │
│ └───────────────────────────┘ │
└────────────────────────────────────────────┘
| Pattern | How It’s Used |
|---|---|
| Routing | Classify inquiry type and direct to specialist |
| Tool Use | Access customer data, knowledge base, take actions |
| Guardrails | Parallel safety screening of messages |
| Human-in-the-Loop | Escalation for complex or sensitive cases |
| Memory | Remember customer context within the session |
Customer support is a natural fit for agents: inquiries fall into a small number of classifiable categories, the required actions (lookups, refunds, ticket updates) map cleanly onto tools, and ambiguous or sensitive cases can be escalated to a human.
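The router at the top of the diagram can be sketched as a single classification call with a fallback lane (the label set, prompt, and handler signatures here are illustrative):

```python
def route_inquiry(message: str, llm) -> str:
    """Classify a customer message into one of the three lanes
    shown in the diagram."""
    labels = {"general", "refund", "tech"}
    prompt = (
        "Classify this customer message as exactly one of: "
        f"general, refund, tech.\n\nMessage: {message}\n\n"
        "Answer with the single label only."
    )
    label = llm(prompt).strip().lower()
    # Fall back to the general lane on any unexpected output --
    # a cheap guardrail against malformed classifications.
    return label if label in labels else "general"

def handle_message(message: str, llm, handlers) -> str:
    """Dispatch to the specialist for the routed lane; handlers is a
    dict like {'general': fn, 'refund': fn, 'tech': fn}."""
    return handlers[route_inquiry(message, llm)](message)
```

The fallback matters: a router that crashes on an off-label classification turns a recoverable mistake into an outage.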
**The task:** Given a research topic, produce a comprehensive, well-sourced report by searching multiple information sources and synthesizing the findings.
┌─────────────────────────────────────────────┐
│ Research Assistant Architecture │
│ │
│ ┌──────────┐ ┌───────────────────────┐ │
│ │ Research │───►│ Orchestrator │ │
│ │ Question │ │ (Plan Research) │ │
│ └──────────┘ └──┬──────┬──────┬──────┘ │
│ │ │ │ │
│ ┌──────▼┐ ┌──▼───┐ ┌▼──────┐ │
│ │ Web │ │ArXiv │ │Domain │ │
│ │Search │ │Search│ │Search │ │
│ │Worker │ │Worker│ │Worker │ │
│ └───┬───┘ └──┬───┘ └───┬───┘ │
│ │ │ │ │
│ ┌───▼─────────▼─────────▼───┐ │
│ │ Synthesizer │ │
│ │ (Combine + Evaluate) │ │
│ └──────────┬────────────────┘ │
│ │ │
│ ┌──────────▼────────────────┐ │
│ │ Fact-Checker │ │
│ │ (Verify Claims) │ │
│ └───────────────────────────┘ │
└─────────────────────────────────────────────┘
| Pattern | How It’s Used |
|---|---|
| Orchestrator-Workers | Orchestrator plans research, workers search different sources |
| Parallelization | Multiple search workers run simultaneously |
| Evaluator-Optimizer | Fact-checker evaluates claims and the synthesizer revises |
| Tool Use | Web search, academic search, document retrieval |
| Planning | Orchestrator decides what subtopics to investigate |
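The orchestrator-workers fan-out can be sketched with a thread pool: each worker searches one source in parallel, and the synthesizer combines the findings (the worker and synthesizer signatures are assumptions):

```python
from concurrent.futures import ThreadPoolExecutor

def research(question, workers, synthesize):
    """Fan the question out to search workers in parallel, then hand
    all findings to a synthesizer for combination."""
    with ThreadPoolExecutor(max_workers=len(workers)) as pool:
        # pool.map preserves worker order, so findings line up with
        # the worker list even though the searches run concurrently.
        findings = list(pool.map(lambda w: w(question), workers))
    return synthesize(question, findings)
```

Since search workers are I/O-bound (network calls), threads are enough here; no multiprocessing is needed.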
**The task:** Automate data extraction, transformation, and loading (ETL) with an agent that can handle schema changes, data quality issues, and pipeline failures.
┌───────────────────────────────────────────┐
│ Data Pipeline Agent Architecture │
│ │
│ ┌──────────┐ ┌─────────────────────┐ │
│ │ Pipeline │──►│ Schema Analyzer │ │
│ │ Config │ │ (Understand data) │ │
│ └──────────┘ └────────┬────────────┘ │
│ │ │
│ ┌────────▼────────────┐ │
│ │ Transform Planner │ │
│ │ (Plan ETL steps) │ │
│ └────────┬────────────┘ │
│ │ │
│ ┌───────────┼────────────┐ │
│ │ │ │ │
│ ┌────▼───┐ ┌────▼────┐ ┌─────▼──┐ │
│ │Extract │ │Transform│ │ Load │ │
│ │ Agent │ │ Agent │ │ Agent │ │
│ └────────┘ └─────────┘ └────────┘ │
│ │ │
│ ┌────────▼────────────┐ │
│ │ Quality Validator │ │
│ │(Check data quality) │ │
│ └────────────────────┘ │
└───────────────────────────────────────────┘
| Pattern | How It’s Used |
|---|---|
| Prompt Chaining | Fixed E → T → L pipeline structure |
| Reflection | Quality validator checks output and triggers fixes |
| Tool Use | Database queries, file I/O, API calls |
| Planning | Transform planner generates SQL/Python transformations |
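The chained E → T → L flow with a reflective quality check might look like this (stage signatures are illustrative; a fuller agent would hand the validator's feedback to an LLM that repairs the transformation itself):

```python
def run_pipeline(rows, extract, transform, load, validate, max_retries=2):
    """Fixed E -> T -> L chain with a quality-validation step; on
    failure, re-run the transform with the validator's feedback
    (a simple Reflection loop with a hard retry cap)."""
    data = extract(rows)
    feedback = None
    for _ in range(max_retries + 1):
        out = transform(data, feedback)
        feedback = validate(out)  # empty list means the data is clean
        if not feedback:
            return load(out)
    raise ValueError(f"Quality check still failing: {feedback}")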
When designing a new agent system, use this decision tree:
Is the task simple enough for a single LLM call?
├── Yes → Use a single call with good prompting
└── No → Can the steps be defined in advance?
├── Yes → Use Prompt Chaining (Ch. 7)
│ └── Do different inputs need different handling?
│ └── Yes → Add Routing (Ch. 8)
└── No → Does the task need multiple perspectives?
├── Yes → Use Multi-Agent Collaboration (Ch. 6)
└── No → Does the task have clear quality criteria?
├── Yes → Use Evaluator-Optimizer (Ch. 11)
└── No → Use an Agent Loop with Planning (Ch. 2, 5)
└── Are subtasks independent?
└── Yes → Add Parallelization (Ch. 9)
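The tree can also be encoded directly, which is handy for documenting architecture decisions (the flag names paraphrase the questions above; pattern names match the chapters cited):

```python
def choose_pattern(simple_call, predefined_steps=False, varied_inputs=False,
                   multiple_perspectives=False, clear_criteria=False,
                   independent_subtasks=False):
    """Walk the decision tree and return the recommended pattern(s)."""
    if simple_call:
        return ["Single LLM call"]
    if predefined_steps:
        return ["Prompt Chaining"] + (["Routing"] if varied_inputs else [])
    if multiple_perspectives:
        return ["Multi-Agent Collaboration"]
    if clear_criteria:
        return ["Evaluator-Optimizer"]
    return ["Agent Loop + Planning"] + \
        (["Parallelization"] if independent_subtasks else [])
```

For example, `choose_pattern(False, predefined_steps=True, varied_inputs=True)` recommends Prompt Chaining plus Routing, matching the left branch of the tree.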
- **Over-engineering:** Building a multi-agent orchestration system when a simple prompt chain would suffice. Always start with the simplest solution.
- **Unclear agent roles:** Too many agents with unclear responsibilities, leading to redundant work and conflicting outputs.
- **Unbounded iteration:** Agents that keep trying to improve their output without meaningful progress. Always set hard limits on iterations.
- **Context stuffing:** Putting everything into the LLM context instead of using tools for targeted retrieval. This leads to poor performance and high costs.
- **Blind trust:** Assuming agent outputs are correct without verification. Always build in checks, especially for actions that affect external systems.
- **One model for everything:** Using the same model for every task. Route simple tasks to cheaper models and reserve expensive models for tasks that truly need them.
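That last point can be implemented as a tiered call: try the cheap model first and escalate only when a confidence check fails (all three callables here are assumed interfaces, not a real provider API):

```python
def tiered_call(prompt, cheap_llm, strong_llm, confident):
    """Route to the cheap model by default; escalate to the strong
    model only when the cheap answer fails a confidence check."""
    answer = cheap_llm(prompt)
    if confident(prompt, answer):
        return answer
    # Escalation path: pay for the expensive model only when needed.
    return strong_llm(prompt)
```

The confidence check can itself be cheap: a format validation, a self-reported score, or a small classifier.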
The agentic AI landscape is evolving at extraordinary speed, and several trends are likely to shape the near future.
The most successful agent implementations share three principles: start with the simplest architecture that works, invest heavily in tool design, and verify outputs before trusting them.
The patterns in this book are building blocks, not blueprints. The right architecture for your application will combine and adapt these patterns based on your specific requirements, constraints, and goals.
Build the right system for your needs. Test relentlessly. Iterate on your tools. And ship.