What Are Claude Code Agents?
Claude Code operates as an agentic AI system — it doesn't just answer questions, it autonomously plans and executes multi-step coding tasks. When you give Claude Code a complex instruction like "refactor the authentication system to use JWT tokens," it breaks the work down, reads relevant files, makes edits, runs tests, and iterates until the task is complete.
The key innovation is subagents: Claude Code can spawn independent child processes (via the Task tool) that each get their own context window and tool access. This enables parallel execution, context isolation, and specialization — the same architectural patterns used in distributed systems, applied to AI coding.
Without agents, Claude Code would be limited to a single conversation thread with one context window. Subagents let it tackle ambitious tasks — like migrating an entire codebase, running a comprehensive test suite, or researching a complex question — by dividing work across specialized child processes that report results back to the parent.
Agent Architecture: How It Works
Claude Code's agent system is organized as a hierarchy: a primary agent owns the conversation and delegates work to the subagents it spawns:

Primary agent (main session)
│
├── Subagent A (Explore: codebase research)
│     └── Returns findings to parent
├── Subagent B (Bash: run tests in parallel)
│     └── Returns test results
├── Subagent C (Plan: design architecture)
│     └── Returns implementation plan
└── Primary agent synthesizes results & makes edits
Key Architectural Concepts
- Context isolation — Each subagent has its own context window, preventing the parent from being overwhelmed by large search results or verbose output
- Parallel execution — Multiple subagents can run simultaneously, dramatically speeding up tasks like searching across multiple files or running independent tests
- Tool specialization — Each agent type has access to specific tools. An Explore agent can read files but not edit them. A Bash agent can run commands but not search the web.
- Result aggregation — Subagents return a single summary message to the parent, keeping the primary context clean and focused
- Background execution — Subagents can run in the background while the primary agent continues working on other tasks
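These concepts can be modeled in a few lines of plain JavaScript. This is a toy sketch, not Claude Code's internal API: each "subagent" function does its work in its own scope (context isolation), both run concurrently (parallel execution), and only compact summaries reach the parent (result aggregation).

```javascript
// Toy model of subagent behavior -- all names here are illustrative.

// Each "subagent" keeps verbose intermediate data in its own scope
// and returns only a short summary string to the parent.
async function exploreSubagent(files) {
  const matches = files.filter((f) => f.includes("auth"));
  return `Explore: found ${matches.length} auth-related files`;
}

async function bashSubagent() {
  const results = { passed: 41, failed: 1 }; // stand-in for test output
  return `Bash: ${results.passed} passed, ${results.failed} failed`;
}

// The parent runs both subagents in parallel and aggregates
// only their summaries into its own (small) context.
async function primaryAgent() {
  const files = ["src/auth/jwt.js", "src/api/users.js", "src/auth/guard.js"];
  return Promise.all([exploreSubagent(files), bashSubagent()]);
}
```

The parent never sees the raw file list or full test log, only the two summary lines, which is exactly why large searches and verbose commands are good candidates for delegation.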
Built-in Agent Types
Claude Code ships with several specialized subagent types, each designed for a specific class of work: Explore for fast codebase research, Bash for running commands, Plan for designing approaches, and general-purpose for broader tasks.
Plugins can register additional agent types. For example, the feature-dev plugin adds specialized agents for code review, code exploration, and architecture design.
The Task Tool: Spawning Subagents
The Task tool is how Claude Code spawns subagents. When the primary agent encounters a subtask that benefits from isolation or parallelism, it uses the Task tool to delegate:
// How Claude Code internally spawns a subagent
Task({
  subagent_type: "Explore",
  description: "Find authentication middleware",
  prompt: "Search the codebase for authentication middleware. Look for JWT validation, session management, and auth guards. Report file paths and key functions."
})
Task Tool Parameters
- `subagent_type` — Which agent type to use (Explore, Bash, Plan, general-purpose, or custom plugin agents)
- `description` — Short 3-5 word description of the task
- `prompt` — Detailed instructions for the subagent
- `run_in_background` — Set `true` to run without blocking the parent agent
- `model` — Override the model (e.g., `"haiku"` for fast, cheap tasks)
- `isolation` — Set `"worktree"` to give the agent an isolated copy of the repository
- `max_turns` — Limit the number of API round-trips
- `mode` — Permission mode (e.g., `"plan"` to require approval, `"bypassPermissions"` for CI)
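Several of these parameters compose naturally in a single call. The following is illustrative only, in the same style as the Task examples elsewhere in this article (the specific task and values are hypothetical):

```js
// Hypothetical: a cheap, backgrounded, isolated subagent
Task({
  subagent_type: "general-purpose",
  description: "Try dependency upgrade",
  prompt: "Upgrade express to the latest major version and run the test suite.",
  model: "haiku",            // cheap model for a mechanical task
  run_in_background: true,   // parent keeps working meanwhile
  isolation: "worktree",     // changes land in a disposable repo copy
  max_turns: 25              // cap API round-trips
})
```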
Parallel Subagent Execution
One of the most powerful patterns is spawning multiple subagents simultaneously:
// Claude Code spawns 3 subagents in parallel
// All run simultaneously, results returned as they complete
Task({
  subagent_type: "Explore",
  prompt: "Find all API route handlers"
})
Task({
  subagent_type: "Explore",
  prompt: "Find all database migration files"
})
Task({
  subagent_type: "Bash",
  prompt: "Run the test suite and report failures"
})
Subagents are most valuable when: (1) searching across a large codebase, (2) running tasks that produce verbose output, (3) executing independent tasks in parallel, or (4) isolating experimental changes in a worktree. For simple file reads or targeted searches, direct tool calls are faster.
Creating Custom Agents in Plugins
You can create custom agent types through Claude Code plugins. Custom agents are defined as Markdown files with YAML frontmatter in your plugin's agents/ directory.
Agent File Structure
my-plugin/
├── plugin.json
└── agents/
    ├── test-runner.md       # Custom agent definition
    ├── code-reviewer.md
    └── deployment-helper.md
Agent Definition Format
---
name: test-runner
description: |
Use this agent after writing code to run tests and validate changes.
Triggers when: code has been modified, user says "run tests", or after
feature implementation.
tools:
- Bash
- Read
- Glob
- Grep
model: haiku # Optional: cheaper model for simple tasks
---
You are a test execution specialist. When invoked:
1. Detect the project's test framework (jest, pytest, go test, etc.)
2. Run the full test suite
3. If tests fail, analyze the failure and suggest fixes
4. Report a summary: passed, failed, skipped counts
Always run tests from the project root. If no test command is obvious,
check package.json scripts, Makefile, or common test file patterns.
Frontmatter Fields
- `name` (required) — Unique identifier used with `subagent_type` in the Task tool
- `description` (required) — Tells Claude when to use this agent. The primary agent reads this to decide which subagent to spawn. Write it as trigger conditions, not just a summary.
- `tools` — List of tools the agent can access. Restricting tools prevents the agent from taking unintended actions.
- `model` — Override the model. Use `haiku` for fast tasks, `opus` for complex reasoning.
The agent's description field is critical — it's how Claude Code decides when to use your agent. Write it with explicit trigger phrases: "Use this agent when the user says X" or "Trigger when Y condition exists." Without clear triggers, your agent may never be invoked automatically.
Agent System Prompt (Body Content)
The Markdown body below the frontmatter becomes the agent's system prompt. This is the instruction set that controls how the subagent behaves. Structure it clearly:
- Role statement — What the agent is ("You are a test execution specialist")
- Step-by-step workflow — What to do when invoked
- Output format — How to structure the response returned to the parent
- Constraints — What not to do (e.g., "never modify code, only report findings")
The Claude Code Agent SDK
For building standalone applications powered by Claude Code's agent capabilities, Anthropic provides the Claude Code SDK (@anthropic-ai/claude-code).
Installation
$ npm install @anthropic-ai/claude-code
Basic Usage (TypeScript)
import { claude } from '@anthropic-ai/claude-code';
// Run Claude Code programmatically
const result = await claude({
  prompt: "Fix the failing test in src/auth.test.ts",
  workingDirectory: "/path/to/project",
  options: {
    allowedTools: ["Read", "Edit", "Bash", "Glob", "Grep"],
    maxTurns: 20,
  }
});
console.log(result.messages);
Python SDK
from anthropic_claude_code import claude
# Run Claude Code from Python
result = claude(
    prompt="Add comprehensive tests for the User model",
    working_directory="/path/to/project",
    allowed_tools=["Read", "Edit", "Bash"],
    max_turns=15
)
SDK Use Cases
- CI/CD Integration — Run Claude Code in your CI pipeline to auto-fix linting errors, generate missing tests, or review PRs
- Code Review Bots — Build a GitHub bot that reviews every PR with Claude Code's agent capabilities
- Migration Scripts — Automate large-scale codebase migrations with programmatic Claude Code invocations
- Custom IDE Extensions — Build VS Code extensions that leverage Claude Code's autonomous capabilities
- Batch Processing — Process multiple files or repositories in parallel with orchestrated agent sessions
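As a sketch of the CI/CD idea, a pipeline step can invoke Claude Code non-interactively. The workflow below is a hypothetical GitHub Actions fragment; the exact CLI flags and step layout are assumptions to verify against your Claude Code version:

```yaml
# Hypothetical CI step: ask Claude Code to fix lint errors headlessly.
- name: Auto-fix lint errors
  run: claude -p "Run the linter and fix any errors it reports" --allowedTools "Read,Edit,Bash"
  env:
    ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
```

Note the API key comes from the secrets store, in line with the guidance below about never placing secrets in prompts.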
Multi-Agent Patterns & Workflows
Pattern 1: Research-then-Act
The most common pattern. Spawn Explore subagents to gather information, then use the results to make informed changes.
Step 1: Research (parallel subagents)
Explore agent → "Find all auth-related files"
Explore agent → "Find all API route definitions"
Explore agent → "Check test coverage for auth module"
Step 2: Plan (based on research results)
Plan agent → "Design JWT migration based on these findings..."
Step 3: Execute (primary agent makes changes)
Primary agent writes code using full context
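In Task-tool terms, the first two steps of this pattern look roughly like the following (illustrative; the `<research summaries>` placeholder stands for the findings returned in step 1):

```js
// Step 1: fan out research in parallel
Task({
  subagent_type: "Explore",
  description: "Find auth files",
  prompt: "Find all auth-related files. Report file paths and key functions."
})
Task({
  subagent_type: "Explore",
  description: "Find API routes",
  prompt: "Find all API route definitions. Report file paths."
})

// Step 2: plan, feeding the research results into the prompt
Task({
  subagent_type: "Plan",
  description: "Design JWT migration",
  prompt: "Using these findings: <research summaries>, design a JWT migration plan."
})
```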
Pattern 2: Worktree Isolation
For experimental or risky changes, spawn a subagent in an isolated git worktree. If the changes don't work out, the worktree is discarded automatically.
Task({
  subagent_type: "general-purpose",
  prompt: "Try refactoring the auth system to use Passport.js. Run tests to verify.",
  isolation: "worktree" // isolated git copy
})
Pattern 3: Background Monitoring
Spawn long-running subagents in the background while continuing to work interactively.
Task({
  subagent_type: "Bash",
  prompt: "Run the full integration test suite",
  run_in_background: true
})
// Primary agent continues working...
// Gets notified when tests complete
Pattern 4: Fan-out / Fan-in
Distribute work across multiple subagents, then aggregate results. Ideal for large codebases.
// Fan-out: search across modules in parallel
Task({ prompt: "Search src/api/ for security issues" })
Task({ prompt: "Search src/auth/ for security issues" })
Task({ prompt: "Search src/database/ for security issues" })
// Fan-in: primary agent combines all findings into one report
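The fan-out / fan-in control flow maps directly onto `Promise.all`. This standalone sketch (illustrative names, not the Task tool itself) fans a scan function out over modules and folds each module's findings back into one report:

```javascript
// Simulated "subagent": scans one module and returns its findings.
async function scanModule(module, knownIssues) {
  return knownIssues.filter((issue) => issue.module === module);
}

// Fan-out: one concurrent scan per module.
// Fan-in: flatten every module's findings into a single report.
async function securityReport(modules, knownIssues) {
  const perModule = await Promise.all(
    modules.map((m) => scanModule(m, knownIssues))
  );
  return perModule.flat();
}
```

`Promise.all` preserves input order, so the combined report lists findings module by module even though the scans ran concurrently.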
Best Practices
Writing Effective Agent Prompts
- Be specific about output format — Tell the subagent exactly what to return: "Report file paths, function names, and a one-line summary for each finding"
- Set clear boundaries — Specify what the agent should and shouldn't do: "Only search, do not modify any files"
- Provide context — Include relevant information the subagent needs: file paths, error messages, or constraints
- Use the right agent type — Don't use a general-purpose agent for simple searches; use Explore for speed
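Combining these guidelines into one Task call (illustrative; the specific prompt is hypothetical):

```js
Task({
  subagent_type: "Explore",           // right agent type for a search
  description: "Audit error handling",
  prompt: "Search src/ for try/catch blocks that swallow errors silently. " +
          "Only search, do not modify any files. " +                          // clear boundary
          "For each finding, report the file path, the function name, " +
          "and a one-line summary."                                           // explicit output format
})
```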
Performance Tips
- Parallelize independent tasks — If tasks don't depend on each other, spawn them simultaneously
- Use `model: "haiku"` for simple agents — Saves cost and latency for straightforward tasks like running commands or simple searches
- Limit `max_turns` — Prevent subagents from going on long tangents by capping their API calls
- Prefer Explore over general-purpose for search — Explore agents are optimized for fast codebase navigation
- Use background execution wisely — Great for test suites and builds, but don't forget to check results
Security Considerations
- Restrict agent tools to only what's needed. A search agent shouldn't have `Edit` or `Bash` access.
- Use `isolation: "worktree"` for agents that might make destructive changes
- Set `mode: "plan"` for agents that need human approval before executing
- Never pass secrets in agent prompts — use environment variables instead
Real-World Examples
Example: Feature Development Agent
A common plugin pattern combines multiple agents into a guided feature development workflow:
---
name: feature-dev
description: |
Guided feature development with codebase understanding and architecture
focus. Use when user says "build a feature", "implement", or "add support for".
tools:
- Read
- Edit
- Write
- Bash
- Glob
- Grep
- Task # Can spawn sub-subagents!
---
You are a feature development specialist. Follow this workflow:
1. **Understand**: Use code-explorer subagent to analyze the existing codebase
2. **Design**: Use code-architect subagent to create an implementation plan
3. **Implement**: Write the code following the architectural plan
4. **Verify**: Run tests and use code-reviewer subagent to check quality
5. **Report**: Summarize what was built and any remaining TODOs
Example: Security Audit Agent
---
name: security-audit
description: |
Run a comprehensive security audit. Trigger on "security audit",
"check for vulnerabilities", "OWASP scan", or "security review".
tools:
- Read
- Glob
- Grep
- Bash
---
Scan the codebase for OWASP Top 10 vulnerabilities:
1. SQL injection (look for raw SQL queries, unsanitized inputs)
2. XSS (look for unescaped user content in templates)
3. Authentication issues (hardcoded secrets, weak hashing)
4. SSRF (user-controlled URLs in server-side requests)
5. Dependency vulnerabilities (run npm audit / pip audit)
Report findings with severity (Critical/High/Medium/Low),
file path, line number, and recommended fix.
Frequently Asked Questions
Can subagents spawn their own subagents?
Yes, if the subagent has access to the Task tool. This enables hierarchical agent workflows where a high-level orchestrator delegates to mid-level agents, which further delegate to specialized workers. However, keep nesting shallow (2-3 levels max) to avoid excessive token usage.
How many subagents can run in parallel?
There's no hard limit on concurrent subagents, but each one consumes API tokens and compute resources. In practice, 3-5 parallel subagents is the sweet spot for most tasks. Going beyond 10 rarely provides additional benefit and increases costs significantly.
Do subagents share context with the parent?
No. Each subagent starts with a fresh context window containing only the prompt you provide. It cannot see the parent's conversation history (unless the agent type supports "access to current context"). Results are returned as a single summary message.
What happens if a subagent fails?
The parent agent receives the error or partial results. It can then retry with a different approach, spawn a new subagent with adjusted instructions, or handle the failure gracefully. Subagents don't crash the parent session.
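The retry behavior can be modeled as a simple loop. This toy sketch (not Claude Code internals) shows a parent retrying a flaky "subagent" and degrading gracefully instead of crashing:

```javascript
// Toy retry loop: the parent retries a failing "subagent" a few
// times, then returns the error instead of crashing the session.
async function withRetry(subagent, maxAttempts) {
  let lastError;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await subagent(attempt); // parent could adjust the prompt here
    } catch (err) {
      lastError = err; // record the failure and try again
    }
  }
  return { error: String(lastError) }; // graceful failure, parent survives
}
```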
How do I debug subagent behavior?
Use run_in_background: true and check the output file for the full transcript. You can also use max_turns to limit execution and inspect intermediate steps. For plugin agents, test them interactively before deploying.
Are agents the same as MCP servers?
No. Agents and MCP servers serve different purposes. Agents are autonomous AI processes that plan and execute tasks. MCP servers provide tools and data sources that agents (and the primary session) can use. Think of agents as workers and MCP servers as toolboxes.