Claude Code Agents & Subagents: The Complete Guide

Everything you need to know about Claude Code's agent architecture, subagent spawning, the Agent SDK, and building autonomous multi-agent coding workflows.

Updated Feb 2026 · 15 min read · Intermediate

What Are Claude Code Agents?

Claude Code operates as an agentic AI system — it doesn't just answer questions, it autonomously plans and executes multi-step coding tasks. When you give Claude Code a complex instruction like "refactor the authentication system to use JWT tokens," it breaks the work down, reads relevant files, makes edits, runs tests, and iterates until the task is complete.

The key innovation is subagents: Claude Code can spawn independent child processes (via the Task tool) that each get their own context window and tool access. This enables parallel execution, context isolation, and specialization — the same architectural patterns used in distributed systems, applied to AI coding.

Why agents matter

Without agents, Claude Code would be limited to a single conversation thread with one context window. Subagents let it tackle ambitious tasks — like migrating an entire codebase, running a comprehensive test suite, or researching a complex question — by dividing work across specialized child processes that report results back to the parent.

Agent Architecture: How It Works

Claude Code's agent system has three layers:

Primary Agent (your main Claude Code session)
  │
  ├── Subagent A (Explore: codebase research)
  │    └── Returns findings to parent
  ├── Subagent B (Bash: run tests in parallel)
  │    └── Returns test results
  ├── Subagent C (Plan: design architecture)
  │    └── Returns implementation plan
  └── Primary agent synthesizes results & makes edits

Key Architectural Concepts

Three properties make this architecture work:

  1. Context isolation: each subagent starts with a fresh context window, so verbose intermediate output never pollutes the parent's context
  2. Parallel execution: independent subagents run simultaneously, turning serial work into concurrent work
  3. Specialization: each agent type carries a system prompt and tool set tuned to one class of work

Built-in Agent Types

Claude Code ships with several specialized subagent types, each designed for a specific class of work:

🔍 Explore: Fast codebase exploration. Finds files, searches code, answers questions about architecture.
💻 Bash: Command execution. Runs git, npm, docker, tests, and other terminal operations.
📐 Plan: Architecture design. Creates implementation plans, identifies critical files, considers trade-offs.
🔧 general-purpose: Versatile agent for research, multi-step tasks, and complex questions requiring multiple tool calls.

Plugins can register additional agent types. For example, the feature-dev plugin adds specialized agents for code review, code exploration, and architecture design:

🔎 code-explorer: Deep analysis of existing features by tracing execution paths and mapping architecture.
📝 code-reviewer: Reviews code for bugs, security issues, and quality with confidence-based filtering.
🏗 code-architect: Designs feature architectures with implementation blueprints and build sequences.

The Task Tool: Spawning Subagents

The Task tool is how Claude Code spawns subagents. When the primary agent encounters a subtask that benefits from isolation or parallelism, it uses the Task tool to delegate:

// How Claude Code internally spawns a subagent
Task({
  subagent_type: "Explore",
  description: "Find authentication middleware",
  prompt: "Search the codebase for authentication middleware. Look for JWT validation, session management, and auth guards. Report file paths and key functions."
})
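
Conceptually, subagent_type is a routing key: it selects which system prompt and tool set the child starts with. As a rough illustration of that idea (a sketch, not Claude Code's actual internals — the registry and its contents here are invented for the example), the dispatch can be pictured as a lookup:

```python
# Illustrative sketch only -- not Claude Code's real implementation.
# Each agent type pairs a system prompt with an allowed tool set.
AGENT_TYPES = {
    "Explore": {"system": "You explore codebases.", "tools": ["Read", "Glob", "Grep"]},
    "Bash":    {"system": "You run shell commands.", "tools": ["Bash"]},
}

def spawn_subagent(subagent_type: str, prompt: str) -> dict:
    """Resolve the agent type and build the child's starting context."""
    spec = AGENT_TYPES[subagent_type]
    return {
        "system_prompt": spec["system"],
        "allowed_tools": spec["tools"],
        # The prompt is the child's ONLY context -- nothing is inherited.
        "messages": [{"role": "user", "content": prompt}],
    }

ctx = spawn_subagent("Explore", "Find authentication middleware")
print(ctx["allowed_tools"])
```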

Task Tool Parameters

  subagent_type: which agent type handles the task ("Explore", "Bash", "Plan", "general-purpose", or a plugin-registered type)
  description: a short human-readable label for the task
  prompt: the full instructions the subagent receives; because context is not shared, this is the subagent's only input
  run_in_background: when true, the subagent runs asynchronously while the parent continues working
  isolation: runs the subagent in an isolated environment such as a git worktree

Parallel Subagent Execution

One of the most powerful patterns is spawning multiple subagents simultaneously:

// Claude Code spawns 3 subagents in parallel
// All run simultaneously, results returned as they complete

Task({
  subagent_type: "Explore",
  prompt: "Find all API route handlers"
})

Task({
  subagent_type: "Explore",
  prompt: "Find all database migration files"
})

Task({
  subagent_type: "Bash",
  prompt: "Run the test suite and report failures"
})
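
The Task calls above are Claude Code internals, but the execution shape is ordinary fan-out concurrency. A minimal asyncio sketch with stand-in workers (the explore function here is a hypothetical placeholder, not a real API):

```python
import asyncio

# Stand-in for a subagent: takes a prompt, returns its findings.
async def explore(prompt: str) -> str:
    await asyncio.sleep(0.01)  # simulates independent work
    return f"findings for: {prompt}"

async def main() -> list:
    # Spawn three "subagents" at once; gather preserves call order.
    return await asyncio.gather(
        explore("Find all API route handlers"),
        explore("Find all database migration files"),
        explore("Run the test suite and report failures"),
    )

results = asyncio.run(main())
print(len(results), "results collected")
```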
When to use subagents

Subagents are most valuable when: (1) searching across a large codebase, (2) running tasks that produce verbose output, (3) executing independent tasks in parallel, or (4) isolating experimental changes in a worktree. For simple file reads or targeted searches, direct tool calls are faster.

Creating Custom Agents in Plugins

You can create custom agent types through Claude Code plugins. Custom agents are defined as Markdown files with YAML frontmatter in your plugin's agents/ directory.

Agent File Structure

my-plugin/
  plugin.json
  agents/
    test-runner.md       # Custom agent definition
    code-reviewer.md
    deployment-helper.md

Agent Definition Format

---
name: test-runner
description: |
  Use this agent after writing code to run tests and validate changes.
  Triggers when: code has been modified, user says "run tests", or after
  feature implementation.
tools:
  - Bash
  - Read
  - Glob
  - Grep
model: haiku  # Optional: cheaper model for simple tasks
---

You are a test execution specialist. When invoked:

1. Detect the project's test framework (jest, pytest, go test, etc.)
2. Run the full test suite
3. If tests fail, analyze the failure and suggest fixes
4. Report a summary: passed, failed, skipped counts

Always run tests from the project root. If no test command is obvious,
check package.json scripts, Makefile, or common test file patterns.
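
The file format is plain Markdown with a YAML frontmatter block between --- markers. A loader for such files (a sketch, not Claude Code's actual parser — it uses naive line parsing for simple scalar keys rather than a full YAML library) only needs to split the two parts:

```python
def parse_agent_file(text: str) -> tuple:
    """Split '---' frontmatter from the Markdown body (naive sketch)."""
    _, frontmatter, body = text.split("---", 2)
    meta = {}
    for line in frontmatter.strip().splitlines():
        # Only handles top-level "key: value" pairs; multi-line values
        # (like description: |) and lists would need a real YAML parser.
        if ":" in line and not line.startswith((" ", "-")):
            key, _, value = line.partition(":")
            meta[key.strip()] = value.strip() or None
    return meta, body.strip()

sample = """---
name: test-runner
model: haiku
---
You are a test execution specialist."""

meta, system_prompt = parse_agent_file(sample)
print(meta["name"], "->", system_prompt)
```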

Frontmatter Fields

  name: unique identifier for the agent type
  description: when Claude Code should invoke the agent; this doubles as the trigger mechanism
  tools: the tools the subagent is allowed to use
  model: optional model override (e.g. haiku for cheap, simple tasks)

Description is the trigger mechanism

The agent's description field is critical — it's how Claude Code decides when to use your agent. Write it with explicit trigger phrases: "Use this agent when the user says X" or "Trigger when Y condition exists." Without clear triggers, your agent may never be invoked automatically.

Agent System Prompt (Body Content)

The Markdown body below the frontmatter becomes the agent's system prompt. This is the instruction set that controls how the subagent behaves. Structure it clearly:

  1. Role statement — What the agent is ("You are a test execution specialist")
  2. Step-by-step workflow — What to do when invoked
  3. Output format — How to structure the response returned to the parent
  4. Constraints — What not to do (e.g., "never modify code, only report findings")

The Claude Code Agent SDK

For building standalone applications powered by Claude Code's agent capabilities, Anthropic provides the Claude Code SDK (@anthropic-ai/claude-code).

Installation

$ npm install @anthropic-ai/claude-code

Basic Usage (TypeScript)

import { claude } from '@anthropic-ai/claude-code';

// Run Claude Code programmatically
const result = await claude({
  prompt: "Fix the failing test in src/auth.test.ts",
  workingDirectory: "/path/to/project",
  options: {
    allowedTools: ["Read", "Edit", "Bash", "Glob", "Grep"],
    maxTurns: 20,
  }
});

console.log(result.messages);

Python SDK

from anthropic_claude_code import claude

# Run Claude Code from Python
result = claude(
    prompt="Add comprehensive tests for the User model",
    working_directory="/path/to/project",
    allowed_tools=["Read", "Edit", "Bash"],
    max_turns=15
)


Multi-Agent Patterns & Workflows

Pattern 1: Research-then-Act

The most common pattern. Spawn Explore subagents to gather information, then use the results to make informed changes.

Step 1: Research (parallel subagents)
  Explore agent → "Find all auth-related files"
  Explore agent → "Find all API route definitions"
  Explore agent → "Check test coverage for auth module"

Step 2: Plan (based on research results)
  Plan agent → "Design JWT migration based on these findings..."

Step 3: Execute (primary agent makes changes)
  Primary agent writes code using full context
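
The handoff between steps is just data flow: research results become input to the planning prompt, and the plan becomes input to execution. With hypothetical stand-in functions for each stage:

```python
import asyncio

# Hypothetical stand-ins for the three stages of research-then-act.
async def explore(prompt: str) -> str:
    return f"[findings: {prompt}]"

async def plan(research: list) -> str:
    # The planning prompt embeds everything the research stage returned.
    return "JWT migration plan based on " + ", ".join(research)

async def research_then_act() -> str:
    findings = await asyncio.gather(      # Step 1: parallel research
        explore("auth-related files"),
        explore("API route definitions"),
        explore("auth test coverage"),
    )
    design = await plan(list(findings))   # Step 2: plan from findings
    return f"executing: {design}"         # Step 3: primary agent acts

summary = asyncio.run(research_then_act())
print(summary)
```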

Pattern 2: Worktree Isolation

For experimental or risky changes, spawn a subagent in an isolated git worktree. If the changes don't work out, the worktree is discarded automatically.

Task({
  subagent_type: "general-purpose",
  prompt: "Try refactoring the auth system to use Passport.js. Run tests to verify.",
  isolation: "worktree"  // isolated git copy
})
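
Under the hood, worktree isolation maps onto ordinary git worktree commands: a separate checkout that shares the repository's object store but has its own working directory. A manual sketch of the same lifecycle (the demo sets up a throwaway repo so it can run anywhere; paths and branch names are illustrative):

```shell
# Demo in a throwaway repo; in practice the worktree commands below run
# inside your own repository.
top=$(mktemp -d)
git init -q "$top/main" && cd "$top/main"
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m init

# Create an isolated checkout on a throwaway branch
git worktree add -b experiment/auth "$top/auth-experiment"

# ... the subagent's experimental edits and test runs happen there ...

# Discard the experiment: remove the checkout, then drop the branch
git worktree remove --force "$top/auth-experiment"
git branch -D experiment/auth
```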

Pattern 3: Background Monitoring

Spawn long-running subagents in the background while continuing to work interactively.

Task({
  subagent_type: "Bash",
  prompt: "Run the full integration test suite",
  run_in_background: true
})
// Primary agent continues working...
// Gets notified when tests complete

Pattern 4: Fan-out / Fan-in

Distribute work across multiple subagents, then aggregate results. Ideal for large codebases.

// Fan-out: search across modules in parallel
Task({ prompt: "Search src/api/ for security issues" })
Task({ prompt: "Search src/auth/ for security issues" })
Task({ prompt: "Search src/database/ for security issues" })

// Fan-in: primary agent combines all findings into one report
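
The fan-in half is plain aggregation: each worker returns a list of findings, and the parent flattens them into one report. A thread-pool sketch with a hypothetical stand-in scanner:

```python
from concurrent.futures import ThreadPoolExecutor

MODULES = ["src/api/", "src/auth/", "src/database/"]

def scan(module: str) -> list:
    # Hypothetical stand-in for a security-scanning subagent.
    return [f"{module}: no issues found"]

# Fan-out: one worker per module; map preserves input order.
with ThreadPoolExecutor(max_workers=3) as pool:
    per_module = pool.map(scan, MODULES)
    # Fan-in: flatten every worker's findings into a single report.
    report = [finding for findings in per_module for finding in findings]

print(len(report), "findings collected")
```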

Best Practices

Writing Effective Agent Prompts

  1. Be specific about output format — Tell the subagent exactly what to return: "Report file paths, function names, and a one-line summary for each finding"
  2. Set clear boundaries — Specify what the agent should and shouldn't do: "Only search, do not modify any files"
  3. Provide context — Include relevant information the subagent needs: file paths, error messages, or constraints
  4. Use the right agent type — Don't use a general-purpose agent for simple searches; use Explore for speed


Real-World Examples

Example: Feature Development Agent

A common plugin pattern combines multiple agents into a guided feature development workflow:

---
name: feature-dev
description: |
  Guided feature development with codebase understanding and architecture
  focus. Use when user says "build a feature", "implement", or "add support for".
tools:
  - Read
  - Edit
  - Write
  - Bash
  - Glob
  - Grep
  - Task    # Can spawn sub-subagents!
---

You are a feature development specialist. Follow this workflow:

1. **Understand**: Use code-explorer subagent to analyze the existing codebase
2. **Design**: Use code-architect subagent to create an implementation plan
3. **Implement**: Write the code following the architectural plan
4. **Verify**: Run tests and use code-reviewer subagent to check quality
5. **Report**: Summarize what was built and any remaining TODOs

Example: Security Audit Agent

---
name: security-audit
description: |
  Run a comprehensive security audit. Trigger on "security audit",
  "check for vulnerabilities", "OWASP scan", or "security review".
tools:
  - Read
  - Glob
  - Grep
  - Bash
---

Scan the codebase for OWASP Top 10 vulnerabilities:

1. SQL injection (look for raw SQL queries, unsanitized inputs)
2. XSS (look for unescaped user content in templates)
3. Authentication issues (hardcoded secrets, weak hashing)
4. SSRF (user-controlled URLs in server-side requests)
5. Dependency vulnerabilities (run npm audit / pip audit)

Report findings with severity (Critical/High/Medium/Low),
file path, line number, and recommended fix.
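
As a concrete illustration of check 3, a scan for hardcoded secrets can start from a few regex patterns (a deliberately small sketch; real audits use dedicated scanners with far larger rule sets):

```python
import re

# Minimal illustrative patterns -- real scanners use many more rules.
SECRET_PATTERNS = [
    re.compile(r"""(?i)(api[_-]?key|secret|password)\s*[:=]\s*['"][^'"]+['"]"""),
    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
]

def find_secrets(source: str) -> list:
    """Return the lines that look like hardcoded credentials."""
    return [
        line.strip()
        for line in source.splitlines()
        if any(p.search(line) for p in SECRET_PATTERNS)
    ]

sample = 'db_password = "hunter2"\ntimeout = 30\nAPI_KEY: "sk-test-123"'
for hit in find_secrets(sample):
    print("possible secret:", hit)
```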

Frequently Asked Questions

Can subagents spawn their own subagents?

Yes, if the subagent has access to the Task tool. This enables hierarchical agent workflows where a high-level orchestrator delegates to mid-level agents, which further delegate to specialized workers. However, keep nesting shallow (2-3 levels max) to avoid excessive token usage.

How many subagents can run in parallel?

There's no hard limit on concurrent subagents, but each one consumes API tokens and compute resources. In practice, 3-5 parallel subagents is the sweet spot for most tasks. Going beyond 10 rarely provides additional benefit and increases costs significantly.

Do subagents share context with the parent?

No. Each subagent starts with a fresh context window containing only the prompt you provide. It cannot see the parent's conversation history (unless the agent type supports "access to current context"). Results are returned as a single summary message.

What happens if a subagent fails?

The parent agent receives the error or partial results. It can then retry with a different approach, spawn a new subagent with adjusted instructions, or handle the failure gracefully. Subagents don't crash the parent session.

How do I debug subagent behavior?

Use run_in_background: true and check the output file for the full transcript. You can also use max_turns to limit execution and inspect intermediate steps. For plugin agents, test them interactively before deploying.

Are agents the same as MCP servers?

No. Agents and MCP servers serve different purposes. Agents are autonomous AI processes that plan and execute tasks. MCP servers provide tools and data sources that agents (and the primary session) can use. Think of agents as workers and MCP servers as toolboxes.