Claude Code GitHub Actions: Automate Code Review, Testing & CI/CD
Claude Code is not limited to your terminal. With Claude Code GitHub Actions, you can bring AI-powered code review, automated testing, security scanning, and documentation generation directly into your CI/CD pipelines. Every pull request gets an intelligent review. Every push triggers context-aware analysis. And it all runs without human intervention.
This guide covers everything you need to set up Claude Code automation in GitHub Actions -- from basic PR review workflows to advanced multi-step CI/CD pipelines with MCP server integration.
Why Use Claude Code in GitHub Actions?
Running Claude Code locally is great for interactive development, but CI/CD integration unlocks a different category of value. When Claude Code runs as part of your GitHub Actions pipeline, every pull request and push benefits from AI analysis automatically -- no developer needs to remember to run it.
- Consistent code review -- every PR gets reviewed against the same standards, regardless of reviewer availability
- Faster feedback loops -- developers get AI review comments within minutes of opening a PR
- Scalable automation -- works for teams of 2 or 200 without additional headcount
- Quality enforcement -- combine Claude Code analysis with hooks and existing linting for multi-layered quality gates
- Documentation that stays current -- auto-generate docs whenever code changes
The Official Claude Code GitHub Action
Anthropic provides the official anthropics/claude-code-action for integrating Claude Code into GitHub Actions. This action handles the non-interactive execution, GitHub API integration for posting comments, and secure credential management.
Basic setup
Create a workflow file at .github/workflows/claude-code.yml:
```yaml
name: Claude Code Review
on:
  pull_request:
    types: [opened, synchronize]
  issue_comment:
    types: [created]
jobs:
  claude-review:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      pull-requests: write
      issues: write
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - uses: anthropics/claude-code-action@v1
        with:
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
          model: "claude-sonnet-4-5"
          prompt: |
            Review this pull request. Focus on:
            - Correctness and potential bugs
            - Security vulnerabilities
            - Performance implications
            - Code style consistency
```
This workflow triggers whenever a PR is opened or updated. Claude Code analyzes the diff and posts review comments directly on the pull request.
How it works under the hood
The action runs Claude Code in non-interactive mode (headless). It pipes the prompt along with the PR diff context to Claude, collects the output, and uses the GitHub API to post structured review comments. Claude sees the full diff, can reference specific files and line numbers, and can post inline suggestions using GitHub's suggestion syntax.
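For teams that want to see the moving parts, the same flow can be approximated by hand with a plain `run:` step. This is an illustrative sketch, not the action's actual implementation -- it assumes the Claude Code CLI is installed on the runner and uses its headless print mode (`claude -p`) plus the `gh` CLI:

```yaml
# Sketch only: a hand-rolled approximation of what the action automates.
- name: Headless review (illustrative)
  env:
    ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
    GH_TOKEN: ${{ github.token }}  # lets the gh CLI post the comment
  run: |
    # Capture the PR diff against the base branch
    git diff origin/${{ github.base_ref }}...HEAD > pr.diff
    # Pipe the diff into Claude Code's non-interactive print mode
    cat pr.diff | claude -p "Review this diff for bugs and security issues." > review.md
    # Post the result back to the PR
    gh pr comment ${{ github.event.pull_request.number }} --body-file review.md
```

The official action layers structured inline comments, suggestion syntax, and credential handling on top of this basic loop.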
Setting Up API Keys and Secrets
Before any Claude Code GitHub Actions workflow can run, you need to configure your API credentials securely.
Step 1: Get your API key
Create an API key from the Anthropic Console. For CI/CD usage, create a dedicated key separate from your personal development key so you can track costs and rotate credentials independently.
Step 2: Add the secret to GitHub
- Go to your repository on GitHub
- Navigate to Settings > Secrets and variables > Actions
- Click New repository secret
- Name: `ANTHROPIC_API_KEY`; Value: your API key
Never hardcode API keys in workflow YAML files. Always use GitHub secrets. Repository secrets are encrypted and only exposed to workflows running in that repository.
Step 3: Set permissions
Your workflow needs specific permissions to interact with pull requests. At minimum, include:
```yaml
permissions:
  contents: read        # Read repository files
  pull-requests: write  # Post review comments
  issues: write         # Comment on issues (if using issue triggers)
```
Complete Workflow Examples
Here are production-ready workflow configurations for common Claude Code automation patterns.
Automated code review on pull requests
The most popular use case. Claude reviews every PR and posts specific, actionable feedback:
```yaml
name: AI Code Review
on:
  pull_request:
    types: [opened, synchronize, reopened]
jobs:
  review:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      pull-requests: write
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - uses: anthropics/claude-code-action@v1
        with:
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
          model: "claude-sonnet-4-5"
          max_tokens: 4096
          prompt: |
            Review this PR diff thoroughly. For each issue found:
            1. Identify the file and line number
            2. Explain the problem clearly
            3. Suggest a specific fix
            Focus areas: bugs, security, performance, readability.
            Skip nitpicks on formatting -- our linter handles that.
            If the code looks good, say so briefly.
```
Automated test generation
Generate or update tests whenever source code changes. This pairs well with a testing workflow:
```yaml
name: AI Test Generation
on:
  pull_request:
    paths:
      - 'src/**/*.ts'
      - 'src/**/*.tsx'
jobs:
  generate-tests:
    runs-on: ubuntu-latest
    permissions:
      contents: write
      pull-requests: write
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0
          ref: ${{ github.head_ref }}
      - uses: actions/setup-node@v4
        with:
          node-version: '20'
      - run: npm ci
      - uses: anthropics/claude-code-action@v1
        with:
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
          prompt: |
            Look at the changed files in this PR.
            For any source file that was modified or added:
            - Check if a corresponding test file exists
            - If tests exist, update them to cover new code paths
            - If no tests exist, create them following the
              existing test patterns in this repository
            - Run the tests to make sure they pass
          allowed_tools: "Read,Write,Edit,Bash"
```
Security scanning
Use Claude Code to perform intelligent security analysis that goes beyond pattern matching:
```yaml
name: AI Security Scan
on:
  pull_request:
    types: [opened, synchronize]
jobs:
  security:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      pull-requests: write
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - uses: anthropics/claude-code-action@v1
        with:
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
          prompt: |
            Perform a security review of this PR. Check for:
            - SQL injection, XSS, CSRF vulnerabilities
            - Hardcoded secrets, API keys, or credentials
            - Insecure deserialization or input handling
            - Authentication/authorization bypass risks
            - Dependency vulnerabilities in any new packages
            - Insecure file operations or path traversal
            Rate each finding as Critical, High, Medium, or Low.
            If no security issues found, confirm the PR is clean.
```
Documentation generation
Keep documentation in sync with code changes automatically:
```yaml
name: AI Documentation
on:
  push:
    branches: [main]
    paths:
      - 'src/**'
      - 'lib/**'
jobs:
  docs:
    runs-on: ubuntu-latest
    permissions:
      contents: write
    steps:
      - uses: actions/checkout@v4
      - uses: anthropics/claude-code-action@v1
        with:
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
          prompt: |
            Review the recently changed files and update
            relevant documentation:
            - Update JSDoc/TSDoc comments if function
              signatures changed
            - Update README sections if public API changed
            - Update CHANGELOG.md with a summary of changes
          allowed_tools: "Read,Write,Edit,Bash"
      - uses: stefanzweifel/git-auto-commit-action@v5
        with:
          commit_message: "docs: auto-update from Claude Code"
```
Claude Code as a CI/CD Agent
Beyond single-task workflows, you can use Claude Code as a full CI/CD agent that makes decisions based on context. This is where the Claude Code SDK becomes particularly powerful -- you can orchestrate multi-step pipelines where Claude decides what to do next based on the results of previous steps.
Responding to issue comments
Configure Claude Code to respond when someone tags it in a PR comment:
```yaml
name: Claude Code Agent
on:
  issue_comment:
    types: [created]
jobs:
  respond:
    if: contains(github.event.comment.body, '@claude')
    runs-on: ubuntu-latest
    permissions:
      contents: write
      pull-requests: write
      issues: write
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - uses: anthropics/claude-code-action@v1
        with:
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
          trigger_phrase: "@claude"
          allowed_tools: "Read,Write,Edit,Bash"
```
Now team members can write comments like `@claude fix the type error in utils.ts` or `@claude add tests for the new endpoint`, and Claude will respond directly on the PR with code changes.
Multi-step pipeline
Chain Claude Code steps together for complex automation:
```yaml
name: Full CI Pipeline with Claude
on:
  pull_request:
    types: [opened, synchronize]
jobs:
  analyze:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      pull-requests: write
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - uses: actions/setup-node@v4
        with:
          node-version: '20'
      - run: npm ci
      - name: Run tests and capture output
        id: tests
        run: npm test -- --reporter=json > test-results.json 2>&1 || true
      - name: Run linter
        id: lint
        run: npm run lint -- --format=json > lint-results.json 2>&1 || true
      - uses: anthropics/claude-code-action@v1
        with:
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
          prompt: |
            Analyze this PR with the following context:
            Test results are in test-results.json.
            Lint results are in lint-results.json.
            1. Summarize what this PR does
            2. Report any test failures and suggest fixes
            3. Report any lint issues
            4. Review the code for bugs and security issues
            5. Post a single comprehensive review comment
```
Cost Management and Rate Limiting
Running Claude Code in CI/CD can generate significant API costs if unchecked. Here are strategies to keep costs predictable without sacrificing value.
Filter by file paths
Only trigger on relevant file changes:
```yaml
on:
  pull_request:
    paths:
      - 'src/**'
      - 'lib/**'
      - '!**/*.test.ts'  # Skip test-only changes
      - '!**/*.md'       # Skip documentation
      - '!.github/**'    # Skip workflow changes
```
Set token limits
Use max_tokens to cap response length and control per-invocation cost:
```yaml
- uses: anthropics/claude-code-action@v1
  with:
    anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
    max_tokens: 2048  # Shorter responses = lower cost
```
Choose the right model
Not every task needs the most powerful model. Use `claude-sonnet-4-5` for routine reviews and reserve `claude-opus-4-6` for complex analysis. See the pricing guide for detailed cost comparisons.
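If you want to gate model choice per PR, one option is to select the model with a workflow expression. A sketch -- the `deep-review` label name here is made up for illustration:

```yaml
- uses: anthropics/claude-code-action@v1
  with:
    anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
    # A hypothetical "deep-review" label opts a PR into the larger model;
    # everything else gets the cheaper default
    model: ${{ contains(github.event.pull_request.labels.*.name, 'deep-review') && 'claude-opus-4-6' || 'claude-sonnet-4-5' }}
```

The `&& … || …` pattern is GitHub Actions' expression-level ternary, so the choice is made at workflow evaluation time with no extra step.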
Rate limiting strategies
- Concurrency limits -- use GitHub's `concurrency` key to prevent parallel runs on the same PR
- Skip drafts -- add `if: github.event.pull_request.draft == false` to skip draft PRs
- Label gating -- only trigger on PRs with a specific label like `ai-review`
- Scheduled runs -- batch non-urgent analysis into scheduled workflows instead of per-push
Combining draft-skipping, label gating, and concurrency limits:

```yaml
jobs:
  review:
    if: |
      github.event.pull_request.draft == false &&
      contains(github.event.pull_request.labels.*.name, 'ai-review')
    concurrency:
      group: claude-review-${{ github.event.pull_request.number }}
      cancel-in-progress: true
```
GitHub Actions vs Local Claude Code
Understanding when to use each mode helps you get the most value from Claude Code automation.
- Interactive development (local) -- exploratory coding, prototyping, debugging sessions, multi-turn conversations where you iterate on an approach. Use the Playground or terminal directly.
- Automated review (CI) -- standardized code review, security scanning, documentation updates. Runs on every PR without developer intervention.
- Agent tasks (CI) -- responding to @mentions in PRs, automated fixes for common issues, generating boilerplate. Claude acts autonomously within defined boundaries.
- Complex analysis (local) -- architectural decisions, large refactors, performance optimization where you need back-and-forth dialogue.
A productive setup uses both: local Claude Code for development with best practices, and GitHub Actions for the automated safety net that catches issues before they reach main.
Best Practices for Non-Interactive Mode
Claude Code behaves differently in CI than in your terminal. Here are the key considerations for reliable Claude Code automation in GitHub Actions.
Write self-contained prompts
In interactive mode, you can clarify and iterate. In CI, the prompt must contain everything Claude needs:
- Be explicit about what to analyze (the diff, specific files, test results)
- Specify the output format (inline comments, summary comment, JSON)
- Define the scope -- what to focus on and what to skip
- Include your CLAUDE.md conventions so Claude follows your team's standards
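Putting those four points together, a self-contained review prompt might look like the following. The scope and output-format details are illustrative -- adapt them to your repository:

```yaml
prompt: |
  Context: you are reviewing the diff of this pull request.
  Follow the conventions defined in CLAUDE.md.
  Scope: only files changed in this PR; skip generated files
  and anything under vendor/.
  Output format: one summary comment, then at most five inline
  comments on the highest-impact issues. For each issue, give
  the file, the line number, the problem, and a concrete fix.
  If nothing significant is wrong, say so in one sentence.
```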
Handle failures gracefully
API errors, rate limits, and timeouts will happen. Make your workflows resilient:
```yaml
- uses: anthropics/claude-code-action@v1
  continue-on-error: true  # Don't block the pipeline
  timeout-minutes: 10      # Prevent runaway costs
  with:
    anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
    prompt: "Review this PR for bugs and security issues."
```
Use allowed_tools carefully
In CI, you control which tools Claude can use. For read-only review, limit tools to prevent unintended changes:
- Read-only review: don't pass `allowed_tools` -- the action defaults to safe read-only analysis
- Code fixes: set `allowed_tools: "Read,Write,Edit,Bash"` when you want Claude to make changes
- Full agent: include all tools when Claude needs to run tests, install packages, etc.
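Side by side, the first two configurations look like this (the prompts are illustrative):

```yaml
# Read-only review: no allowed_tools, so Claude can only analyze
- uses: anthropics/claude-code-action@v1
  with:
    anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
    prompt: "Review this PR for bugs; do not modify any files."

# Fix mode: explicitly grant file-editing and shell access
- uses: anthropics/claude-code-action@v1
  with:
    anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
    allowed_tools: "Read,Write,Edit,Bash"
    prompt: "Fix the failing lint rules in this PR, then re-run the linter."
```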
Integrating MCP Servers in CI
Claude Code supports MCP servers in GitHub Actions, enabling it to interact with external services during CI runs. This opens up powerful automation patterns.
Common CI MCP integrations
- Database MCP -- validate migrations against a staging database
- Sentry/error tracking MCP -- correlate code changes with production errors
- Notion/Linear MCP -- auto-update project management when PRs are merged
- Custom API MCP -- integrate with internal tools and services
Configuring MCP in workflows
```yaml
- uses: anthropics/claude-code-action@v1
  with:
    anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
    mcp_config: |
      {
        "mcpServers": {
          "postgres": {
            "command": "npx",
            "args": ["-y", "@modelcontextprotocol/server-postgres"],
            "env": {
              "DATABASE_URL": "${{ secrets.STAGING_DB_URL }}"
            }
          }
        }
      }
    prompt: |
      Review the database migrations in this PR.
      Connect to the staging database to verify
      schema compatibility.
```
Be cautious with MCP servers in CI. Only connect to staging or read-only databases. Never give CI workflows write access to production systems through MCP. Audit which secrets your workflows have access to.
Combining with Skills and CLAUDE.md
Your CI workflows can leverage the same skills and CLAUDE.md configuration that you use locally. The Claude Code action automatically picks up your repository's CLAUDE.md, so your team's coding standards apply in CI just as they do in local development.
To reference skills in your CI prompt:
```yaml
- uses: anthropics/claude-code-action@v1
  with:
    anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
    prompt: |
      Use the review skill defined in .claude/skills/review.md
      to review this pull request. Follow all conventions
      defined in CLAUDE.md.
```
This ensures consistent behavior whether a developer triggers a review locally or the CI pipeline does it automatically. See the best practices guide for how to structure skills effectively.
Troubleshooting Common Issues
Claude's review is too verbose or too brief
Adjust your prompt specificity. Tell Claude exactly how detailed you want the review: "Post 1-3 comments on the most important issues only" or "Provide a comprehensive line-by-line review."
Action fails with permission errors
Ensure your workflow has the correct permissions block. The action needs `pull-requests: write` to post comments and `contents: read` to access the code. For organization repositories, check that GitHub Actions has permission to use the required scopes.
Rate limiting or timeout
If reviews time out on large PRs, increase timeout-minutes and consider breaking the review into focused passes (security, bugs, style) rather than one monolithic prompt.
Claude doesn't see the full context
Use `fetch-depth: 0` in your checkout step so the full git history is available. For large repos, you can use a shallow clone, but make sure the base branch is fetched for accurate diff comparison.
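For very large repositories where a full clone is too slow, a shallow checkout works as long as the base branch is also fetched so the diff has both sides. A sketch:

```yaml
- uses: actions/checkout@v4
  with:
    fetch-depth: 1  # shallow clone of the PR head only
- name: Fetch base branch for diff comparison
  run: git fetch --depth=1 origin ${{ github.base_ref }}
```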
Getting Started Checklist
- Create an `ANTHROPIC_API_KEY` secret in your repository settings
- Add a basic PR review workflow at `.github/workflows/claude-review.yml`
- Start with `claude-sonnet-4-5` and read-only review -- no `allowed_tools`
- Filter triggers to relevant file paths to control costs
- Write a clear CLAUDE.md so Claude follows your standards in CI
- Test on a few PRs and iterate on prompt wording
- Gradually expand to security scanning, test generation, and agent mode
- Monitor API costs in the Anthropic Console and adjust limits as needed
Start small. A basic PR review workflow takes 5 minutes to set up and immediately provides value. You can always add security scanning, test generation, and GitHub integration features later as you learn what works for your team.