Prelude

Claude Code does not have to be a tool used only at the desk. Imagine reviewing a pull request on a phone, wishing Claude could just look at the changes and flag anything off. No terminal. No local Claude Code. The review sits there, waiting.

Anthropic ships an official GitHub Action, anthropics/claude-code-action@v1, that runs Claude Code inside your CI/CD pipeline. Not a wrapper. Not a thin API call. The actual Claude Code runtime, executing in a GitHub Actions runner, reading your repository, analysing your diffs, and posting its findings directly to your pull requests.

This changes how teams work with code review, issue triage, documentation, and release management. Over the past several months we have built a collection of workflow recipes that cover the most common scenarios. This guide walks through each of them, from basic setup to advanced multi-step pipelines.

If you have been using Claude Code locally and want to extend that same intelligence into your CI/CD pipeline, this is where to start.

The Problem

Code review is one of the most valuable and most time-consuming parts of software development. A thorough review catches bugs, enforces standards, and shares knowledge across the team. But reviewers are human. They get tired. They skim long diffs.

They miss edge cases in unfamiliar code.

The same applies to other repetitive tasks in the pull request lifecycle. Writing release notes. Updating documentation when APIs change. Generating tests for new code. Triaging issues and turning feature requests into implementations.

Each of these tasks is important, each is tedious, and each follows patterns that an AI can learn.

Before GitHub Actions integration, the only way to get Claude's input on a PR was to copy and paste the diff into a Claude Code session, ask for a review, and then manually transfer the feedback back to GitHub. That works for one PR. It does not scale to a team processing dozens of PRs per week.

The deeper problem is that local Claude Code usage creates a knowledge silo. The developer who used Claude to write the code benefits from Claude's analysis. The reviewers who come later do not. Automated Claude Code reviews mean every PR gets the same thorough analysis, regardless of who opened it or when they opened it.

The goal is to make Claude Code a permanent member of the review team. Always available, always thorough, always consistent.

The Journey

What Claude Code GitHub Actions Enables

The Claude Code GitHub Action runs Claude Code inside a GitHub Actions runner. This means Claude has full access to your repository, your file structure, your git history, and your diff context. It is not a simple API call that sends a prompt and gets a response. It is the full Claude Code runtime, with tool use, file reading, and multi-step reasoning.

This opens up several categories of automation.

Automated code review. Claude analyses every pull request when it is opened or updated. It reads the diff, understands the context of the changes, and posts specific, actionable feedback as PR comments.

Issue-to-PR workflows. When someone mentions @claude in an issue, the action triggers Claude to read the issue, understand the request, write the implementation, and open a pull request.

Documentation updates. When code changes land, Claude detects which documentation pages are affected and updates them automatically.

Test generation. Claude reads new code in a PR and writes corresponding tests, committing them to the same branch.

Release notes. When a release is tagged, Claude summarises all changes since the last release and generates human-readable release notes.

Each of these is a workflow recipe built on the same foundation. Once you understand the setup, building new recipes is straightforward.

Setting Up the Action

The basic setup requires two things. The GitHub Action definition in your repository, and an Anthropic API key stored as a GitHub secret.

First, add your API key. Go to your repository settings, then Secrets and variables, then Actions. Create a new repository secret called ANTHROPIC_API_KEY and paste your key.

Then create a workflow file. The simplest possible workflow looks like this.

name: Claude Code Review
on:
  pull_request:
    types: [opened, synchronize]

jobs:
  review:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: anthropics/claude-code-action@v1
        with:
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
          prompt: "Review this PR for bugs, security issues, and code quality. Be specific and actionable."

Save this as .github/workflows/claude-review.yml in your repository. Every time a pull request is opened or updated, Claude Code will run, review the changes, and post its analysis as a comment on the PR.

The prompt field is where you tell Claude what to do. Think of it as the system prompt for the CI context. Be specific about what you want reviewed and how you want the feedback formatted.

Required Secrets and Permissions

The ANTHROPIC_API_KEY secret is the minimum requirement. Store it as a repository secret, not an environment variable, and never commit it to your codebase.

For workflows that need to create PRs, push commits, or post comments, you also need to configure GitHub token permissions. The default GITHUB_TOKEN that Actions provides is usually sufficient, but you may need to adjust its permissions in your workflow.

permissions:
  contents: write
  pull-requests: write
  issues: read

If your organisation uses fine-grained personal access tokens, create one with the specific permissions your workflow needs and store it as a separate secret.

For enterprise deployments, consider creating a dedicated service account for Claude Code Actions. This keeps the API usage separate from individual developer accounts and makes it easier to track costs and manage access. More about organisational controls is covered in the Claude Code Organisation Rollout Playbook.

Recipe 1. Automated PR Code Review

This is the most commonly used recipe. Every pull request gets an automated review within minutes of being opened.

name: Claude PR Review
on:
  pull_request:
    types: [opened, synchronize]
    paths-ignore:
      - '*.md'
      - 'docs/**'

jobs:
  review:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - uses: anthropics/claude-code-action@v1
        with:
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
          prompt: |
            Review this pull request thoroughly. Focus on:
            1. Logic errors and potential bugs
            2. Security vulnerabilities (SQL injection, XSS, auth bypasses)
            3. Performance issues (N+1 queries, unnecessary allocations, blocking calls)
            4. Error handling gaps
            5. API contract changes that could break clients

            Format your review as:
            ## Summary
            One paragraph overview of the changes.

            ## Issues Found
            List each issue with file path, line number, severity (critical/warning/suggestion), and explanation.

            ## Positive Notes
            Highlight anything done particularly well.

            Do NOT comment on style, formatting, or naming unless it creates a genuine readability problem.

A few details matter here. The fetch-depth: 0 in the checkout step ensures Claude has access to the full git history, not just the latest commit. This lets Claude understand the context of the changes better. The paths-ignore filter prevents Claude from reviewing documentation-only changes, which saves API costs.

The prompt explicitly tells Claude what to focus on and what to ignore. Without this guidance, Claude will comment on everything, including style preferences that should be handled by a linter. Being explicit about ignoring style issues dramatically improves the signal-to-noise ratio of the reviews.

Recipe 2. Issue-to-PR Automation

This recipe turns GitHub issues into pull requests. When someone creates an issue with enough detail, they can mention @claude in a comment, and Claude will read the issue, implement the changes, and open a PR.

name: Claude Issue to PR
on:
  issue_comment:
    types: [created]

jobs:
  implement:
    if: contains(github.event.comment.body, '@claude')
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - uses: anthropics/claude-code-action@v1
        with:
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
          prompt: |
            Read the issue description and the comment that triggered this workflow.
            Implement the requested changes.
            Create a new branch, commit your changes, and open a pull request.
            Reference the issue number in the PR description.
            If the request is unclear or too large, post a comment explaining what
            clarification you need instead of implementing.

The @claude mention pattern is powerful. It means anyone on the team can trigger Claude by simply writing a comment. The if condition on the job ensures the workflow only runs when someone explicitly asks for Claude's help, rather than on every comment.

This works best for small, well-defined tasks. "Add a validation check for empty strings in the signup form." "Update the rate limit from 100 to 200 requests per minute." "Add a unit test for the edge case described in this issue." These are tasks that take a developer five minutes of context-switching but that Claude can handle entirely autonomously.

For larger tasks, Claude will often post a comment back asking for clarification rather than making assumptions. This is the right behaviour, and the prompt encourages it.

Recipe 3. Automated Documentation Updates

When code changes, documentation goes stale. This recipe detects changes to specific files and asks Claude to update the corresponding documentation.

name: Claude Docs Update
on:
  push:
    branches: [main]
    paths:
      - 'src/api/**'
      - 'src/lib/**'

jobs:
  update-docs:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 2
      - uses: anthropics/claude-code-action@v1
        with:
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
          prompt: |
            Compare the current commit with the previous commit.
            Identify any changes to public APIs, function signatures, or behaviour.
            Update the corresponding documentation files in the docs/ directory.
            If new APIs were added, create new documentation pages following the
            existing format.
            Commit your changes and open a PR titled "docs: update for recent API changes".

This runs on pushes to main, not on PRs. The idea is that once code merges, the documentation update follows immediately. The paths filter ensures it only triggers when source code changes, not when someone edits a README.

The fetch-depth: 2 is important here. Claude needs to see the previous commit to understand what changed. A depth of 2 gives it the current commit and its parent, which is enough for a diff comparison.

Recipe 4. Test Generation

This recipe asks Claude to write tests for new code introduced in a pull request.

name: Claude Test Generation
on:
  pull_request:
    types: [opened]
    paths:
      - 'src/**'
      - '!src/**/*.test.*'
      - '!src/**/*.spec.*'

jobs:
  generate-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - uses: anthropics/claude-code-action@v1
        with:
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
          prompt: |
            Analyse the new or modified source files in this PR.
            For each file that lacks corresponding test coverage, write tests.
            Follow the existing test patterns in the repository.
            Use the same testing framework already in use.
            Commit the tests to this branch.
            Post a comment summarising what tests were added and what they cover.

The path filters are critical. The !src/**/*.test.* exclusion prevents an infinite loop where Claude generates tests, the push triggers the workflow again, Claude generates more tests, and so on. Only source file changes trigger the workflow, never test file changes.

This runs only on opened events, not synchronize. If Claude generates tests when the PR is opened, subsequent pushes to the branch should not trigger another round of test generation. The developer can manually request more tests by commenting @claude if needed.

Recipe 5. Release Notes Generation

When you tag a release, this recipe generates human-readable release notes by analysing all commits since the last release.

name: Claude Release Notes
on:
  release:
    types: [created]

jobs:
  release-notes:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - uses: anthropics/claude-code-action@v1
        with:
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
          prompt: |
            Generate release notes for this release.
            Compare the current tag with the previous tag.
            Categorise changes into: Features, Bug Fixes, Performance, Breaking Changes, and Other.
            Write each entry in plain language that a user (not a developer) can understand.
            Include the PR number for each change where available.
            Update the release body on GitHub with the generated notes.

The fetch-depth: 0 is essential here because Claude needs the full git history to find the previous tag and compare commits. Without it, the shallow clone would not contain enough history.

This recipe saves significant time. Writing release notes manually means reading through dozens of commits, understanding each change, and translating technical descriptions into user-facing language. Claude does this in under a minute.

The @claude Mention Pattern

Several of these recipes use the @claude mention pattern, where team members trigger Claude by mentioning it in a comment. This pattern deserves its own discussion because it is remarkably flexible.

The basic trigger checks for @claude in a comment body.

if: contains(github.event.comment.body, '@claude')

You can extend this to support different commands.

if: |
  contains(github.event.comment.body, '@claude review') ||
  contains(github.event.comment.body, '@claude test') ||
  contains(github.event.comment.body, '@claude fix')

Then in your prompt, you can parse the command and adjust behaviour accordingly. This turns your GitHub issues and PRs into a conversational interface with Claude.

The mention pattern also works well with team workflows. Junior developers can request Claude's review before asking a senior developer. Product managers can ask Claude to explain what a PR changes in non-technical language. QA engineers can ask Claude to identify edge cases that need manual testing.

Configuring CLAUDE.md for CI Context

Your repository's CLAUDE.md file is read by Claude Code in both local and CI contexts. But CI often needs different instructions than local development. There are several approaches to handling this.

The simplest is to add a CI-specific section to your CLAUDE.md.

## CI/CD Context

When running in GitHub Actions:
- Do not modify configuration files unless explicitly asked
- Always create a new branch for changes, never commit directly to main
- Format PR comments using GitHub-flavoured Markdown
- Include file paths as clickable links in review comments

A more sophisticated approach is to use the prompt field in your workflow to override or supplement the CLAUDE.md instructions.

- uses: anthropics/claude-code-action@v1
  with:
    anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
    prompt: |
      You are running in CI mode. Follow the repository's CLAUDE.md,
      but prioritise these additional instructions:
      - Never run destructive commands
      - Do not install new dependencies
      - Time-box your analysis to the changed files only

If you have built hooks for your local workflow, note that those hooks run locally in your development environment. They do not automatically apply in the GitHub Actions runner. If you need similar guardrails in CI, implement them as part of the workflow definition or as pre-steps in your job.

Security Considerations

Running AI in your CI/CD pipeline introduces security surface area that you need to think about carefully.

API key management. Your ANTHROPIC_API_KEY should be stored as a repository secret or, better, as an organisation-level secret. Never hardcode it, never log it, never pass it as a workflow input that could be visible in logs.

Permission scoping. The GitHub token permissions should be as narrow as possible. If your workflow only posts comments, it does not need contents: write. If it does not interact with issues, remove issues: read. The principle of least privilege applies here just as it does anywhere else.

Fork safety. By default, pull requests from forks do not have access to repository secrets. This is important because it means untrusted contributors cannot trigger your Claude workflow and consume your API credits. If you need to support fork PRs, use the pull_request_target event instead of pull_request, but be extremely careful about what code the workflow checks out.

Prompt injection. If your workflow reads user-supplied content (issue descriptions, PR bodies, comments) and passes it to Claude, consider whether an attacker could craft content that manipulates Claude's behaviour. The prompt field in your workflow should include clear instructions that take precedence over any user-supplied content.

Secrets in output. Claude's responses are posted as comments on your PRs. Ensure your prompt instructs Claude never to include secrets, API keys, or sensitive configuration values in its output.

Cost Management

Claude Code GitHub Actions consume API credits every time they run. Without controls, costs can escalate quickly on active repositories.

Filter by event type. Not every PR event needs a review. Use types: [opened] instead of types: [opened, synchronize] if you only want a review on the first push, not on every subsequent commit.

Filter by path. Use paths and paths-ignore to skip workflows for changes that do not need AI review. Documentation updates, CI config changes, and dependency bumps rarely need Claude's analysis.

Use appropriate models. For simple reviews that check formatting and obvious errors, you can specify a faster, less expensive model. For deep architectural reviews, use the full model. Pass the model via claude_args: --model claude-sonnet-4-6 in your workflow's with: block.

Set concurrency limits. GitHub Actions supports concurrency groups that prevent multiple instances of the same workflow from running simultaneously.

concurrency:
  group: claude-review-${{ github.event.pull_request.number }}
  cancel-in-progress: true

This ensures that if a developer pushes three commits in quick succession, only the last push triggers a review. The previous runs are cancelled.

Monitor usage. Track your Anthropic API usage through their dashboard. Set up billing alerts so you know when usage spikes. Consider setting a monthly budget and pausing the workflow when it is reached.

Debugging Failed Actions

When a Claude Code Action fails, the debugging process follows a predictable pattern.

Check the Actions log. Go to your repository's Actions tab, find the failed run, and read the logs. The Claude Code Action logs its output, including any errors from the Anthropic API.

Common errors. The most frequent issues are these.

Authentication failures. The API key is missing, expired, or incorrectly stored. Verify the secret name matches exactly what your workflow references.

Rate limiting. If you trigger too many workflows simultaneously, you may hit the Anthropic API rate limit. Add concurrency controls and stagger your workflows.

Timeout. Complex reviews on large PRs can exceed the default timeout. You can increase the max_tokens parameter or split your workflow into smaller, focused steps.

Checkout issues. If Claude cannot find the files it expects, the checkout step may have failed or used insufficient fetch-depth. Always check that the checkout step completed successfully.

Permission errors. If Claude tries to push commits but gets a permission error, check your permissions block and ensure the GITHUB_TOKEN has the necessary access.

Iterative debugging. Use workflow_dispatch as an additional trigger during development. This lets you manually trigger the workflow from the Actions tab without creating a real PR.

on:
  pull_request:
    types: [opened]
  workflow_dispatch:
    inputs:
      prompt:
        description: 'Custom prompt for testing'
        required: false

Advanced Patterns

Once you have the basics working, there are several advanced patterns worth exploring.

Multi-step workflows. Chain multiple Claude invocations in a single workflow. The first step reviews code, the second generates tests, the third updates documentation. Each step builds on the previous one.

jobs:
  review:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: anthropics/claude-code-action@v1
        id: review
        with:
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
          prompt: "Review this PR and identify any issues."
      - uses: anthropics/claude-code-action@v1
        if: steps.review.outputs.response != ''
        with:
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
          prompt: "Based on the review findings, generate tests that cover the identified edge cases."

Conditional invocation. Use job outputs and conditions to run Claude only when specific criteria are met. For example, only run a security review when files in the auth/ directory change.

jobs:
  check-paths:
    runs-on: ubuntu-latest
    outputs:
      security-review: ${{ steps.filter.outputs.auth }}
    steps:
      - uses: dorny/paths-filter@v2
        id: filter
        with:
          filters: |
            auth:
              - 'src/auth/**'
              - 'src/middleware/auth*'

  security-review:
    needs: check-paths
    if: needs.check-paths.outputs.security-review == 'true'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: anthropics/claude-code-action@v1
        with:
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
          prompt: "Perform a thorough security review of the authentication and authorisation changes in this PR."

Cross-repository workflows. If your organisation has shared libraries, you can trigger Claude reviews in downstream repositories when an upstream dependency changes. This requires a personal access token with cross-repository permissions and a repository_dispatch event.

Combining with hooks. Your local Claude Code hooks handle development-time automation. Your GitHub Actions handle CI-time automation. Together, they create a comprehensive quality layer.

Hooks catch issues before code is committed. Actions catch issues before code is merged. This double layer significantly reduces the number of bugs that reach production.

The Lesson

The most important lesson from running Claude Code in CI/CD is that the quality of your prompt determines the quality of your automation. A vague prompt like "review this code" produces generic, unhelpful feedback. A specific prompt that describes exactly what to look for, how to format findings, and what to ignore produces reviews that are genuinely useful.

The second lesson is about cost awareness. It is easy to set up a workflow that triggers on every push to every branch and runs a comprehensive review every time. It is also expensive. Be deliberate about which events trigger Claude, which files are included, and how much analysis is appropriate for each situation.

The third lesson is that CI-based Claude Code is not a replacement for human review. It is a first pass that catches the obvious issues, frees human reviewers to focus on architecture and design decisions, and ensures that every PR gets at least a baseline level of scrutiny. The best results come when Claude and human reviewers complement each other.

Finally, the organisational benefit compounds over time. When automated reviews are first introduced, teams are often sceptical. As the weeks pass, teams frequently report shorter review queues and more consistent feedback. For teams rolling out these practices, the Organisation Rollout Playbook covers how to introduce automated reviews gradually and get buy-in from the team.

Conclusion

Claude Code GitHub Actions extend Claude from a local development tool into a team-wide automation platform. The recipes in this guide cover the most common use cases, but they are starting points. Every team has unique workflows, unique quality standards, and unique pain points.

Start with the automated PR review recipe. It delivers immediate value, requires minimal setup, and gives your team a concrete example of what AI-powered CI/CD looks like. Once that is running smoothly, add the recipes that address your specific bottlenecks.

Issue-to-PR automation for teams drowning in small tasks. Test generation for codebases with coverage gaps. Release notes for teams that dread release day.

The action itself is straightforward. The real work is in crafting prompts that produce consistently useful output for your specific codebase and your specific team. Invest time in that, iterate on your prompts as you learn what works, and you will build a CI/CD pipeline that gets smarter with every pull request.