Prelude
Publishing a first Claude Code plugin can take about four hours from initial idea to working submission, and most of that time is spent deciding what the plugin should actually do. The mechanical parts (writing the manifest, structuring the directories, testing locally) are straightforward once you understand the conventions.
What may surprise you is how many categories remain underserved. The official plugin directory hosts a growing ecosystem, but the vast majority of listings cover narrow use cases, and entire categories sit empty. Code-quality tooling, the kind of thing every team needs, is one of them.
This guide walks through building a code-quality plugin from scratch. It bundles a skill for reviewing code against configurable standards, a pre-commit hook that catches common issues before they reach a pull request, and a slash command that generates test stubs from existing source files. By the end, you will have a working plugin ready for marketplace submission.
If you have been using Claude Code for a while and want to package your workflows for others, this guide covers the complete path.
The Problem
Claude Code is remarkably capable out of the box, but every team has its own standards. Linting rules differ. Test conventions follow patterns specific to each stack. Code review checklists reflect years of accumulated knowledge about what actually breaks in production.
The plugin system exists to bridge that gap. Instead of writing the same CLAUDE.md instructions across every project, you package your standards into a plugin that anyone can install with a single command. Instead of manually configuring hooks for each repository, you distribute them as part of a plugin that sets everything up automatically.
The Claude Code plugin ecosystem has grown rapidly since its launch. Hundreds of plugins now exist across the ecosystem, with the official directory serving as the curated storefront. But there is an important distinction to understand before you start building.
There are two separate marketplaces in the Claude world. The Claude Enterprise Marketplace, which launched on 6 March 2026, is for enterprise software integrations. Companies like Snowflake, GitLab, and Replit publish integrations there for use within Claude's web interface. That marketplace operates at the organisation level and requires a partnership with Anthropic.
The Claude Code Plugin Marketplace is where individual developers publish. It is a GitHub-based directory where plugins are submitted, reviewed, and listed for the Claude Code CLI. This is where your plugin will live, and it is open to anyone.
Both marketplaces are growing. For a deeper look at the enterprise side, see our guide on getting started with the Anthropic Marketplace. This guide focuses entirely on the Claude Code plugin ecosystem, because that is where individual developers can actually ship something today.
The Journey
Understanding the Plugin Ecosystem
A Claude Code plugin is a directory containing a manifest file and one or more components. The components can include skills (instructions that guide Claude's behaviour), hooks (event handlers that fire during Claude's lifecycle), slash commands (shortcuts users invoke directly), agents (custom agent definitions), and MCP server configurations.
Plugins are installed via the CLI. When a user runs /plugin install with your repository URL, Claude Code clones the plugin, validates its structure, and makes it available in all sessions. The plugin's skills appear in the user's skill list, its hooks register automatically, and its commands become available as slash commands.
The plugin reference documentation covers the full specification. What matters for practical purposes is understanding the contract. Your plugin must have a .claude-plugin/plugin.json manifest. Everything else is optional but follows strict directory conventions.
Planning Your Plugin
The best plugins solve a specific, recurring problem. Broad plugins that try to do everything tend to be mediocre at all of it. Narrow plugins that do one thing well get installed and kept.
For the code-quality plugin in this guide, the starting point was listing the three things developers do most often when reviewing code.
First, checking for common issues. Unused imports, inconsistent naming, missing error handling, functions that are too long. This becomes the skill component.
Second, running checks before committing. Linting, formatting, type checking. This becomes the hook component.
Third, generating test stubs for new code. When writing a new module, having a corresponding test file with the right imports and empty test functions saves time. This becomes the slash command.
That scope felt right. Three related capabilities that work together but are each useful independently. A developer could install the plugin for just the review skill and ignore the hook entirely.
One principle worth internalising early: do not try to replace existing tools. The plugin should make Claude Code better at working with the tools a project already has. The hook in this example runs the project's own linter, whatever that happens to be; it does not implement its own linting logic.
Creating the Plugin Structure
Every plugin follows the same directory layout. Here is the complete structure for the code-quality plugin.
code-quality/
├── .claude-plugin/
│   └── plugin.json
├── skills/
│   └── SKILL.md
├── commands/
│   └── generate-tests.md
├── hooks/
│   └── hooks.json
├── README.md
└── CHANGELOG.md
A critical rule from the plugin documentation. Only plugin.json goes inside .claude-plugin/. Never put your skills, commands, hooks, or any other files in that directory. The .claude-plugin/ directory is exclusively for the manifest.
Create the directory structure first.
mkdir -p code-quality/.claude-plugin
mkdir -p code-quality/skills
mkdir -p code-quality/commands
mkdir -p code-quality/hooks
Now write the manifest. The plugin.json file tells Claude Code what your plugin contains and how to find it.
{
  "name": "code-quality",
  "version": "1.0.0",
  "description": "Code review skill, pre-commit quality hook, and test stub generator for Claude Code",
  "author": {
    "name": "Your Name",
    "email": "you@example.com",
    "url": "https://github.com/yourusername"
  },
  "homepage": "https://github.com/yourusername/code-quality",
  "repository": "https://github.com/yourusername/code-quality",
  "license": "MIT",
  "keywords": [
    "code-quality",
    "code-review",
    "testing",
    "linting",
    "pre-commit"
  ],
  "skills": ["skills/SKILL.md"],
  "hooks": "hooks/hooks.json",
  "commands": ["commands/generate-tests.md"]
}
The name field must be kebab-case. This becomes the namespace for your plugin's skills and commands. When users invoke your skill, they will type /code-quality:review. When they use your command, they will type /code-quality:generate-tests.
The version field follows Semantic Versioning. Start at 1.0.0. If you are tempted to start at 0.1.0, resist. If the plugin works, it is version 1.0.0.
The keywords array is how users discover your plugin in the marketplace. Choose keywords that describe what your plugin does, not what it is. "code-review" is better than "plugin". "pre-commit" is better than "hook".
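Both constraints are easy to check mechanically before you ever run the real validator. A minimal sketch; the `check_fields` helper is hypothetical, not part of the Claude Code CLI:

```shell
#!/usr/bin/env bash
# Hypothetical helper, not part of the Claude Code tooling: sanity-check
# the two manifest fields the marketplace is strictest about.
check_fields() {
  local name="$1" version="$2"
  if ! [[ "$name" =~ ^[a-z0-9]+(-[a-z0-9]+)*$ ]]; then
    echo "name must be kebab-case"
    return 1
  fi
  if ! [[ "$version" =~ ^[0-9]+\.[0-9]+\.[0-9]+$ ]]; then
    echo "version must be MAJOR.MINOR.PATCH"
    return 1
  fi
  echo "manifest fields ok"
}

check_fields "code-quality" "1.0.0"
```

The same function rejects names like `Code_Quality` and versions like `1.0`, which is exactly the feedback you would otherwise get from the validator later.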
Building Your First Skill
Skills are the heart of most plugins. A skill is a markdown file with YAML frontmatter that tells Claude how to approach a specific task. When a user activates your skill, Claude reads the instructions and follows them throughout the session.
Create skills/SKILL.md with the following content.
---
name: "review"
description: "Review code for quality issues, naming conventions, error handling, and common anti-patterns"
---
# Code Quality Review
When reviewing code, follow this structured approach.
## Analysis Steps
1. **Read the full file or diff before commenting.** Do not review line by line. Understand the complete context first, then identify issues.
2. **Check for structural problems first.**
- Functions longer than 40 lines
- Classes with more than 7 public methods
- Files with more than 300 lines
- Deeply nested conditionals (more than 3 levels)
3. **Check naming conventions.**
- Variables should describe what they contain, not their type
- Functions should describe what they do, not how they do it
- Boolean variables should read as questions (isReady, hasPermission, canRetry)
- Avoid abbreviations unless they are universally understood (id, url, http)
4. **Check error handling.**
- Every external call (network, file system, database) must have error handling
- Error messages must include context about what was being attempted
- Never catch and silently swallow errors
- Distinguish between recoverable and unrecoverable errors
5. **Check for common anti-patterns.**
- Magic numbers and strings (should be named constants)
- Repeated code blocks (should be extracted into functions)
- Mutable state where immutable would work
- Unnecessary comments that restate the code
## Output Format
Present findings as a prioritised list. Group issues by severity.
**Critical** items are bugs or security issues that must be fixed.
**Important** items are maintainability problems that should be fixed.
**Suggestion** items are style improvements that could be considered.
For each issue, include the file path, the line number or range, a clear description of the problem, and a concrete suggestion for fixing it.
If the code has no significant issues, say so. Do not invent problems to fill a report.
The frontmatter name field is what appears after the plugin namespace. With name: "review" in a plugin called code-quality, users activate this skill by running /code-quality:review.
The description field appears in skill listings and helps users understand what the skill does before activating it.
The body of the skill is plain markdown. Write it as clear, specific instructions. Avoid vague guidance like "write clean code". Instead, give concrete criteria like "functions longer than 40 lines". Claude follows specific instructions far more reliably than abstract ones.
Adding Hooks for Automation
Hooks let your plugin react to events in Claude Code's lifecycle. The hooks system supports events like PreToolUse, PostToolUse, SessionStart, and more. For a code-quality plugin, the most useful hook is one that runs quality checks when Claude is about to commit code.
Create hooks/hooks.json with the following content.
{
  "hooks": [
    {
      "event": "PreToolUse",
      "matcher": {
        "tool_name": "Bash",
        "input_contains": "git commit"
      },
      "command": "${CLAUDE_PLUGIN_ROOT}/hooks/pre-commit-check.sh",
      "description": "Run quality checks before committing",
      "timeout_ms": 30000
    }
  ]
}
This hook fires every time Claude is about to execute a Bash command that contains git commit. The matcher object filters which tool calls trigger the hook. Without a matcher, the hook would fire on every tool call of that event type.
The ${CLAUDE_PLUGIN_ROOT} variable is critical. It resolves to the directory where your plugin is installed on the user's machine. Never use absolute paths or relative paths that assume a specific location. Plugins are cached and copied during installation, so the actual path varies.
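A defensive pattern worth considering at the top of hook scripts is to fall back to the script's own location when the variable is unset, so you can also run the script by hand during development. This is a sketch, and the fallback path (the script living in `hooks/`, one level below the plugin root) is an assumption about this plugin's layout:

```shell
#!/usr/bin/env bash
# Resolve the plugin root. Claude Code sets CLAUDE_PLUGIN_ROOT when it
# invokes the hook; the fallback assumes the script lives in hooks/ one
# level below the plugin root, so manual runs still work.
SCRIPT_DIR=$(cd "$(dirname "${BASH_SOURCE[0]:-$0}")" && pwd)
PLUGIN_ROOT="${CLAUDE_PLUGIN_ROOT:-$(dirname "$SCRIPT_DIR")}"
echo "plugin root: $PLUGIN_ROOT"
```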
Now create the shell script that the hook executes. Add hooks/pre-commit-check.sh to your plugin directory.
#!/usr/bin/env bash
set -euo pipefail

# Read the staged files
STAGED_FILES=$(git diff --cached --name-only --diff-filter=ACM)

if [ -z "$STAGED_FILES" ]; then
  exit 0
fi

ISSUES_FOUND=0

# Check for common problems in staged files
for FILE in $STAGED_FILES; do
  # Skip non-text files
  if file --mime "$FILE" 2>/dev/null | grep -q "charset=binary"; then
    continue
  fi

  # Check for debug statements
  if grep -n "console\.log\|debugger\|binding\.pry\|import pdb\|print(" "$FILE" 2>/dev/null; then
    echo "WARNING: Possible debug statement in $FILE"
    ISSUES_FOUND=$((ISSUES_FOUND + 1))
  fi

  # Check for TODO/FIXME without ticket references
  if grep -n "TODO\|FIXME\|HACK\|XXX" "$FILE" 2>/dev/null | grep -v "#[A-Z]\+-[0-9]\+"; then
    echo "WARNING: TODO/FIXME without ticket reference in $FILE"
    ISSUES_FOUND=$((ISSUES_FOUND + 1))
  fi
done

# Run project linter if available
if [ -f "package.json" ] && command -v npx &>/dev/null; then
  if npx --no-install eslint --max-warnings 0 $STAGED_FILES 2>/dev/null; then
    echo "ESLint passed"
  else
    echo "ESLint found issues"
    ISSUES_FOUND=$((ISSUES_FOUND + 1))
  fi
elif [ -f "pyproject.toml" ] && command -v ruff &>/dev/null; then
  if ruff check $STAGED_FILES 2>/dev/null; then
    echo "Ruff passed"
  else
    echo "Ruff found issues"
    ISSUES_FOUND=$((ISSUES_FOUND + 1))
  fi
elif [ -f "Cargo.toml" ] && command -v cargo &>/dev/null; then
  if cargo clippy --quiet 2>/dev/null; then
    echo "Clippy passed"
  else
    echo "Clippy found issues"
    ISSUES_FOUND=$((ISSUES_FOUND + 1))
  fi
fi

if [ $ISSUES_FOUND -gt 0 ]; then
  echo ""
  echo "Found $ISSUES_FOUND potential issue(s). Review before committing."
  # Exit 0 to allow the commit but surface the warnings.
  # Change to exit 2 to block the commit: Claude Code treats exit code 2
  # from a hook as a blocking error.
  exit 0
fi

echo "All pre-commit checks passed."
exit 0
Make the script executable.
chmod +x code-quality/hooks/pre-commit-check.sh
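Before testing inside Claude Code, you can dry-run the detection logic by hand in a throwaway repository. This sketch inlines the debug-statement check from the script above rather than invoking the script itself, so it is self-contained; it assumes git is installed:

```shell
#!/usr/bin/env bash
set -euo pipefail
# Throwaway repository with one staged file containing a debug statement
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name Demo
echo 'console.log("debug");' > app.js
git add app.js

# Inline version of the hook's debug-statement check
warnings=0
for f in $(git diff --cached --name-only --diff-filter=ACM); do
  if grep -q "console\.log\|debugger" "$f"; then
    echo "WARNING: Possible debug statement in $f"
    warnings=$((warnings + 1))
  fi
done
echo "$warnings warning(s) found"
```

The staged `app.js` should trigger exactly one warning, which confirms the grep pattern behaves as intended before you wire the full script into a session.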
There is an important design decision here. The hook exits with code 0 even when issues are found. This means the commit proceeds but the warnings are surfaced to Claude, who can then decide whether to address them. If you want a stricter policy where commits are blocked until all issues are resolved, change the final exit 0 to exit 2, which Claude Code treats as a blocking error for hooks; other non-zero codes surface the output without blocking.
The permissive approach is recommended because blocking commits can be frustrating during rapid iteration. Claude sees the warnings and typically addresses them in a follow-up commit. For teams that need stricter enforcement, the exit code is the only change required.
The hook also detects the project's existing toolchain. If the project uses Node.js, it runs ESLint. If it uses Python, it runs Ruff. If it uses Rust, it runs Clippy.
This is the principle mentioned earlier. The plugin works with existing tools rather than replacing them.
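The marker-file pattern generalises to other stacks. Here is the same detection table as a standalone sketch of a function; the `go.mod` entry is an illustrative addition that is not in the hook above:

```shell
#!/usr/bin/env bash
# The hook's toolchain detection as a reusable function. Marker files
# map to the project's own linter; extend the table for other stacks.
detect_linter() {
  if [ -f "package.json" ]; then echo "eslint"
  elif [ -f "pyproject.toml" ]; then echo "ruff"
  elif [ -f "Cargo.toml" ]; then echo "cargo clippy"
  elif [ -f "go.mod" ]; then echo "go vet"   # illustrative addition
  else echo "none"
  fi
}

detect_linter
```

Keeping the table in one function makes it easy for users of your plugin to see, at a glance, which toolchains the hook understands.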
Adding Slash Commands
Slash commands give users a direct way to invoke specific plugin functionality. Unlike skills, which guide Claude's behaviour throughout a session, commands are one-shot actions. The user types the command, Claude executes the instructions, and the interaction is complete.
Create commands/generate-tests.md with the following content.
---
name: "generate-tests"
description: "Generate test stubs for a source file"
arguments:
- name: "file"
description: "Path to the source file to generate tests for"
required: true
---
Generate a test file for the specified source file. Follow these rules.
1. **Determine the test framework from the project.**
- JavaScript/TypeScript: check for jest, vitest, or mocha in package.json
- Python: check for pytest or unittest in pyproject.toml or requirements
- Rust: use the built-in test module
- Go: use the built-in testing package
- If unclear, ask the user which framework to use
2. **Create the test file in the correct location.**
- JavaScript/TypeScript: `__tests__/[filename].test.[ext]` or `[filename].test.[ext]` next to the source, matching the project's existing pattern
- Python: `tests/test_[filename].py` or matching the project's existing pattern
- Rust: add a `#[cfg(test)] mod tests` block in the same file
- Go: `[filename]_test.go` in the same directory
3. **Generate one test stub per public function or method.**
- Import/require the function from the source file
- Create a test with a descriptive name: `test_[function_name]_[what_it_tests]`
- Add a comment inside each test describing what should be verified
- Leave the test body with a placeholder assertion that will fail: `expect(true).toBe(false)` or equivalent
- Do NOT implement the test logic. The user will fill that in.
4. **Include edge case stubs.** For each function, add stubs for:
- Normal input
- Empty or null input
- Error cases (if the function can throw or return errors)
5. **Output only the test file.** Do not modify the source file.
The arguments block in the frontmatter defines what the command expects. When a user types /code-quality:generate-tests src/utils/parser.ts, Claude receives the file path as the file argument and follows the instructions.
Notice that the command instructions are intentionally prescriptive about where to create the file and how to name tests, but deliberately leave the test logic as stubs. This is a design choice. Generated tests that try to guess the implementation are almost always wrong. Stubs with clear descriptions of what should be tested are actually useful.
Testing Locally
Before you submit anything to the marketplace, test your plugin thoroughly. Claude Code has built-in support for loading plugins from a local directory.
claude --plugin-dir ./code-quality
This starts a Claude Code session with your plugin loaded. You should see the plugin listed when you check available plugins.
Test each component individually.
Test the skill. Activate it by typing /code-quality:review in a session, then ask Claude to review a file. Verify that Claude follows your review criteria and uses the output format you specified.
/code-quality:review
Review the file src/handlers/auth.ts
Watch for these issues during testing.
Does Claude follow the structural checks (function length, class size, nesting depth)? Does it use the severity categories you defined? Does it avoid inventing problems when the code is clean?
Test the hook. Stage some files and ask Claude to commit. The pre-commit hook should fire and display its output.
Stage and commit the changes with message "Add authentication handler"
Verify that the hook detects debug statements, checks for TODO comments without ticket references, and runs the appropriate linter for your project.
Test the command. Run the generate-tests command with a real source file.
/code-quality:generate-tests src/utils/parser.ts
Verify that the generated test file uses the correct framework, appears in the right location, and has stubs for every public function.
If something is not working, validate the plugin structure.
claude plugin validate ./code-quality
This checks that your manifest is valid, all referenced files exist, and the directory structure follows the conventions. The validator catches most structural issues before you encounter them during testing.
You can also validate from within a session using /plugin validate . while your working directory is the plugin root.
Preparing for Publication
Before submitting to the marketplace, your plugin needs three things beyond the code itself.
A README. The README is the first thing potential users see. It should explain what the plugin does, how to install it, and how to use each component. Do not assume the reader knows anything about your plugin beyond the title.
Here is a sample README for the code-quality plugin.
# Code Quality Plugin for Claude Code
A Claude Code plugin that brings code review, pre-commit checks, and test generation into your workflow.
## Features
- **Code Review Skill** - Structured review against configurable quality criteria
- **Pre-Commit Hook** - Catches debug statements, missing ticket references, and linting issues before commit
- **Test Generator** - Creates test stubs for any source file using your project's test framework
## Installation
Install from the Claude Code CLI:
/plugin install https://github.com/yourusername/code-quality
## Usage
### Review code quality
Activate the review skill, then ask Claude to review files or diffs.
/code-quality:review
### Generate test stubs
Point the command at a source file to generate corresponding test stubs.
/code-quality:generate-tests path/to/file.ts
### Pre-commit checks
The hook runs automatically when Claude commits code. No activation needed.
## Configuration
The pre-commit hook detects your project's toolchain automatically:
- Node.js projects: runs ESLint
- Python projects: runs Ruff
- Rust projects: runs Clippy
## Requirements
- Claude Code CLI
- Your project's linter installed locally (ESLint, Ruff, or Clippy)
A CHANGELOG. Start simple. For version 1.0.0, a single entry is enough.
# Changelog
## 1.0.0 - 2026-03-10
- Initial release
- Code review skill with structured quality criteria
- Pre-commit hook with debug statement detection and linter integration
- Test stub generator supporting JavaScript, TypeScript, Python, Rust, and Go
Version validation. Make sure your plugin.json version matches your CHANGELOG and any git tags you create. Inconsistent versioning is the most common reason for marketplace rejection.
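That consistency check is scriptable. A sketch, demonstrated against throwaway files in a temp directory so it is self-contained; run the two extraction lines from your real plugin root instead. It assumes the CHANGELOG uses "## X.Y.Z - date" headings as shown above, and uses grep rather than a JSON parser to stay dependency-free:

```shell
#!/usr/bin/env bash
set -euo pipefail
# Throwaway plugin layout for demonstration purposes
workdir=$(mktemp -d)
mkdir -p "$workdir/.claude-plugin"
printf '{\n  "name": "code-quality",\n  "version": "1.0.0"\n}\n' > "$workdir/.claude-plugin/plugin.json"
printf '# Changelog\n\n## 1.0.0 - 2026-03-10\n\n- Initial release\n' > "$workdir/CHANGELOG.md"
cd "$workdir"

# Extract the version from each file and compare
manifest_version=$(grep -o '"version": *"[^"]*"' .claude-plugin/plugin.json | sed 's/.*"\(.*\)"/\1/')
changelog_version=$(grep -m1 '^## ' CHANGELOG.md | awk '{print $2}')

if [ "$manifest_version" = "$changelog_version" ]; then
  echo "versions match: $manifest_version"
else
  echo "version mismatch: manifest=$manifest_version changelog=$changelog_version" >&2
  exit 1
fi
```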
Run the validator one final time.
claude plugin validate ./code-quality
Fix any issues it reports. The validator is strict about structure but does not evaluate the quality of your skill instructions or hook logic. That is what your testing phase covers.
Submitting to the Marketplace
The submission process has two paths depending on the scale of your plugin.
For the official Anthropic directory, submit through the plugin directory submission form. Your plugin will be reviewed against quality and security standards. The review team checks for several things.
Does the manifest follow the specification? Are all referenced files present? Does the plugin avoid accessing resources outside its own directory? Are the skill instructions clear and well-structured? Does the README adequately explain installation and usage?
External plugins are listed in the external_plugins/ directory of the official repository. Once approved, users can discover your plugin through the CLI's plugin search functionality.
The review process typically takes a few days. During that time, you can distribute your plugin directly via its GitHub URL. Users can install any plugin from a Git repository, regardless of whether it is listed in the official directory.
/plugin install https://github.com/yourusername/code-quality
This works immediately. The marketplace listing adds discoverability, not functionality.
Submission tips. The review team is thorough but fair. The most common feedback on rejected submissions falls into three categories.
First, the plugin references files outside its own directory. This is a security boundary. Your hooks and scripts must use ${CLAUDE_PLUGIN_ROOT} for all path references and never assume anything about the host system's file layout.
Second, the README is insufficient. "Install and use" is not enough documentation. Each component needs a clear explanation of what it does and how to invoke it.
Third, the plugin does too much. If your plugin has fifteen skills, seven commands, and a dozen hooks, consider splitting it into multiple focused plugins. The marketplace rewards specificity.
Creating Your Own Marketplace
If you are building plugins for a team or organisation rather than the public, you can host your own marketplace. This is useful when your plugins contain proprietary standards that should not be public, or when you want to curate a set of approved plugins for your engineering team.
A custom marketplace is defined by a .claude-plugin/marketplace.json file inside a git repository that your team can access. Create the file at .claude-plugin/marketplace.json in the root of your marketplace repository.
{
  "name": "Acme Engineering Plugins",
  "description": "Internal Claude Code plugins for Acme Corp engineering teams",
  "plugins": [
    {
      "name": "code-quality",
      "description": "Code review, pre-commit checks, and test generation",
      "version": "1.0.0",
      "repository": "https://github.com/acme-corp/claude-code-quality",
      "keywords": ["code-quality", "review", "testing"]
    },
    {
      "name": "acme-standards",
      "description": "Acme Corp coding standards and architecture patterns",
      "version": "2.1.0",
      "repository": "https://github.com/acme-corp/claude-acme-standards",
      "keywords": ["standards", "architecture"]
    }
  ]
}
Host this repository on GitHub, GitLab, or any git server your team can reach. Users add your marketplace repository as a source in their Claude Code configuration, and your plugins appear alongside the official directory.
The plugin marketplaces documentation covers the full specification for custom marketplace files. For most teams, the simple format above is sufficient.
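Before pushing changes to the marketplace file, a quick structural sanity check catches the most common slip: a plugin entry missing its repository URL. This sketch uses grep counts against an inline sample rather than a JSON parser to stay dependency-free; a real check would use jq:

```shell
#!/usr/bin/env bash
set -euo pipefail
# Inline sample marketplace file for demonstration
file=$(mktemp)
cat > "$file" <<'EOF'
{
  "name": "Acme Engineering Plugins",
  "plugins": [
    { "name": "code-quality", "repository": "https://github.com/acme-corp/claude-code-quality" },
    { "name": "acme-standards", "repository": "https://github.com/acme-corp/claude-acme-standards" }
  ]
}
EOF

entries=$(grep -c '"name": "' "$file")
repos=$(grep -c '"repository": "' "$file")

# The top-level marketplace "name" is not a plugin entry, hence the -1
if [ "$((entries - 1))" -eq "$repos" ]; then
  echo "every plugin entry has a repository"
else
  echo "some plugin entries are missing a repository" >&2
  exit 1
fi
```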
This approach works well in combination with the systemprompt.io skills and cowork system. You can use plugins for the Claude Code CLI and systemprompt.io skills for the Claude web interface, giving your team consistent standards across both environments.
The Lesson
The plugin system changes how teams think about Claude Code workflows. Before plugins, every improvement to a Claude Code setup was local. Better CLAUDE.md files, more sophisticated hooks, refined skill instructions. All of it lived in individual project directories and could not be easily shared.
Plugins make those improvements portable. A code-quality plugin built for one set of projects can run across dozens of repositories within a team. When the review criteria improve or a new check is added to the pre-commit hook, everyone gets the update on their next session.
The technical barrier to publishing is low. If you can write a markdown file and a JSON manifest, you can publish a plugin.
The harder part is the design work. Deciding what belongs in the plugin and what does not. Writing skill instructions that are specific enough to be useful but flexible enough to work across different codebases. Choosing the right granularity for hooks so they help without getting in the way.
Three principles apply broadly when designing plugins.
First, solve a specific problem. The best plugins in the marketplace do one thing exceptionally well. A plugin that formats SQL queries. A plugin that enforces commit message conventions. A plugin that generates API documentation from code comments.
Specificity is a feature.
Second, work with existing tools. The pre-commit hook in this plugin runs the project's own linter. It does not ship its own linting rules. This means it works with any project regardless of language or toolchain.
Plugins that try to replace existing tools create friction. Plugins that enhance existing tools create value.
Third, write instructions as if the reader knows nothing about your intent. Claude follows what you write, not what you mean. If your skill says "check for common issues", Claude will make its own judgment about what counts as common. If your skill says "check for functions longer than 40 lines", Claude will do exactly that. Precision in instructions translates directly to reliability in output.
Conclusion
The Claude Code plugin ecosystem is still early. With hundreds of plugins in the broader ecosystem and the official directory growing daily, there is already a foundation. But compared to mature plugin ecosystems in other tools, the surface area of what is covered remains small.
That is an opportunity. If you have built workflows in Claude Code that work well for you, packaging them as a plugin is straightforward. The directory structure is simple. The manifest is a single JSON file.
The testing workflow is built into the CLI. And the submission process, while it requires meeting quality standards, is open to anyone.
Start with something small. A single skill that captures your best code review practices. A hook that automates a check you currently do manually. A command that generates boilerplate you are tired of writing.
Get it working locally, validate it, and submit.
The plugin documentation covers everything not addressed in this guide. The plugin reference documents every manifest field and directory convention. And if you want to see how plugins fit into the broader Claude ecosystem alongside skills and the marketplace, the systemprompt.io platform ties it all together.
Your workflow improvements should not live and die in a single project directory. Package them. Publish them. Let other developers benefit from what you have learned.