Prelude

Yesterday a CTO at a 100-person company tried to install our plugin in Claude Cowork. The same plugin, the same URL, the same installation flow that had worked for weeks. This time, Cowork returned an error: the URL was not a GitHub or similar URL.

No warning. No changelog entry. No migration guide. No email to developers. No deprecation notice. Nothing. A URL that installed perfectly on Monday was silently rejected on Tuesday.

We spent three hours debugging. Was it a DNS issue? A certificate problem? A permissions change? We tried different networks, different accounts, different machines. The URL worked in Claude Code CLI. It worked when you curled it directly. The marketplace JSON was valid. The plugin structure was correct.

The URL just was not GitHub. That was the only problem. Anthropic had added validation to reject any marketplace URL that did not point to github.com. And they had done it without telling anyone.

This is the story of what happened, why it matters, and what it reveals about the state of agentic AI platforms in 2026.

The Problem

We build systemprompt.io, a platform that provides personalised skills, hooks, and agents for Claude through the Anthropic Marketplace. Our plugin installs in both Claude Cowork (the web interface) and Claude Code (the CLI). It uses HTTP hooks to provide analytics, governance, and real-time skill synchronisation. Until yesterday, everything worked.

Then two things broke simultaneously.

First, Cowork's "Add marketplace" dialog started rejecting any URL that was not hosted on GitHub. Our marketplace URL, which serves a standard git repository over HTTPS exactly as the plugin documentation specifies, was blocked. Claude Code CLI continued to accept it. The same URL, two different products, two different outcomes.

Second, HTTP hooks for non-enterprise plugins stopped functioning in Cowork. Hooks that had been firing reliably for weeks, sending analytics and governance events to our servers, went silent. Again, Claude Code CLI was unaffected.

The customer who discovered this was mid-onboarding. They had already authenticated, already configured their skills, already started using the plugin in Claude Code. They were adding it to Cowork as the final step. And it just stopped working.

The Debugging Timeline

The email exchange tells the story better than any summary.

6:30 PM: The customer reports they cannot install. We assume it is a bug on our side. Their display name contained accented characters (French name), which our marketplace export had passed through into the machine-readable slug. Claude Code's schema validation only accepts ASCII in the name field. A genuine bug on our end. We patched it within the hour. The export now strips non-ASCII from identifiers while preserving the display name for humans.
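The fix can be sketched as a small normalisation step, assuming our export pipeline is Python; the function name and exact rules here are illustrative, not our production code:

```python
import re
import unicodedata

def ascii_slug(display_name: str) -> str:
    # Decompose accented characters (e.g. "é" -> "e" + combining acute),
    # then drop the non-ASCII combining marks.
    decomposed = unicodedata.normalize("NFKD", display_name)
    ascii_only = decomposed.encode("ascii", "ignore").decode("ascii")
    # Lowercase and collapse anything non-alphanumeric into single hyphens.
    return re.sub(r"[^a-z0-9]+", "-", ascii_only.lower()).strip("-")

# The display name stays untouched for humans; only the slug is sanitised.
print(ascii_slug("Éléonore Dupré"))  # eleonore-dupre
```

The key design point is that the sanitisation happens only in the machine-readable identifier, so the human-facing display name keeps its accents.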

9:09 PM: The customer tries again with the fixed marketplace. Claude Code works. But in Cowork, they cannot find "Add marketplace by URL" anywhere in the interface. They can see the marketplace in Claude Code's /plugins output. But Cowork does not show it. The two products do not share marketplace state.

9:14 PM: We discover that Cowork has a UI bug where the "Add by URL" option only appears after you install at least one default plugin from Anthropic. A known issue on GitHub. We walk the customer through the workaround.

9:20 PM: The customer reaches the "Add by URL" dialog. Types in the marketplace URL. Gets the error: URL is not a GitHub or similar URL.

9:31 PM: We confirm the same behaviour on our own machines. This is not a customer-specific issue. It is a platform change. We do not know if it is a bug or a policy decision. There is no announcement anywhere.

Three hours of debugging. A customer who is a CTO of a 100-person company, sitting at their desk at 9 PM, trying to install a plugin that worked two days ago. And the answer turns out to be: Anthropic changed the rules and did not tell anyone.

The Broader Pattern

This is not an isolated incident. The Claude Code GitHub issues tell a consistent story of Cowork-specific regressions and undocumented changes.

Marketplace registration breaks existing user hooks. Installing the official Anthropic marketplace causes UserPromptSubmit hook errors and PostToolUse bash errors even when all plugins are disabled. Uninstalling the marketplace resolves the issue. The marketplace itself conflicts with the hook system it is supposed to support.

HTTP hooks fail with certificate verification errors when servers use Let's Encrypt ECDSA certificates. The bundled TLS implementation ignores the system certificate store. A server that every browser and every curl command trusts is rejected by Claude Code's hook system.

Environment variables in HTTP hook URLs fail at load time. If you use ${API_URL} in your hook configuration, the entire plugin's hooks fail with an invalid_format error before the variable is even interpolated. The feature is documented. It just does not work.
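One workaround for this class of failure is to expand the variable ourselves at plugin-generation time, so the shipped configuration contains a literal URL and never relies on load-time interpolation. The sketch below uses illustrative field names, not the documented hooks.json schema, and a hypothetical endpoint:

```python
import json
import os
from string import Template

# Hypothetical hook entry using the ${API_URL} syntax the docs describe.
# Field names ("type", "url") are illustrative, not Anthropic's schema.
hook_template = {"type": "http", "url": "${API_URL}/events/post-tool-use"}

# Expand at generation time, so the plugin ships with a literal URL.
os.environ["API_URL"] = "https://hooks.example.com"  # hypothetical server
expanded_hook = {
    **hook_template,
    "url": Template(hook_template["url"]).substitute(os.environ),
}
json.dumps(expanded_hook)  # what actually gets written into hooks.json
```

The trade-off is that the generated file is now per-environment, which is exactly what the documented interpolation feature was supposed to avoid.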

Plugin hooks fire twice for every event. SessionStart, UserPromptSubmit, PostToolUse, Stop. Every hook fires once from the marketplace source and once from the installed cache. Double the API calls, double the side effects, no acknowledgement.

Plugin initialisation failures are reported as hook errors. When a plugin fails to load, Claude Code tells you your SessionStart hook has an error. The actual problem, a plugin that could not initialise, is hidden behind a misleading message. You debug the hook. The hook is fine. The plugin never loaded.

The problems extend beyond hooks into the marketplace itself. Anthropic's own repositories have been rejected by marketplace name validation, locking out even first-party content. After a Claude Code upgrade, plugins from the marketplace silently stopped working due to format incompatibility, with no migration path provided. The desktop app started showing Cowork plugins instead of CLI plugins after an update, mixing two separate ecosystems in the UI. Cowork specifically cannot add private GitHub marketplaces that work perfectly in Claude Code CLI.

Even the official claude-plugins-official marketplace fails to load at all when running /plugin in Claude Code. The Cowork marketplace is completely empty on Windows because the DNS for plugins.claude.ai does not resolve. And there is no path to force-update a stale marketplace cache in Cowork on Windows.

On the hooks side, PreToolUse hooks fail silently where the model bypasses "blocking" hooks without detection. Command hooks are silently dropped for PreToolUse and PostToolUse events when defined in plugin hooks.json files. A hook that works in user settings does not work in a plugin. No error, no warning, just silence.

Each of these issues is individually manageable. Together, they paint a picture of a platform that is growing faster than its engineering can stabilise.

The Journey

Why This Matters Beyond Our Plugin

Before getting into the technical details, it is worth understanding why HTTP hooks and marketplace URLs matter to the Claude ecosystem at all. They are not niche features. They are the only mechanism that gives organisations visibility into what Claude does on their behalf.

When an employee uses Claude Code to review a pull request, an HTTP hook can log which files were read, which tools were invoked, and what the outcome was. When a team member uses a skill to draft customer communications, hooks can enforce brand guidelines and flag compliance issues before the content leaves Claude's context. When a contractor runs Claude in a codebase, hooks can prevent access to sensitive directories or block certain command patterns.
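The directory-blocking case can be sketched as the check a PreToolUse hook endpoint might run server-side. The payload shape here (tool_input.file_path) is an assumption for illustration, not Anthropic's documented hook schema:

```python
# Prefixes an organisation wants agents kept out of (illustrative).
BLOCKED_PREFIXES = ("/etc/", "/home/dev/.ssh/")

def allow_tool_use(event: dict) -> bool:
    """Return False if the tool call touches a sensitive path.

    `event` stands in for a PreToolUse hook payload; the field names
    are assumptions for this sketch, not the documented schema.
    """
    path = event.get("tool_input", {}).get("file_path", "")
    # str.startswith accepts a tuple of prefixes.
    return not path.startswith(BLOCKED_PREFIXES)

allow_tool_use({"tool_input": {"file_path": "/home/dev/.ssh/id_rsa"}})  # blocked
```

A real deployment would also log every decision, which is the visibility half of the governance story.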

Without hooks, Claude is a black box. Organisations that care about governance, compliance, or even basic operational visibility cannot use Claude in any capacity that touches regulated data or critical business processes. Hooks are not a developer convenience feature. They are the governance layer.

Similarly, marketplace URLs are not about convenience. They are the distribution mechanism for enterprise plugin configurations. When an IT team configures a Claude plugin with specific policies, skills, and hook endpoints, they need to push that configuration to every team member. The marketplace URL is the delivery mechanism. Without it, every employee installs a ZIP file manually. For a company of 100 people, that is 100 individual installation sessions, 100 opportunities for mistakes, and no way to push updates.

Restricting these features is restricting enterprise adoption. Whether that is intentional or an unintended consequence of security work, the impact is the same.

What Actually Changed

Based on our testing and customer reports, here is what we believe happened on or around 25 March 2026.

Marketplace URL validation. Cowork's "Add marketplace" dialog now validates the URL against a whitelist of known git hosting providers. GitHub passes. GitLab may pass (we have not tested). Arbitrary HTTPS URLs to self-hosted git servers fail. Claude Code CLI is unaffected and continues to accept any valid git URL, including HTTPS URLs to self-hosted servers.

HTTP hook restrictions. HTTP hooks defined in non-enterprise plugins appear to be silently ignored in Cowork. Command hooks (shell scripts) still fire. HTTP hooks (calling external URLs) do not. Again, Claude Code CLI is unaffected.

No documentation update. As of this writing, the hooks documentation still describes HTTP hooks as a general feature available to all plugins. The plugins documentation still describes marketplace URLs without mentioning a GitHub requirement. The Cowork plugin management documentation mentions GitHub for org marketplaces but does not state it is the only option.

No changelog entry. The Claude Code changelog, which documents changes like model updates and flag deprecations, has no entry about marketplace URL restrictions or HTTP hook changes.

The Security Argument

Let us be honest about why Anthropic probably made these changes. The security rationale is real.

Auto-syncing plugins from arbitrary git servers is a supply chain attack vector. A compromised server could push malicious code that lands on every user's machine at the next sync. GitHub provides audit trails, abuse reporting, two-factor authentication, and a trust infrastructure that random HTTPS endpoints do not.

HTTP hooks calling arbitrary URLs are a data exfiltration risk. A malicious plugin could install a PreToolUse hook that sends every file Claude reads, every command Claude runs, and every prompt the user types to an attacker-controlled server. The user would never know. The hooks fire silently in the background.

These are not hypothetical risks. Security researchers have already demonstrated real attacks. Prompt Security showed how a marketplace plugin can silently hijack dependency installation in Claude Code, redirecting to trojanised packages. Snyk's ToxicSkills research found prompt injection in 36% of AI agent skills tested and 1,467 malicious payloads across the ecosystem. ReversingLabs documented how agentic AI's rapid adoption increases software supply chain risks, with hundreds of malicious skills flooding agent marketplaces.

The threat is real. The same supply chain concerns that drove npm, PyPI, and every other package registry to invest in signing, provenance, and trust chains apply here. The agentic AI ecosystem is younger and moves faster, which makes the risk arguably greater.

Anthropic's managed settings already provide enterprise controls for this. The allowManagedHooksOnly setting restricts hooks to those defined by the organisation. The allowedHttpHookUrls setting provides a URL pattern allowlist. The strictKnownMarketplaces setting restricts which plugin sources are accepted. These are sensible, granular controls that give administrators exactly the tools they need.
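Assuming a JSON managed-settings file, the three controls might be combined like this. The setting names come from Anthropic's documentation, but the value formats and file layout in this sketch are assumptions, not documented syntax:

```json
{
  "allowManagedHooksOnly": true,
  "allowedHttpHookUrls": ["https://hooks.example.com/*"],
  "strictKnownMarketplaces": true
}
```

The point is that an administrator, not the platform, decides which hook endpoints and marketplace sources are acceptable.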

So yes, the security concerns are valid. The question is not whether to address them. The question is how.

Why the Approach Is Wrong

Silent breaking changes are the worst possible way to address security concerns. Here is why.

It destroys platform trust. Every developer who builds on Claude Code and Cowork makes an implicit bet that the platform will behave predictably. That their integrations will continue to work. That if something changes, they will be told about it in advance, given time to adapt, and provided with a migration path. When you break that contract silently, you are not just breaking a feature. You are breaking the trust that makes an ecosystem possible.

It provides no migration path. Our marketplace URL worked on Monday. It stopped working on Tuesday. What are we supposed to do? Mirror to GitHub? Change our entire distribution architecture? The answer might be yes. But we were not told. We were not given a timeline. We were not even told the change was intentional rather than a bug. As of this writing, we still do not know for certain.

It reveals no policy. What URLs are allowed now? Is it GitHub only? GitHub and GitLab? GitHub and any "major" git hosting provider? What counts as "similar"? The error message says "GitHub or similar URL" without defining similar. What about GitHub Enterprise Server instances? The org marketplace documentation says they are not supported. So it is specifically github.com, and nothing else?

For HTTP hooks, the questions are the same. Are they disabled for all non-enterprise plugins? Just in Cowork? Just for certain events? Is there a way to opt in? Is there a review process? We do not know, because no one told us.

Cowork and Claude Code are diverging silently. The same feature works in one product and not the other, with no documentation of the difference. A developer who tests their plugin in Claude Code sees everything working. They ship it to customers who use Cowork. It breaks. The developer looks like they shipped a broken product. The customer loses confidence. And the actual cause is a platform divergence that is not documented anywhere.

The fundamental paradox. Claude Code and Cowork run on the user's machine. The agents execute locally. The user owns the runtime environment. Restricting what hooks can call or what URLs are valid for marketplace sources does not prevent a determined actor from doing anything. It prevents legitimate developers from building useful integrations.

A malicious actor will proxy through an allowed domain, or use a command hook instead of an HTTP hook, or distribute their code through a channel the platform has not thought to restrict yet. Security through restriction of legitimate use cases is a losing game. The only thing it achieves with certainty is making life harder for the people who are trying to build something useful.

This does not mean Anthropic should do nothing about security. It means the approach should be transparency and tooling, not silent restrictions. Publish a security policy. Provide signing and verification for plugins. Build a review process. Give administrators the controls they need (which, to Anthropic's credit, managed settings already do). But do not silently break working integrations and call it security.

How Other Platforms Handle This

The way Anthropic handled this change is not how mature platforms operate. It is worth looking at how others manage the same tension between security and developer trust.

npm and the left-pad incident. In 2016, a developer unpublished a widely-used package from npm, breaking thousands of builds worldwide. npm's response was not to silently restrict what developers could publish. They published a clear policy on package unpublishing, added a 72-hour grace period, and created an escalation process. They made the rules visible.

Chrome extension Manifest V3. Google deprecated Manifest V2 extensions over a multi-year timeline. They published the policy in 2019, provided migration guides, set a sunset date, extended it when developers needed more time, and communicated every change through the Chrome blog and developer channels. The change was controversial. But no developer woke up to find their extension silently broken.

Stripe API versioning. Stripe versions every API change and lets developers pin to a specific version. Breaking changes are documented months in advance. Migration guides are published alongside the announcement. Developers can test against the new version before it becomes mandatory. The old version continues to work until a published sunset date.

These are not small companies with simple products. They face the same security and stability challenges Anthropic faces. They chose communication over surprise. The fact that agentic AI is a younger ecosystem does not justify a lower standard of developer relations. If anything, it demands a higher one, because the trust is not yet established.

The Workarounds We Built

While we wait for clarity from Anthropic, we have had to engineer around the restrictions. Here is what we are doing for our customers who need Cowork support.

ZIP-based distribution. For Cowork users who cannot add our marketplace URL, we generate a personalised plugin ZIP file from the systemprompt.io dashboard. The ZIP contains the user's skills, hooks, MCP server configuration, and authentication token, all baked in at generation time. The user downloads it and drags it into Cowork. It works. It is not ideal because updates require re-downloading, but it is reliable.
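Generation-time baking can be sketched like this. The internal layout (a token file plus skills/&lt;name&gt;/SKILL.md entries) reflects our own conventions and is illustrative, not Anthropic's plugin spec:

```python
import io
import json
import zipfile

def build_plugin_zip(user_token: str, skills: dict) -> bytes:
    """Build a personalised plugin ZIP entirely in memory.

    Layout is our own convention for this sketch: an auth token file
    plus one SKILL.md per skill, all baked in at generation time.
    """
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        zf.writestr("auth/token.json", json.dumps({"token": user_token}))
        for name, body in skills.items():
            zf.writestr(f"skills/{name}/SKILL.md", body)
    return buf.getvalue()

blob = build_plugin_zip("tok-abc", {"brand-voice": "# Brand voice\n..."})
```

Because everything is baked in, the ZIP works offline relative to our marketplace, which is also why updates require a fresh download.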

Claude Code CLI as the primary channel. For technical users, we recommend adding the marketplace via Claude Code CLI, which still accepts arbitrary git URLs. The plugin works fully in CLI sessions. The limitation is that Cowork and Claude Code do not share marketplace state, so the plugin appears in one and not the other.

MCP server as the skill delivery mechanism. Since our MCP server is already authenticated and running in user sessions, we can deliver skill content dynamically through tool calls rather than static files. When a user invokes a skill, Claude calls our server, which returns the current personalised content. This bypasses the marketplace entirely for skill delivery, though it does not solve the hook problem.

SessionStart command hooks for token injection. For hooks that need user identity, we use a SessionStart command hook (shell script) rather than an HTTP hook. The script calls our server, retrieves a short-lived token, and writes it to $CLAUDE_ENV_FILE. Subsequent HTTP hooks in other events can then use the token. This works in Claude Code CLI but has a known bug with session resume where environment variables from SessionStart are lost.
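The token-injection script can be sketched in Python with the network fetch stubbed out; the environment variable name we write, and the idea of a token endpoint, are our own conventions, while $CLAUDE_ENV_FILE is the file Claude Code provides for exactly this purpose:

```python
import os
import tempfile

def persist_token(env_file: str, fetch=lambda: "tok-123") -> None:
    """SessionStart command-hook sketch: fetch a short-lived token and
    append it to the env file so later HTTP hooks can authenticate.

    In the real hook, fetch() calls our HTTPS token endpoint; here it
    is stubbed so the sketch is self-contained.
    """
    token = fetch()
    with open(env_file, "a", encoding="utf-8") as fh:
        fh.write(f"SYSTEMPROMPT_TOKEN={token}\n")

# Demonstrate against a temp file standing in for $CLAUDE_ENV_FILE;
# the real script would use os.environ["CLAUDE_ENV_FILE"].
demo_env_file = os.path.join(tempfile.mkdtemp(), "claude.env")
persist_token(demo_env_file)
```

Appending rather than overwriting matters: other hooks may have written their own variables to the same file earlier in the session.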

None of these workarounds are as clean as the original architecture where a marketplace URL and HTTP hooks handled everything. Each one adds complexity, increases the surface area for bugs, and makes the onboarding experience worse for customers. But they work, and they demonstrate a principle: if you keep your business logic on your own server, you can adapt to platform changes without rewriting everything.

The "Research Preview" Problem

Cowork launched on 30 January 2026 with the label "research preview." That label has done a lot of heavy lifting for two months. It is the implicit excuse for every breaking change, every undocumented regression, every feature that works in one product and not another.

The problem is that production teams are building on it. And the frustration is mounting. One developer wrote that Anthropic's poor communication has eroded developer trust, with communities using words like "fraud" and "gaslighting" to describe the experience of building on a platform that changes underneath you. Another documented five common hook configuration errors that fail silently, leaving developers thinking they are protected when they are not.

The CTO who emailed us at 9 PM is not running a research experiment. They are evaluating Claude as infrastructure for a 100-person company. They are making procurement decisions. They are building workflows. They are training their team.

When you invite developers to build plugins, publish documentation for how to build them, and provide a marketplace for distributing them, you have created a platform. Platforms have obligations. The "research preview" label does not absolve those obligations. It just means you should communicate more, not less, about what is changing and why.

Apple's App Store has review guidelines that are public, versioned, and updated with explanation. AWS has deprecation policies with minimum notice periods. Even beta products from smaller companies typically maintain a changelog that documents breaking changes before they ship. Not after. Not never.

A two-line entry in the changelog would have prevented everything that happened yesterday. Something like: "Cowork now requires marketplace URLs to be hosted on a supported git provider (currently github.com). HTTP hooks in plugins require managed settings approval in Cowork. See migration guide." That is all it would have taken.

The Lesson

What Anthropic Should Do

Publish a security policy for the plugin ecosystem. Not just managed settings documentation, but an actual policy document that explains what is allowed, what is restricted, and why. Make it versioned. Update it when things change. Link to it from the plugin documentation.

Provide deprecation notices. When a feature is going to be restricted or removed, announce it in advance. Two weeks is the minimum for non-critical changes. Longer for changes that affect distribution architecture. Include the rationale. Include the migration path.

Document the differences between Claude Code and Cowork. If a feature works in one product and not the other, say so explicitly. Do not let developers discover it through customer complaints.

Create a developer relations channel. The GitHub issues are not a substitute for a developer programme. Plugin developers who are building businesses on the platform need a way to hear about changes before they ship, not after.

Consider a plugin review process instead of blanket restrictions. Apple does not ban all third-party apps because some might be malicious. They review them. The same model could work for Claude plugins. Review and approve marketplace sources rather than restricting all non-GitHub URLs.

What Developers Should Do

The lesson for developers building on agentic AI platforms is architectural resilience.

Keep your business logic on your own servers. Do not embed critical logic in plugin files that the platform can change, restrict, or break. Use the plugin as a thin client that calls your API. When the platform changes, you change a file. Your server, which you control, continues to work.

Build for the lowest common denominator. If it works in Cowork and Claude Code today, it might not tomorrow. Design your architecture to degrade gracefully. If HTTP hooks stop working, can your plugin still function? If marketplace URLs are restricted, do you have an alternative distribution method?

Monitor the GitHub issues. The Claude Code issues repository is the closest thing to a changelog for unannounced changes. Subscribe to issues tagged with hooks, plugins, and marketplace. You will often learn about breaking changes from other developers' bug reports before you hit them yourself.

Test in Cowork specifically. Do not assume that Claude Code CLI behaviour matches Cowork. They are diverging. Test your plugin in both products before every release.

Version your plugin distribution. Keep a record of what you shipped, when, and to which platform. When something breaks, you need to know whether it was your change or theirs. If you can diff your plugin at the moment it stopped working against the version that worked, you can isolate the platform change immediately instead of spending three hours wondering if you broke something.
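A minimal release log can be sketched as content hashes over the shipped manifest, so "worked on Monday" and "broken on Tuesday" can be diffed byte-for-byte. Field names here are illustrative:

```python
import hashlib
import json

def record_release(manifest: dict, log: list) -> str:
    """Append a release entry and return its content hash.

    Hashing a canonical (sorted-keys) JSON serialisation means two
    byte-identical manifests always produce the same digest, so an
    unchanged hash proves a breakage came from the platform, not us.
    """
    blob = json.dumps(manifest, sort_keys=True).encode("utf-8")
    digest = hashlib.sha256(blob).hexdigest()
    log.append({"sha256": digest, "manifest": manifest})
    return digest

releases = []
v1 = record_release({"name": "systemprompt", "version": "1.0.0"}, releases)
v2 = record_release({"name": "systemprompt", "version": "1.0.1"}, releases)
```

In practice you would also record the shipped-at timestamp and target platform (Cowork vs CLI), since, as this whole incident shows, the two can break independently.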

Have a fallback distribution channel. If your marketplace URL stops working, can you ship a ZIP? If your HTTP hooks are blocked, can you fall back to MCP tool calls for critical functionality? If Cowork restricts something, can your users still access the plugin through Claude Code CLI? Every dependency on a platform feature should have a plan B. This is not paranoia. It is what yesterday taught us.

Talk to other plugin developers. The Claude plugin ecosystem is small enough that most developers building plugins know each other or could. When one developer discovers a breaking change, that knowledge should spread fast. A shared channel, a Discord server, a mailing list, something where "they just broke marketplace URLs" can travel in minutes instead of hours.

The Bigger Picture

The tension between platform control and developer freedom is not new. Every platform in history has faced it. What is new is the speed at which agentic AI platforms are evolving and the stakes involved.

Claude Code plugins have access to a user's filesystem, their code, their terminal, their git history, their environment variables. A malicious plugin can read credentials, exfiltrate source code, or modify files. The security surface is genuinely large.

But the solution is not to silently restrict what legitimate developers can build. The solution is transparency. Publish the threat model. Provide the tools for developers to build securely. Establish trust through process, not through walls.

The Local Execution Paradox

There is a deeper architectural issue that makes blanket restrictions particularly futile. Claude Code and Cowork run on the user's machine. The agent executes locally. It reads local files, runs local commands, and interacts with local processes. The user, or their organisation, owns the runtime environment.

This is fundamentally different from a cloud API where the platform controls the execution environment. When Claude runs on your machine, you are the gatekeeper. The platform can add friction, but it cannot prevent a sufficiently motivated actor from doing anything. A command hook can make the same HTTP request that an HTTP hook would make. A shell script can send data anywhere. A locally-installed MCP server can proxy any request.

Restricting HTTP hooks to GitHub-hosted plugins does not prevent data exfiltration. It prevents legitimate developers from using the documented, auditable, permission-controlled mechanism for sending data. The alternative, having developers route the same data through command hooks or MCP servers, is less visible, less controllable, and harder to audit. The restriction makes the ecosystem less secure, not more.

The correct security model for locally-executed agents is defence in depth: permissions, audit logging, managed settings that let organisations control what is allowed, and transparency about what the agent is doing. Anthropic already has most of this infrastructure in managed settings. The missing piece is not more restrictions. It is more communication about the restrictions that exist.

The Ecosystem at an Inflection Point

The agentic AI ecosystem is at an inflection point. Platforms like Claude are inviting developers to build extensions, plugins, skills, and integrations. That invitation creates a social contract. If you invite people to build, you owe them the stability and communication to build confidently. If you cannot provide that stability because the platform is moving too fast, say so explicitly. Tell developers what is changing, when, and why. Give them time to adapt.

This is not just an Anthropic problem. Every agentic AI platform is navigating the same tension. OpenAI's GPT Actions have their own set of undocumented restrictions. Google's Gemini extensions ecosystem is nascent. The entire industry is figuring out how to balance openness with security, speed with stability, innovation with trust.

The platforms that win this race will not be the ones with the most features or the fastest models. They will be the ones that developers trust. Trust is built slowly and destroyed quickly. Yesterday's silent breaking change cost more than three hours of debugging time. It cost a small but real amount of the trust that developers place in Anthropic as a platform partner.

The alternative is what happened yesterday. A CTO sitting at their desk at 9 PM, staring at an error message that did not exist the day before, wondering whether the platform they chose can be trusted.

Conclusion

We fixed our accent-handling bug in under an hour. We identified it, patched the export logic, deployed the fix, and emailed the customer with an explanation. The same customer then spent two more hours hitting a wall that Anthropic put up without telling anyone.

The irony is not lost on us. We are building governance infrastructure for AI. Our entire product is about giving organisations control and visibility over how AI agents behave. The marketplace, the hooks, the analytics, they exist because organisations need to know what their AI tools are doing. And the platform we build on just demonstrated exactly why that governance is necessary, by changing the rules without informing the people who depend on them.

We are going to keep building. The problem we are solving, giving organisations control over AI agent behaviour, is too important to abandon because the platform has rough edges. We built on Claude because the technology is extraordinary. The plugin system, when it works, is the most powerful AI extensibility framework available. HTTP hooks, when they fire, provide exactly the governance primitives that enterprises need.

But trust is not optional. It is the foundation of every ecosystem. And right now, the foundation is shifting without warning.

To the team at Anthropic: we are not adversaries. We are building the tools that make your platform more valuable to enterprises. The governance layer, the analytics, the personalised skills, they exist because Claude is good enough that organisations want to deploy it seriously. Help us help you. Publish the policy. Give us a seat at the table. And please, next time you change the rules, tell us first.

To the developers building on Claude: design for resilience. Keep your logic server-side. Test in every environment. Watch the GitHub issues. And build the thing anyway, because the opportunity is real even if the ground beneath it is still settling.

The CTO who emailed us at 9 PM? They are still onboarding. They still want the plugin. They are working around the restrictions using Claude Code CLI while we figure out the Cowork story. That is the thing about building on quicksand. You do not stop building. You just learn to build lighter, faster, and with very good foundations of your own.