CLAUDE COWORK. COMPLETELY OWNED. SKILLS, GATEWAY, MCP ALLOWLIST, PLUGINS. YOURS.
Connect Cowork over OAuth2 with PKCE for personal skills, or deploy systemprompt.io as the full enterprise governance plane: /v1/messages gateway, signed MCP allowlist, and org-plugins supply chain in one binary.
Personal Skills over OAuth2 + PKCE
Cowork opens with a blank slate. No memory of last session, no record of the brand voice you shaped last week, no trace of the client-report template you wrote on Tuesday. The systemprompt.io connection changes that: click connect, approve in a browser tab, and every skill on your account loads into every Cowork session.
The handshake uses OAuth2 with PKCE. Cowork generates a one-time code verifier, sends its challenge to the systemprompt.io authorisation endpoint, receives an authorisation code, and exchanges it at the token endpoint together with the verifier. No API key lands on disk. Because the exchange is bound to the one-time secret Cowork generated, a captured code cannot be replayed from another machine. Staff engineers verifying this can read the OIDC discovery document, the authorisation endpoint, and the token exchange, all in one binary you run yourself.
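The verifier-and-challenge step can be sketched in a few lines. This is a generic RFC 7636 S256 pair, not systemprompt.io's actual implementation:

```python
import base64
import hashlib
import secrets

def make_pkce_pair() -> tuple[str, str]:
    """Generate a PKCE code_verifier and its S256 code_challenge (RFC 7636)."""
    # 32 random bytes -> 43-char base64url verifier: the one-time secret
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    # The challenge sent with the authorisation request is SHA-256 of the verifier
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge

verifier, challenge = make_pkce_pair()
# The client sends `challenge` with the authorisation request; only the holder
# of `verifier` can later redeem the returned code at the token endpoint.
```

The server recomputes the hash at token exchange, which is why a code intercepted in transit is worthless without the verifier that never left the client.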
A skill is a Markdown document plus a small config block. Edit it in the dashboard and the next Cowork session sees the update. Type / in Cowork and your skills appear as slash commands. Cancel your account and you walk away with the Markdown files.
- OAuth2 connection, no API keys on disk — Approve the connection in a browser tab. A stolen laptop holds no static key to replay. Short-lived session JWTs identify the user, not keys to upstream accounts.
- Skills you own — Each skill is Markdown plus a config block. Edit in the dashboard and the next Cowork session picks it up. Export as files whenever you want them.
- Slash-command injection — Type / in Cowork and your skills appear as commands. The same injector runs whether you trigger a skill by slash command or an agent routes to it.
A /v1/messages Endpoint You Own
Cowork's third-party inference setting expects an HTTP endpoint that speaks /v1/messages and forwards a specific header set. systemprompt.io is that endpoint, running in your VPC in front of the upstream you operate. Your own inference cluster, a self-hosted Llama or Qwen deployment, an internal model behind a private ingress. The gateway authenticates the user, checks scope, scans the prompt for secrets, applies rate and budget limits, and writes an audit_events row with trace_id before a token leaves your network.
The same gateway fronts commodity providers when you need them (Bedrock, Vertex, Azure Foundry, Anthropic direct, OpenAI, Gemini, Groq) through one routing table keyed on department, model, cost, or failover. The switch between your own inference and a commodity provider is a YAML change, not a replatforming. Cost lands in one microdollar column regardless of upstream.
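A routing table of that shape might look like the following sketch. The field names are illustrative assumptions, not systemprompt.io's actual schema:

```yaml
# Hypothetical routing-table sketch; keys are illustrative only.
routes:
  - match: { department: research }
    upstream: self-hosted-llama        # your own inference cluster first
    budget_usd_month: 500
  - match: { department: finance }
    upstream: bedrock                  # commodity provider behind the same gateway
    failover: [vertex]
  - match: {}                          # default route
    upstream: anthropic-direct
    rate_limit_rpm: 60
```

Swapping `upstream` on a route is the "YAML change, not a replatforming" in practice: the policy, attribution, and cost pipeline in front of it stays identical.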
Anthropic's documented data flow says prompts route to your cloud provider and Anthropic never sees them. With systemprompt.io in front, the cloud provider never sees raw traffic either. It sees a policy-checked request from your governance layer, stamped with your attribution headers.
- /v1/messages in front of any upstream — Route to your own inference cluster or a self-hosted open-weight model first. Commodity providers sit behind the same gateway when you choose to use them.
- Per-user cost attribution — Every request carries the authenticated user and session on the way out. Microdollar cost joins back to the user in your database, so finance reads a single ledger.
- Policy before the upstream call — RBAC scope, secret scan, blocklist, and rate limit run before the request leaves your network. A compliance officer answers 'who authorised this prompt' from evidence that predates the model call.
Credential Helper, Per-User JWT
Cowork's third-party inference form exposes a credential helper script field whose contract is an absolute path to an executable that prints the credential on stdout. Most deployments skip it and ship one shared bearer token to every laptop over MDM; a single compromised device then exposes the credential behind organisation-wide AI spend and audit identity.
systemprompt.io is the counterpart to that field. The helper is a small binary that trades the workstation's SSO identity for a short-lived, user-scoped JWT against the gateway's auth endpoint. Cowork invokes it per token expiry, reads the JWT from stdout, and uses it as the Authorization header on every /v1/messages request. The user never sees a key. Upstream credentials stay on the gateway. Revocation is a database update. Rotation happens on the next refresh.
The helper's JSON also carries seven canonical headers that Cowork merges into every request: x-user-id, x-session-id, x-trace-id, x-client-id, x-tenant-id, x-policy-version, x-call-source. Real identity propagates into every audit row. Paired with the Skip login-mode chooser toggle, a user opens Cowork and is signed in through the organisation's identity with no API-key paste step.
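A minimal helper honouring that contract (an executable that prints JSON on stdout) could look like this sketch. The SSO exchange is stubbed and every value is a placeholder; this is not systemprompt.io's real helper or its exact JSON shape:

```python
#!/usr/bin/env python3
"""Sketch of a Cowork credential helper: print a short-lived user JWT and the
seven canonical attribution headers as JSON on stdout. All names and values
are illustrative assumptions."""
import json
import sys
import uuid

def fetch_user_jwt() -> str:
    # A real helper would exchange the workstation's SSO identity for a
    # user-scoped JWT at the gateway's auth endpoint; stubbed here.
    return "stub.user.jwt"

def main() -> None:
    credential = {
        "token": fetch_user_jwt(),
        "headers": {
            "x-user-id": "alice@example.com",
            "x-session-id": str(uuid.uuid4()),
            "x-trace-id": str(uuid.uuid4()),
            "x-client-id": "cowork-desktop",
            "x-tenant-id": "acme-corp",
            "x-policy-version": "2025-01",
            "x-call-source": "cowork",
        },
    }
    json.dump(credential, sys.stdout)

if __name__ == "__main__":
    main()
```

Because the helper runs per token expiry, revocation and rotation reduce to the next refresh failing or succeeding; no secret has to be pushed to the laptop.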
- No shared bearers in the field — The helper returns a fresh per-user JWT on demand. Upstream keys stay server-side. A laptop compromise does not expose organisation-wide credentials.
- SSO identity to gateway JWT — The helper trades the workstation's logged-in identity for a user-scoped JWT and emits seven canonical headers Cowork forwards on every /v1/messages call.
- One-line revocation — Revoke the user in systemprompt.io. Their Cowork session fails on the next token refresh with no MDM round-trip.
Signed, Central MCP Allowlist
Cowork on third-party inference supports remote MCP servers only against an admin-maintained allowlist. Tool policies are allow, ask, or block. isLocalDevMcpEnabled disables user-added servers. isDesktopExtensionSignatureRequired enforces signed extensions. The settings exist in Cowork. The signing authority, the registry, and the revocation path do not ship with it.
systemprompt.io fills those three gaps. Register each MCP server once, scope it by RBAC role or department, sign its manifest with your keys, and distribute the allowlist to every Cowork install from one source. Revoke a server centrally and every laptop converges on the next session. Tool policy evaluates per-principal and per-context, so the same Stripe tool can be allowed for a developer working in the payments context and blocked for that same developer working in marketing.
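Per-principal, per-context evaluation of that kind can be sketched as a first-match rule table. The rule schema below is an assumption for illustration, not the systemprompt.io policy format:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    tool: str         # tool name, or "*" for any
    department: str   # department context, or "*" for any
    decision: str     # "allow" | "ask" | "block"

# Illustrative rules: order encodes precedence, first match wins.
RULES = [
    Rule("stripe.refund", "payments", "allow"),
    Rule("stripe.refund", "*", "block"),
    Rule("*", "*", "ask"),  # default: surface the call to the user
]

def evaluate(tool: str, department: str) -> str:
    for r in RULES:
        if r.tool in (tool, "*") and r.department in (department, "*"):
            return r.decision
    return "block"

# Same tool, different context for the same principal:
# evaluate("stripe.refund", "payments")  -> "allow"
# evaluate("stripe.refund", "marketing") -> "block"
```

The point of evaluating against the authenticated principal and session context, rather than per-device, is exactly this asymmetry: one registry entry, different outcomes by context.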
Every allowed tool call writes into the same audit_events table as the inference request, linked by trace_id. "What MCP servers did this agent reach, and was it allowed to" becomes one SQL query.
- One registry, every device — Register each MCP server once. The allowlist distributes via MDM or a polling plugin, so add, scope, or revoke centrally and every install converges.
- Per-principal tool policy — Cowork's allow/ask/block is per-tool and per-device. systemprompt.io evaluates each tool call against the authenticated principal and session context, so the same tool can be allowed for one team and blocked for another.
- Signed manifests — MCP manifests are signed with your keys. isDesktopExtensionSignatureRequired becomes a live check. A tampered manifest fails before Cowork opens the session.
Governed org-plugins Supply Chain
Cowork distributes plugins (skills, commands, subagents, MCP servers) through a local mount: /Library/Application Support/Claude/org-plugins/ on macOS, C:\ProgramData\Claude\org-plugins\ on Windows. The manifests are plugin.json and .mcp.json. The mechanism ends at "files on disk". Provenance, revocation, version history, and department scoping are left to the enterprise.
systemprompt.io is the supply chain behind that mount. Authors sign plugins in the dashboard. Versions land in your database. Entitlement scopes by RBAC role and department. A sync agent on each laptop writes only the user's entitled set into the org-plugins/ path. An update propagates on the next Cowork session. A withdrawal removes the plugin from every install before the next invocation.
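The sync step reduces to an entitlement filter over the mount. Directory layout, entitlement names, and the catalog shape below are assumptions for illustration:

```python
from pathlib import Path

# Hypothetical entitlements: department -> plugin names the user may run.
ENTITLEMENTS = {
    "finance": {"expense-audit", "ledger-export"},
    "engineering": {"repo-triage"},
}

def sync(department: str, catalog: dict[str, str], mount: Path) -> None:
    """Write only the entitled plugin set into the org-plugins mount.

    catalog maps plugin name -> manifest JSON; mount is the org-plugins dir.
    """
    entitled = ENTITLEMENTS.get(department, set())
    mount.mkdir(parents=True, exist_ok=True)
    # Revocation path: remove anything the user is no longer entitled to.
    for existing in mount.iterdir():
        if existing.stem not in entitled:
            existing.unlink()
    # Write the entitled set from the central catalog.
    for name in entitled & catalog.keys():
        (mount / f"{name}.json").write_text(catalog[name])
```

Running the filter on every sync is what makes withdrawal converge: a revoked plugin is deleted from the mount before the next Cowork session can load it.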
The local directory is the delivery mechanism Anthropic specifies. The marketplace (browse, install, fork, publish, review) lives in systemprompt.io, so authors ship once and distribute everywhere under one policy.
- Per-user plugin sets — Finance sees finance plugins. Engineering sees engineering plugins. The sync agent writes only the entitled set into org-plugins/, so a laptop is not a library of things its owner is not authorised to run.
- Signed, versioned, revocable — Every plugin manifest is signed. Every version is stored. Revocation is a single dashboard action. A compromised plugin disappears on the next sync.
- Marketplace behind the mount — The local org-plugins directory is the delivery path. The marketplace lives in systemprompt.io, so your team authors skills and plugins once and distributes them everywhere.
Evidence on Every Tool Call
Cowork emits OpenTelemetry metrics on third-party inference. Useful for dashboards. Insufficient when an auditor asks which prompt caused a tool call, who authorised it, and what the model returned. systemprompt.io captures the full lineage (prompt, completion, tool calls, MCP invocations, cost) as structured JSON, keyed on trace_id, stored in your PostgreSQL.
The same trace_id appears on the gateway log row, the MCP call row, the plugin load event, and the session record. Forward the JSON stream to Splunk, ELK, Datadog, or Sumo Logic and the SIEM ingests it without a custom parser. Query it from the systemprompt.io CLI for ad-hoc work. Export CSV for an auditor. The record is the same, surfaced differently.
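The trace_id linkage reads naturally as a join. This toy uses SQLite in memory and invented column names in place of the real PostgreSQL schema:

```python
import sqlite3

# Toy schema standing in for the audit store; the real deployment is
# PostgreSQL and its column names may differ.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE audit_events (trace_id TEXT, user_id TEXT, prompt TEXT, cost_microusd INTEGER);
CREATE TABLE tool_calls   (trace_id TEXT, tool TEXT, decision TEXT);
""")
db.execute("INSERT INTO audit_events VALUES ('t-1', 'alice', 'refund order 42', 1200)")
db.execute("INSERT INTO tool_calls VALUES ('t-1', 'stripe.refund', 'allow')")

# "What did this agent do, and who authorised it" as a single JOIN on trace_id
rows = db.execute("""
    SELECT a.user_id, a.prompt, t.tool, t.decision, a.cost_microusd
    FROM audit_events a JOIN tool_calls t USING (trace_id)
    WHERE a.trace_id = 't-1'
""").fetchall()
# rows -> [('alice', 'refund order 42', 'stripe.refund', 'allow', 1200)]
```

One key per lineage is the whole trick: because every row carries the same trace_id, the auditor's question needs no correlation heuristics, just a join.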
- Prompt, tool call, MCP, cost lineage — Every AI request writes a structured row. Every tool call and MCP invocation that follows links by trace_id. 'What did this agent do' is a single JOIN.
- SIEM-ready JSON — Structured events for Splunk, ELK, Datadog, Sumo Logic. No custom parsers. Your security team treats AI activity as a system event.
- Identity on every row — Every record carries the authenticated user, session, agent, and plugin. Anonymous AI activity is architecturally impossible.
MDM-Friendly Rollout
Cowork reads a macOS .mobileconfig under com.anthropic.claudefordesktop and a Windows .reg under HKCU\SOFTWARE\Policies\Claude. systemprompt.io ships template profiles for both. IT sets the gateway URL, delivers the helper binary path, and applies the profile. Cowork launches with the /v1/messages endpoint selected, the MCP allowlist URL populated, the plugin sync agent bootstrapped, and tool policies set from central policy.
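An illustrative payload fragment for the macOS profile might look like the following. Only the two boolean keys are taken from Cowork's documented settings; the remaining key names are assumed for the sketch:

```xml
<!-- Illustrative fragment for the com.anthropic.claudefordesktop domain.
     thirdPartyInferenceUrl and credentialHelperPath are assumed key names. -->
<dict>
  <key>isLocalDevMcpEnabled</key>
  <false/>
  <key>isDesktopExtensionSignatureRequired</key>
  <true/>
  <key>thirdPartyInferenceUrl</key>
  <string>https://gateway.example.internal/v1/messages</string>
  <key>credentialHelperPath</key>
  <string>/usr/local/bin/sp-credential-helper</string>
</dict>
```

The Windows .reg template carries the same values as registry strings under HKCU\SOFTWARE\Policies\Claude.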
A user opens Cowork for the first time and lands in a governed environment. No Developer-mode toggle. No API-key paste. No "which MCP server should I add" question.
Founder-led. Self-service first.
No sales team. No demo theatre. The template is free to evaluate — if it solves your problem, we talk.
Who we are
One founder, one binary, full IP ownership. Every line of Rust, every governance rule, every MCP integration — written in-house. Two years of building AI governance infrastructure from first principles. No venture capital dictating roadmap. No advisory board approving features.
How to engage
Evaluate
Clone the template from GitHub. Run it locally with Docker or compile from source. Full governance pipeline.
Talk
Once you have seen the governance pipeline running, book a meeting to discuss your specific requirements — technical implementation, enterprise licensing, or custom integrations.
Deploy
The binary and extension code run on your infrastructure. Perpetual licence, source-available under BSL-1.1, with support and update agreements tailored to your compliance requirements.