Prelude
Anthropic shipped real governance tooling with Claude Enterprise. This is not a marketing page with vague promises about "enterprise-ready AI." The managed policy settings, spend caps, SSO integration, compliance API, and role-based permissions are genuine controls that solve genuine problems. For many organisations, they are sufficient.
Disclosure: I built systemprompt.io, a self-hosted AI governance platform. I have an obvious interest in the comparison that follows. I will do my best to be fair, because dishonest comparisons help nobody and waste the time of anyone evaluating these options seriously. Where Claude Enterprise does the job, I will say so. Where it does not, I will explain specifically why.
This guide exists because we keep having the same conversation with CTOs and engineering leads. They have seen the Claude Enterprise feature list. They understand the managed settings. What they cannot figure out from the marketing pages alone is where those controls stop and what sits beyond them. Not because Anthropic is hiding anything, but because the boundary between "sufficient governance" and "we need more" depends entirely on what your compliance requirements actually demand.
If your organisation runs Claude and only Claude, has no air-gap requirement, and needs config-level policy management rather than real-time enforcement, Claude Enterprise may be everything you need. Read this guide to confirm that, or to understand exactly where the gap appears if your situation is more complex.
Claude Enterprise Is Real Governance
It would be easy to write this section as a strawman. To dismiss Claude Enterprise governance as a checkbox exercise or a thin layer of admin settings designed to look good in a procurement deck. That would be dishonest.
What Anthropic has built is substantive. The managed policy settings give administrators genuine control over what Claude Code can and cannot do across an organisation. Tool permissions can be configured centrally. File access patterns can be restricted. MCP server configurations can be managed at the organisational level rather than leaving each developer to configure their own.
This is not trivial. Before managed settings existed, enterprise Claude Code deployments relied on trust and convention. Developers configured their own tool permissions. File access boundaries were enforced by hope. MCP servers were configured per-project with no central visibility. Managed settings replaced hope with policy, and that matters.
The spend controls are equally real. Per-user spend caps prevent runaway costs. Self-serve seat management lets teams scale without procurement cycles. Usage dashboards provide visibility into who is spending what and where.
SSO with SCIM provisioning means that when someone leaves the organisation, their Claude access is revoked automatically through the same identity provider that handles everything else. Role-based permissions mean that junior developers do not have the same tool access as senior engineers. The compliance API provides programmatic access to usage data in real time, not monthly reports or CSV exports.
For an organisation that has standardised on Claude as its AI provider and does not operate in a heavily regulated industry that requires on-premise deployment, this is a credible governance stack. I would not tell such an organisation they need something more. They probably do not.
What Claude Enterprise Governance Includes
To make this comparison useful, here is a specific breakdown of what Claude Enterprise provides. Not from marketing copy, but from the actual feature set available to Enterprise customers.
Managed Policy Settings
Administrators can define organisation-wide policies for Claude Code. These include tool permissions that control which tools are available to which roles, file access patterns that restrict what parts of the codebase Claude can read or write, and MCP server configurations that determine which external integrations are permitted. These policies are pushed centrally and apply to all users without requiring individual configuration.
This is config-level governance. Policies are defined, distributed, and enforced through configuration. They control what Claude is allowed to do in the same way that an IAM policy controls what an AWS role is allowed to do. The policies are evaluated when a session starts, not on every individual tool call.
Spend Controls
Per-user spend caps set maximum token expenditure. When a user hits their limit, usage is throttled or paused. Self-serve seat management allows team leads to add and remove users without waiting for IT procurement. Usage dashboards provide daily and weekly visibility into consumption patterns.
The spend controls are per-user. They are effective for preventing individual runaway costs. They do not natively support per-department, per-project, or per-model attribution. If your finance team needs to know how much the backend team spent on Opus versus how much the frontend team spent on Sonnet, you will need to build that reporting yourself using the compliance API data.
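As a rough sketch of what that externally built reporting might look like: the record fields below are hypothetical (the actual compliance API response schema will differ), and the user-to-cost-centre mapping is something you must build and maintain yourself.

```python
from collections import defaultdict

# Hypothetical usage records as they might come back from a usage-export API.
# Field names are illustrative, not the actual compliance API schema.
usage_records = [
    {"user": "alice", "model": "claude-opus", "cost_usd": 12.40},
    {"user": "bob", "model": "claude-sonnet", "cost_usd": 3.10},
    {"user": "alice", "model": "claude-sonnet", "cost_usd": 1.25},
]

# The user -> cost-centre mapping lives outside Claude Enterprise entirely.
cost_centres = {"alice": "backend", "bob": "frontend"}

def attribute_costs(records, mapping):
    """Aggregate spend by (department, model) from per-user usage records."""
    totals = defaultdict(float)
    for rec in records:
        dept = mapping.get(rec["user"], "unassigned")
        totals[(dept, rec["model"])] += rec["cost_usd"]
    return dict(totals)

report = attribute_costs(usage_records, cost_centres)
```

The aggregation itself is trivial; the ongoing cost is keeping the mapping accurate as people move between teams, which is exactly the maintenance burden the paragraph above describes.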
Identity and Access
SSO integration supports SAML and OIDC providers. SCIM provisioning automates user lifecycle management. When someone is offboarded in your identity provider, their Claude Enterprise access is revoked in the same workflow. Role-based permissions allow different levels of access based on organisational role. IP allowlisting restricts access to known corporate networks or VPN endpoints.
This is standard enterprise identity infrastructure, and it is done well. The SCIM integration in particular removes a class of security risk that plagues most SaaS AI tools, where former employees retain access because nobody remembered to revoke it manually.
Compliance and Audit
The compliance API provides real-time programmatic access to usage data. Conversations, tool usage, token consumption, and policy violations can be queried programmatically. Custom data retention policies allow organisations to define how long conversation data is stored. Selective deletion capabilities enable removal of specific conversations or time ranges.
Anthropic maintains SOC 2 Type II compliance. HIPAA-ready options are available for healthcare organisations that need a Business Associate Agreement. Data processing agreements are available for GDPR requirements.
Deployment Options
Claude Enterprise itself is SaaS hosted by Anthropic. For organisations that need data to stay within their own cloud account, Claude is available through Amazon Bedrock, Google Vertex AI, and Azure. These cloud provider deployments keep inference data within the customer's cloud account and subject to their cloud provider's compliance certifications.
This is a genuine option for organisations whose compliance requirement is "data must not leave our cloud account" rather than "data must not leave our physical infrastructure." It is an important distinction that we will return to.
Where Claude Enterprise Stops
Here is where the comparison becomes more nuanced. Claude Enterprise governance is excellent at what it does. But it was designed to govern Claude, and only Claude, within the boundaries of a SaaS deployment model. Several specific capabilities fall outside its scope.
Single-Provider Governance
Claude Enterprise governs Claude. If your organisation also uses OpenAI for certain workloads, Google Gemini for multimodal tasks, or runs local models for sensitive data processing, those providers are not covered. You have Claude Enterprise governance for your Claude usage, and nothing for everything else.
This is not a criticism. It would be unreasonable to expect Anthropic to govern competing products. But it does mean that any organisation using multiple AI providers needs a separate governance layer for the providers that Claude Enterprise does not cover, or a provider-agnostic layer that covers all of them including Claude.
In practice, multi-provider setups are increasingly common. Teams use Claude for complex reasoning, GPT-4 for certain API integrations, Gemini for multimodal work, and local models for data that cannot leave the building. Each of those providers has its own (or no) governance tooling, creating a fragmented governance landscape.
SaaS-Only Deployment
Claude Enterprise is SaaS. The Bedrock, Vertex, and Azure options keep inference data within your cloud account, but they still require outbound connections to cloud infrastructure. There is no option for true air-gapped deployment where the governance platform runs entirely within your own data centre with zero outbound connections.
For most organisations, cloud deployment is fine. For financial institutions with trading floor restrictions, healthcare organisations processing patient data under strict interpretations of HIPAA, government agencies operating within classified environments, and defence contractors subject to ITAR or similar controls, "fine" is not sufficient. These organisations need governance infrastructure that runs behind their firewall, on their hardware, with no external dependencies.
Config-Level vs Real-Time Enforcement
Claude Enterprise managed settings are configuration. They define what is permitted and what is not, and those definitions are applied when a session begins or when policy is evaluated. They are not a synchronous enforcement layer that evaluates every individual tool call before it executes.
The difference matters in regulated environments. Config-level governance says "this user is allowed to use the file write tool." Real-time enforcement says "this specific file write, to this specific path, with this specific content, is evaluated against policy rules before the write happens." The first is access control. The second is execution governance.
Consider a scenario where a developer has file write permission but accidentally includes a database connection string in a code comment. Config-level governance does not intercept this because the user has write permission. Real-time enforcement catches it because a secret detection rule evaluates the content before the write executes.
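The shape of that pre-execution content check can be sketched in a few lines. The patterns below are illustrative examples only, not the rule set of any actual product:

```python
import re

# Illustrative secret patterns; a real rule set would cover many more formats.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                          # AWS access key ID
    re.compile(r"postgres(?:ql)?://\S+:\S+@\S+"),             # DB connection string with credentials
    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),  # PEM private key
]

def inspect_before_write(content: str) -> bool:
    """Return True if the write may proceed, False if a secret pattern matched."""
    return not any(p.search(content) for p in SECRET_PATTERNS)

# The scenario above: a connection string hiding in a code comment.
code = '# temp: postgres://admin:hunter2@db.internal:5432/prod\nprint("hello")\n'
allowed = inspect_before_write(code)  # False -- the write is blocked
```

The point is that the check runs on the content of this specific write, not on whether the user holds write permission.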
SIEM Integration
Claude Enterprise provides a compliance API that returns usage data programmatically. This data can be fed into SIEM systems like Splunk, ELK, or Datadog, but the integration requires custom development: the compliance API is a pull-based interface, and it does not emit structured events in the schemas that SIEM platforms natively ingest.
The practical difference is engineering effort. With the compliance API, your security team builds a polling integration that queries the API on a schedule, transforms the response into the event format your SIEM expects, and pushes it into your pipeline. This works, but it is custom code that needs maintenance, monitoring, and updates when either the compliance API or your SIEM configuration changes.
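A minimal version of that polling-and-transform loop might look like the following. The endpoint URL, auth header, and response field names are hypothetical placeholders, not the real compliance API surface:

```python
import json
import urllib.request

API_URL = "https://api.example.com/v1/usage"   # placeholder, not the real endpoint
SIEM_URL = "https://siem.example.com/ingest"   # your SIEM's HTTP event collector

def transform(record: dict) -> dict:
    """Map a hypothetical usage record into a flat SIEM-friendly event."""
    return {
        "timestamp": record.get("created_at"),
        "actor": record.get("user_email"),
        "action": record.get("event_type"),
        "outcome": record.get("status"),
        "source": "claude-compliance-api",
    }

def poll_once(token: str) -> list:
    """One polling pass: fetch usage records and transform them for the SIEM."""
    req = urllib.request.Request(API_URL, headers={"Authorization": f"Bearer {token}"})
    with urllib.request.urlopen(req) as resp:
        return [transform(r) for r in json.load(resp)["records"]]

# A production version also needs cursoring/deduplication, retry with backoff,
# and monitoring of the poller itself -- all custom code to maintain.
```

None of this is hard, but it is a bespoke pipeline your team owns forever, for a single data source.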
Native SIEM integration means the governance platform emits structured events directly, in the format your SIEM already ingests, through output paths that your infrastructure team already operates. No polling. No transformation. No custom integration code.
Secret Detection at the Tool Call Layer
Claude Enterprise does not include secret detection that operates at the tool call layer. If a developer's prompt results in Claude generating code that contains an API key, database password, or cloud credential, there is no governance mechanism that intercepts the output before it is written to a file.
Anthropic's own safety systems prevent Claude from intentionally leaking secrets, and Claude will generally refuse to output credentials it detects. But there is a difference between an AI model's behavioural guardrails and a deterministic governance rule that matches every tool call output against 35+ secret patterns, regardless of what the model decides to do.
Skill Marketplace and Knowledge Management
Claude Enterprise provides the tools to use Claude effectively. It does not provide a centralised marketplace for sharing skills, prompts, or governance configurations across an organisation. Teams using Claude Enterprise share knowledge through their own internal documentation, wikis, and repositories.
For large organisations where multiple teams independently develop Claude workflows, the absence of a centralised skill marketplace means duplicated effort. Three different teams build three different code review skills. Two teams independently develop deployment checklists. Nobody knows what already exists because there is no central catalogue.
Per-Department Cost Attribution
Spend caps in Claude Enterprise are per-user. There is no native mechanism for attributing costs to departments, projects, cost centres, or business units. The compliance API data can be used to build this reporting, but the attribution logic, the mapping of users to cost centres, and the aggregation must be built externally.
For organisations where AI spend needs to appear on departmental budgets, this means building and maintaining a cost attribution pipeline. It works. It just requires engineering effort that might be better spent elsewhere.
What Self-Hosted Governance Adds
Self-hosted governance is not a replacement for Claude Enterprise. It is a different architectural approach that solves a different set of problems. Using systemprompt.io as a concrete example, here is what a self-hosted governance platform adds beyond what Claude Enterprise provides.
Provider-Agnostic Governance
A self-hosted platform sits between your developers and all AI providers. Claude, OpenAI, Gemini, Mistral, local models running on your own GPU clusters. Every request, regardless of provider, passes through the same governance pipeline, is subject to the same policy rules, and generates the same audit events.
This eliminates the fragmentation problem. One set of policies. One audit trail. One SIEM integration. One cost attribution system. Whether the underlying model is Claude Opus, GPT-4, or a fine-tuned Llama running on your own hardware, the governance is identical.
Air-Gapped Deployment
A self-hosted platform runs on your infrastructure. Your servers. Your network. Your compliance boundary. For true air-gapped deployment, the platform operates with zero outbound connections. All governance evaluation happens locally. Audit logs are stored locally. SIEM events are emitted to local infrastructure.
When combined with local model inference, this creates a fully contained AI deployment where no data, no prompts, no model outputs, and no usage telemetry ever leave your physical infrastructure. For organisations in finance, healthcare, government, and defence where this is a regulatory requirement rather than a preference, self-hosted is not optional. It is the only viable architecture.
Synchronous Enforcement Pipeline
systemprompt.io evaluates every tool call through a 4-layer governance pipeline before execution. Not after. Not as a log entry that someone reviews later. Before the tool call runs.
The pipeline operates synchronously in the request path. A tool call enters the pipeline. Layer one evaluates organisational policy. Layer two evaluates team-level rules. Layer three runs content inspection including secret detection. Layer four evaluates custom rules defined by the organisation. Only if all four layers pass does the tool call execute.
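As a sketch of the control flow only (not systemprompt.io's actual implementation), a synchronous layered pipeline evaluates every layer in order and blocks on the first failure:

```python
# Each layer is a predicate over the tool call; all rules here are toy examples.
def org_policy(call: dict) -> bool:
    return call["tool"] in {"read_file", "write_file"}

def team_rules(call: dict) -> bool:
    return not call["path"].startswith("/etc/")

def content_inspection(call: dict) -> bool:
    return "AKIA" not in call.get("content", "")  # toy secret check

def custom_rules(call: dict) -> bool:
    return True  # organisation-defined extensions would plug in here

PIPELINE = [org_policy, team_rules, content_inspection, custom_rules]

def evaluate(call: dict) -> bool:
    """Run every layer before execution; a single failing layer blocks the call."""
    return all(layer(call) for layer in PIPELINE)

# Only a call that passes all four layers is executed.
ok = evaluate({"tool": "write_file", "path": "/app/main.py", "content": "print(1)"})
```

Because `evaluate` sits in the request path, the tool call simply never runs if any layer returns false; there is no after-the-fact review step.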
This is the architectural difference between governance-as-configuration and governance-as-enforcement. Configuration says what is allowed. Enforcement ensures what actually happens complies with what is allowed, on every single operation.
SIEM-Native Event Emission
systemprompt.io emits structured JSON events through three output paths: stdout for container log aggregation, file output for traditional log management, and webhook output for direct SIEM ingestion. Events follow a consistent schema with trace IDs, timestamps, actor identification, action classification, and outcome recording.
Your security operations team adds the governance platform to their existing log pipeline the same way they add any other infrastructure component. No custom integration. No polling. No transformation layer. The events arrive in the format their tools already expect because the platform was designed to emit in those formats from the start.
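The stdout path in particular is trivially compatible with container log aggregation: one JSON object per line, picked up by whatever already tails the container's logs. A governance event in that style (the schema fields here are illustrative, not the platform's exact schema) might be emitted like this:

```python
import json
import sys
import uuid
from datetime import datetime, timezone

def emit_event(actor: str, action: str, outcome: str, **detail) -> dict:
    """Emit one structured governance event to stdout for log-pipeline pickup."""
    event = {
        "trace_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "actor": actor,
        "action": action,
        "outcome": outcome,
        "detail": detail,
    }
    sys.stdout.write(json.dumps(event) + "\n")  # one JSON object per line
    return event

ev = emit_event("alice@example.com", "tool_call.write_file", "blocked",
                rule="secret-detection")
```

A SIEM that already ingests newline-delimited JSON from container logs needs no new integration work to consume events like these.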
Secret Detection
Every tool call output passes through a secret detection layer that pattern-matches against 35+ known secret formats. AWS access keys. Database connection strings. OAuth tokens. API keys for major cloud providers. Private keys in PEM format. The detection is deterministic. It does not depend on the AI model's judgement about whether something looks like a secret.
When a secret is detected, the tool call is blocked before execution. The event is logged with the detection pattern, the tool call context, and the actor identification. The developer is informed that their operation was blocked and why. Server-side credential injection provides a secure alternative: credentials are injected into tool call contexts at execution time without ever appearing in prompts or model outputs.
Full Audit Trail
systemprompt.io records 16 event hooks across 5 trace points for every AI interaction. Session creation. Prompt submission. Tool call request. Tool call execution. Response delivery. Each trace point captures the full context: who, what, when, which policy rules were evaluated, and what the outcome was.
This is not a usage log. It is a forensic audit trail that can reconstruct exactly what happened during any AI interaction, which policies were in effect, which rules were triggered, and whether the operation was permitted or blocked. For compliance teams that need to demonstrate governance during an audit, this is the difference between "we have usage data" and "we can show you exactly what controls were in effect and what they did."
Skill Marketplace
A centralised marketplace allows organisations to publish, discover, and install governance configurations, prompt templates, and workflow skills. Teams share what they build. New teams discover what already exists. Governance configurations that one team develops can be rolled out organisation-wide through the marketplace rather than being manually copied between projects.
Cost Attribution
Every AI request is tagged with model, provider, agent, team, department, and project identifiers. Cost attribution happens automatically at the request level, not as a post-hoc calculation from usage data. Finance teams get per-department, per-model, per-project cost breakdowns without building and maintaining a custom attribution pipeline.
Head-to-Head Comparison
| Capability | Claude Enterprise | Self-Hosted (systemprompt.io) |
|---|---|---|
| Provider support | Claude only | Claude, OpenAI, Gemini, Mistral, local models |
| Deployment model | SaaS (+ Bedrock/Vertex/Azure) | On-premise, air-gapped, any cloud |
| Tool call governance | Config-level managed settings | Synchronous 4-layer enforcement pipeline |
| RBAC | Role-based permissions via SSO | Role-based + team + department + project scoping |
| SIEM integration | Compliance API (custom integration required) | Native structured JSON, 3 output paths |
| Secret detection | Model behavioural guardrails | Deterministic 35+ pattern matching, pre-execution |
| Audit trail | Compliance API usage data | 16 event hooks, 5-point trace, forensic-grade |
| Skill management | None (internal team tooling) | Centralised marketplace with publishing workflow |
| Cost tracking | Per-user spend caps | Per-model, per-agent, per-department, per-project |
| Compliance frameworks | SOC 2 Type II, HIPAA-ready | Inherits your infrastructure compliance + built-in audit |
| Air-gapped capable | No | Yes, zero outbound connections |
| Data residency | Anthropic SaaS or your cloud account | Your infrastructure, your jurisdiction |
| Extension model | Managed settings configuration | Compile-time Rust extensions, custom governance rules |
| Pricing | Per-seat enterprise pricing | Self-hosted license, scales with infrastructure |
When Claude Enterprise Is Enough
I said at the start that I would be fair, and this is where that commitment matters most. Claude Enterprise governance is sufficient when:
Your organisation only uses Claude. If every AI workload in your organisation runs through Claude, then single-provider governance is not a limitation. It is a feature. The governance is deeply integrated with the product rather than sitting as a generic layer on top.
You do not need air-gapped deployment. If your compliance requirements are satisfied by data staying within your cloud account (via Bedrock or Vertex), SaaS deployment is fine. Most organisations, even in regulated industries, can meet their compliance requirements with cloud-hosted solutions that provide adequate data residency guarantees.
Managed settings meet your governance needs. If your governance requirement is "control which tools developers can use and how much they can spend," config-level governance delivers this. Not every organisation needs synchronous pre-execution enforcement on every tool call. Many need exactly what managed settings provide: centrally defined, consistently applied configuration policies.
You do not need native SIEM integration. If your security team is comfortable building a custom integration from the compliance API to your SIEM platform, or if AI governance events do not need to flow into your SIEM at all, the compliance API is sufficient.
Your compliance requirements do not mandate on-premise AI governance. This is the key question. If your auditors, regulators, or internal compliance team accept that AI governance can be provided by a SaaS vendor with appropriate certifications (SOC 2, HIPAA BAA, DPA), then Claude Enterprise meets the requirement. Not every regulated industry requires on-premise governance infrastructure.
If all of these conditions are true, Claude Enterprise is the right choice. It is simpler to operate than a self-hosted platform. It requires no infrastructure management. It is maintained by Anthropic. And it is deeply integrated with the Claude product in ways that a third-party platform cannot replicate.
When You Need Self-Hosted Governance
The calculus changes when any of the following conditions are true:
You use multiple AI providers. The moment your organisation runs workloads on both Claude and any other provider, you face a choice: maintain separate governance for each provider, or deploy a provider-agnostic layer that covers all of them. Separate governance means separate policies, separate audit trails, separate SIEM integrations, and separate cost attribution. Provider-agnostic governance eliminates this fragmentation.
Compliance requires on-premise or air-gapped deployment. Financial institutions operating under MAS TRM, DORA, or similar frameworks often require that governance infrastructure for critical systems runs on-premise. Healthcare organisations under conservative interpretations of HIPAA may require that AI governance, not just the AI itself, operates within their physical compliance boundary. Government agencies and defence contractors frequently require air-gapped deployment with zero outbound connections. For these organisations, SaaS governance is not an option regardless of the vendor's certifications.
You need real-time SIEM integration with structured events. If your security operations centre requires that every AI governance event flows into Splunk, ELK, or Datadog in real time, in a structured format, without custom integration code, the compliance API approach will not satisfy the requirement. Your SOC team will push back on maintaining a custom polling and transformation pipeline for a single data source when every other infrastructure component emits events natively.
You need synchronous pre-execution enforcement. If your risk framework requires that every AI tool call is evaluated against governance rules before execution, not after, config-level governance does not meet the requirement. The distinction between "this user is allowed to use this tool" and "this specific invocation of this tool with these parameters passes all governance checks" is the distinction between access control and execution governance.
You need deterministic secret detection. If your security policy requires that secrets are caught by pattern matching rather than model behaviour, you need a governance layer that inspects tool call content deterministically. Model behavioural guardrails are good but they are probabilistic. A regex that matches an AWS access key pattern either matches or it does not.
You want to own and extend the governance platform. If your organisation wants to define custom governance rules in code, not just configuration, a compile-time extension model allows you to add rules that are specific to your industry, your compliance framework, or your internal policies. This is the difference between configuring someone else's governance and building governance that is genuinely yours.
Can You Use Both?
Yes. This is not a mutually exclusive choice.
Claude Enterprise governs the Claude-specific experience. Managed settings control tool permissions within Claude Code. Spend caps manage per-user costs. SSO and SCIM handle identity lifecycle. The compliance API provides Claude-specific usage data. These capabilities are deeply integrated with the Claude product and work best when used as Anthropic designed them.
systemprompt.io adds the governance layer that operates across providers. The synchronous enforcement pipeline, the SIEM integration, the secret detection, the cross-provider audit trail, and the cost attribution all operate at a layer above any individual AI provider.
In a combined deployment, Claude Enterprise handles what it handles best: Claude-native governance. systemprompt.io handles what it handles best: provider-agnostic enforcement, audit, and compliance. A developer's Claude Code session is governed by Claude Enterprise managed settings and by the systemprompt.io enforcement pipeline. The two layers are complementary, not competing.
This is the deployment pattern we see most often in organisations that have already adopted Claude Enterprise and then discover they need broader governance. They do not rip out Claude Enterprise. They add a governance layer on top that extends coverage to their full AI portfolio.
How systemprompt.io Addresses This
If the self-hosted governance capabilities described in this guide are relevant to your organisation, here is where to go deeper:
Governance Pipeline — The 4-layer synchronous enforcement architecture, with details on policy evaluation, content inspection, and custom rule extension.
Compliance — Audit trail structure, SIEM integration paths, and compliance framework alignment for SOC 2, HIPAA, GDPR, and financial services regulations.
Unified Control Plane — Provider-agnostic management of Claude, OpenAI, Gemini, and local models through a single interface.
Self-Hosted AI Platform — Air-gapped deployment architecture, infrastructure requirements, and deployment patterns for on-premise environments.
Dashboard — Cost attribution, usage analytics, and per-department reporting across all AI providers.
Quick Start — Deploy systemprompt.io on your infrastructure and evaluate the governance pipeline against your compliance requirements.
The honest conclusion is straightforward. Claude Enterprise is good governance for Claude. If that is all you need, use it and do not add complexity. If you need governance that spans providers, runs on your infrastructure, enforces in real time, and emits events your SIEM can ingest natively, that is a different problem with a different solution. Know which problem you have, and choose accordingly.