EVERY TOOL CALL GOVERNED.
Built natively on the Model Context Protocol. Per-server OAuth2 scoping, agent-tool mapping enforced through the plugin manifest, and a central registry that validates every server before it starts.
Per-Server OAuth2 Scoping
Each MCP server is an independent OAuth2 resource server with its own audience, scopes, and signing material. The validate_oauth_configs function in the registry validator rejects any enabled server that requires OAuth but defines no scopes, catching misconfiguration at deploy time rather than at runtime. The enforce_rbac_from_registry function in the RBAC middleware loads the per-server OAuth configuration, validates JWT bearer tokens, checks the audience claim against the server's configured audience, and validates scopes against the caller's permission set before any request reaches the tool runtime.
Scope elevation is blocked by Permission::implies. The validate_scopes_for_permissions function compares each required scope against the caller's permissions using user_perm.implies(required), so a caller cannot satisfy a higher-privileged scope by holding a narrower one. Proxy-verified requests flow through try_proxy_verified_auth, which validates x-proxy-verified, x-user-id, and x-user-permissions headers before granting access.
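The elevation check described above can be sketched as follows. This is a minimal illustration, not the real permission.rs: the Read/Write/Admin variants are hypothetical stand-ins for the actual scope hierarchy, and only the names Permission::implies and validate_scopes_for_permissions come from the source.

```rust
// Illustrative sketch of the Permission::implies hierarchy. The real enum in
// permission.rs defines the actual variants; Read/Write/Admin are assumed here.
#[derive(Clone, Copy, PartialEq)]
enum Permission {
    Read,
    Write,
    Admin,
}

impl Permission {
    // A broader permission implies every narrower one, never the reverse.
    fn implies(self, required: Permission) -> bool {
        use Permission::*;
        match (self, required) {
            (Admin, _) => true,
            (Write, Write) | (Write, Read) => true,
            (Read, Read) => true,
            _ => false,
        }
    }
}

// Every required scope must be implied by at least one held permission.
fn validate_scopes_for_permissions(required: &[Permission], held: &[Permission]) -> bool {
    required
        .iter()
        .all(|req| held.iter().any(|user_perm| user_perm.implies(*req)))
}

fn main() {
    // A Read-only caller cannot satisfy a Write scope: no elevation.
    assert!(!validate_scopes_for_permissions(&[Permission::Write], &[Permission::Read]));
    // A broader permission satisfies a narrower scope.
    assert!(validate_scopes_for_permissions(&[Permission::Read], &[Permission::Admin]));
}
```

Because implication only flows downward in the hierarchy, holding a narrower scope can never satisfy a broader requirement.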
Credential rotation happens per server. The validate_jwt_token function in auth.rs validates tokens against a configurable JWT secret, issuer, and audience list per server, so rotating the database server's client secret leaves the file server untouched. The AuthResult enum separates Anonymous and Authenticated paths cleanly. Unauthenticated requests to an OAuth-required server return a structured MCP error rather than being silently downgraded.
- Isolated Scopes — validate_oauth_configs rejects servers with OAuth enabled but no scopes defined. Each MCP server defines its own audience and scope requirements. No shared authorization surface.
- Independent Revocation — validate_audience checks JWT audience claims per server. Revoke access to one MCP server without affecting any other. Compromise containment is structural.
- Credential Rotation — validate_jwt_token validates against per-server JWT secret, issuer, and audience list. Rotate OAuth2 client secrets per server on independent schedules.
- rbac.rs enforce_rbac_from_registry, validate_scopes_for_permissions, AuthResult enum
- auth.rs validate_jwt_token with HS256, issuer, and audience validation
- validator.rs#L96-L114 validate_oauth_configs rejects enabled servers that require OAuth but define no scopes
- permission.rs#L93 Permission::implies, the scope hierarchy used by validate_scopes_for_permissions
- session_manager.rs DatabaseSessionManager with per-session persistence and activity tracking
Governed Tool Calls
Every tool call flows through a governed pipeline. The McpToolExecutor::execute method wraps every handler invocation. It serialises input arguments, generates a unique McpExecutionId, delegates to the McpToolHandler::handle trait method, records execution status (success or failure), and persists the full execution record via ToolUsageRepository. There is no second path. Tool calls that bypass the executor do not exist in the codebase.
The McpToolHandler trait enforces type safety at compile time. Input types must implement DeserializeOwned + JsonSchema. Output types must implement Serialize + JsonSchema + McpOutputSchema. The McpOutputSchema trait generates validated JSON schemas with x-artifact-type metadata for 11 artifact types including TextArtifact, TableArtifact, ChartArtifact, and DashboardArtifact. If the crate compiles, the tool contract is valid.
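The single-path executor can be sketched as below. This is a deliberately simplified shape: the real trait bounds (DeserializeOwned + JsonSchema in, Serialize + McpOutputSchema out) are replaced with plain strings, and the Vec stands in for ToolUsageRepository, so the pipeline is visible without serde or schemars.

```rust
// Simplified stand-in for the McpToolHandler trait; the real Input/Output
// types carry compile-time schema bounds.
trait McpToolHandler {
    fn handle(&self, input: &str) -> Result<String, String>;
}

// Stand-in for the persisted execution record keyed by McpExecutionId.
struct ExecutionRecord {
    execution_id: u64,
    input: String,
    outcome: Result<String, String>,
}

struct McpToolExecutor {
    next_id: u64,
    audit_log: Vec<ExecutionRecord>, // stands in for ToolUsageRepository
}

impl McpToolExecutor {
    fn execute(&mut self, handler: &dyn McpToolHandler, input: &str) -> Result<String, String> {
        self.next_id += 1; // stands in for McpExecutionId generation
        let outcome = handler.handle(input);
        // Success and failure are both persisted; there is no unaudited path.
        self.audit_log.push(ExecutionRecord {
            execution_id: self.next_id,
            input: input.to_string(),
            outcome: outcome.clone(),
        });
        outcome
    }
}

// Hypothetical tool used only to exercise the executor.
struct EchoTool;
impl McpToolHandler for EchoTool {
    fn handle(&self, input: &str) -> Result<String, String> {
        if input.is_empty() { Err("empty input".into()) } else { Ok(input.to_string()) }
    }
}

fn main() {
    let mut executor = McpToolExecutor { next_id: 0, audit_log: Vec::new() };
    assert_eq!(executor.execute(&EchoTool, "ping"), Ok("ping".to_string()));
    assert!(executor.execute(&EchoTool, "").is_err());
    // Both the success and the failure were recorded.
    assert_eq!(executor.audit_log.len(), 2);
}
```

The point of the shape is that the audit write sits inside execute itself, so a handler cannot run without leaving a record.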
The DatabaseSessionManager implements the rmcp SessionManager trait with database-backed persistence. Sessions survive server restarts via persist_create and persist_close. The resume method detects sessions that exist in the database but not in memory (server restart scenario) and signals clients to reconnect cleanly. Activity tracking via update_activity enables session timeout enforcement. Every MCP operation flows through the same permission, audit, and session infrastructure.
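The restart-detection branch in resume can be sketched with plain sets standing in for the in-memory map and the database table; the function signature and ResumeAction variants here are illustrative, not the rmcp trait's actual API.

```rust
use std::collections::HashSet;

// Illustrative outcome of a resume attempt.
#[derive(Debug, PartialEq)]
enum ResumeAction {
    Resume,    // session live in memory: resume normally
    Reconnect, // in database but not memory: server restarted, client must reconnect
    Unknown,   // in neither: reject
}

fn resume(session_id: &str, in_memory: &HashSet<String>, in_database: &HashSet<String>) -> ResumeAction {
    if in_memory.contains(session_id) {
        ResumeAction::Resume
    } else if in_database.contains(session_id) {
        // The session survived on disk but the process did not: signal a
        // clean reconnect rather than silently resuming stale state.
        ResumeAction::Reconnect
    } else {
        ResumeAction::Unknown
    }
}

fn main() {
    let mem: HashSet<String> = ["a"].iter().map(|s| s.to_string()).collect();
    let db: HashSet<String> = ["a", "b"].iter().map(|s| s.to_string()).collect();
    assert_eq!(resume("a", &mem, &db), ResumeAction::Resume);
    assert_eq!(resume("b", &mem, &db), ResumeAction::Reconnect);
    assert_eq!(resume("c", &mem, &db), ResumeAction::Unknown);
}
```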
- Type-Safe Tool Execution — McpToolExecutor::execute wraps every call with McpExecutionId tracking, ToolUsageRepository persistence, and McpArtifactRepository storage. Compile-time schema validation via McpToolHandler trait.
- Policy Enforcement — enforce_rbac_from_registry validates JWT claims, checks OAuth2 scopes via Permission::implies hierarchy, and enforces per-server audience requirements before any tool executes.
- Database-Backed Sessions — DatabaseSessionManager persists sessions to PostgreSQL. Sessions survive restarts. Resume detects stale sessions and signals reconnect. Activity tracking enables timeout enforcement.
- tool.rs#L18-L48 McpToolHandler trait with type-safe Input/Output and McpToolExecutor::execute
- schema.rs McpOutputSchema trait with 11 artifact types and validated_schema()
- rbac.rs RBAC enforcement with proxy-verified auth and scope validation
- session_manager.rs DatabaseSessionManager with persist_create, persist_close, resume
- hooks.rs PreToolUse, PostToolUse, PostToolUseFailure hook definitions
- mcp_tool_executions.sql Tool execution audit table schema
Central MCP Server Registry
All MCP servers are discoverable from a single registry. The RegistryService::get_enabled_servers_as_config method loads the global configuration, builds the ExtensionRegistry, filters for enabled servers (skipping dev_only servers in cloud mode), resolves crate paths for internal servers, and returns a complete McpServerConfig for each, including name, port, OAuth requirements, schema definitions, tool configuration, and environment variables. One call returns the full fleet inventory.
The validate_registry function runs four validation passes before any server starts: validate_port_conflicts detects duplicate port assignments across enabled servers using a HashSet. validate_server_configs checks that internal servers have valid ports (above 1024) and existing crate paths. validate_oauth_configs rejects OAuth-enabled servers with empty scope definitions. validate_server_types ensures internal servers have binaries and external servers have remote endpoints. All four pass or no servers start.
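The first pass can be sketched directly from the description above: duplicate ports across enabled servers fail fast, disabled servers are ignored. The ServerConfig fields are assumed for illustration; the HashSet mechanism is from the source.

```rust
use std::collections::HashSet;

// Hypothetical minimal config; the real McpServerConfig carries far more.
struct ServerConfig {
    name: &'static str,
    port: u16,
    enabled: bool,
}

fn validate_port_conflicts(servers: &[ServerConfig]) -> Result<(), String> {
    let mut seen = HashSet::new();
    for s in servers.iter().filter(|s| s.enabled) {
        // HashSet::insert returns false when the port was already claimed.
        if !seen.insert(s.port) {
            return Err(format!("port {} assigned to more than one enabled server ({})", s.port, s.name));
        }
    }
    Ok(())
}

fn main() {
    let conflicting = [
        ServerConfig { name: "files", port: 5001, enabled: true },
        ServerConfig { name: "db", port: 5001, enabled: true },
    ];
    assert!(validate_port_conflicts(&conflicting).is_err());

    // A disabled server does not count against the port budget.
    let ok = [
        ServerConfig { name: "files", port: 5001, enabled: true },
        ServerConfig { name: "db", port: 5001, enabled: false },
    ];
    assert!(validate_port_conflicts(&ok).is_ok());
}
```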
The RegistryManager implements three registry traits: McpRegistry for server discovery (list_servers, find_server, server_exists), McpToolProvider for tool enumeration (list_tools, load_tools_for_servers), and McpRegistryProvider for external consumers (get_server, list_enabled_servers with ServiceOAuthConfig). The registry supports both McpServerType::Internal (compiled binaries) and McpServerType::External (remote endpoints). Each internal server binds its own port. The default range is (5000, 5999), set by default_mcp_port_range in settings.rs.
- Single Discovery Point — RegistryService::get_enabled_servers_as_config returns every MCP server with name, port, OAuth config, schemas, tools, and environment variables. One API call for the full fleet.
- Four-Pass Validation — validate_port_conflicts, validate_server_configs, validate_oauth_configs, and validate_server_types all pass or no servers start. Invalid configuration is rejected at deploy time.
- Port Isolation — Default range (5000, 5999) from default_mcp_port_range in settings.rs, one port per server. validate_port_conflicts uses a HashSet to detect duplicate assignments across enabled servers.
- manager.rs#L13-L72 RegistryService::get_enabled_servers_as_config builds the full McpServerConfig list
- settings.rs#L59-L61 default_mcp_port_range returns (5000, 5999)
- validator.rs Four-pass validation: ports, configs, OAuth, server types
- trait_impl.rs McpRegistry, McpToolProvider, McpRegistryProvider trait implementations
- deployment.rs McpServerType (Internal/External), ToolUiConfig, deployment configuration
- port_manager.rs Port isolation, cleanup, and wait_for_port_release_with_retry
- proxy_health.rs ProxyHealthCheck::can_route_traffic and get_routable_services
Configuration Integrity
Agent-tool mappings are explicitly declared in plugin configuration. The validate_single_server function checks each server's structural validity: internal servers must have ports above 1024 and existing crate paths, display names and descriptions must be non-empty, external servers must have remote endpoints and no binary references. There are no implicit permissions. If a tool is not declared in the plugin manifest, it does not exist for that agent.
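The structural checks read naturally as a sequence of guards. A minimal sketch, with field names assumed (the rules themselves are from the source: ports above 1024 for internal servers, non-empty display name and description, remote endpoint and no binary for external servers):

```rust
// Hypothetical field layout; validator.rs defines the real struct.
struct McpServerConfig {
    display_name: String,
    description: String,
    internal: bool,
    port: u16,
    remote_endpoint: Option<String>,
    binary: Option<String>,
}

fn validate_single_server(cfg: &McpServerConfig) -> Result<(), String> {
    if cfg.display_name.trim().is_empty() || cfg.description.trim().is_empty() {
        return Err("display name and description must be non-empty".into());
    }
    if cfg.internal {
        if cfg.port <= 1024 {
            return Err(format!("internal server port {} must be above 1024", cfg.port));
        }
    } else {
        if cfg.remote_endpoint.is_none() {
            return Err("external server must declare a remote endpoint".into());
        }
        if cfg.binary.is_some() {
            return Err("external server must not reference a binary".into());
        }
    }
    Ok(())
}

fn main() {
    let bad = McpServerConfig {
        display_name: "files".into(),
        description: "file server".into(),
        internal: true,
        port: 80, // privileged port: rejected
        remote_endpoint: None,
        binary: Some("mcp-files".into()),
    };
    assert!(validate_single_server(&bad).is_err());
}
```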
The validate_and_migrate_schemas function creates a SchemaValidator with configurable validation modes (strict, warn, skip) and runs validate_and_apply against every server's declared schemas. Tables are created if missing. Schema errors are collected and reported. If any server fails validation, the entire orchestration halts. The SchemaValidationReport tracks validated count, created count, and error messages for full observability.
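The create-if-missing behaviour can be sketched with a set of table names standing in for the database. This is a rough illustration only: the signature is hypothetical, and the real SchemaValidator performs actual DDL and richer error collection.

```rust
use std::collections::HashSet;

#[derive(Clone, Copy, PartialEq)]
enum ValidationMode { Strict, Warn, Skip }

#[derive(Default)]
struct SchemaValidationReport {
    validated: usize,
    created: usize,
    errors: Vec<String>, // the real validator collects schema mismatch messages here
}

fn validate_and_apply(
    mode: ValidationMode,
    declared: &[&str],
    existing: &mut HashSet<String>,
) -> SchemaValidationReport {
    let mut report = SchemaValidationReport::default();
    if mode == ValidationMode::Skip {
        return report; // nothing checked, nothing created
    }
    for table in declared {
        if existing.contains(*table) {
            report.validated += 1;
        } else {
            existing.insert(table.to_string()); // create the missing table
            report.created += 1;
        }
    }
    report
}

fn main() {
    // mcp_tool_executions exists (see the audit table schema); a second
    // hypothetical table is missing and gets created.
    let mut existing: HashSet<String> =
        ["mcp_tool_executions"].iter().map(|s| s.to_string()).collect();
    let report = validate_and_apply(
        ValidationMode::Strict,
        &["mcp_tool_executions", "mcp_sessions"],
        &mut existing,
    );
    assert_eq!((report.validated, report.created), (1, 1));
    assert!(existing.contains("mcp_sessions"));
}
```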
Configuration drift is detectable and preventable. The validate_server_type_constraints function ensures internal servers have binaries and external servers have remote endpoints, preventing mismatched server type configurations. The McpDeploymentProviderImpl exposes the protocol version (2024-11-05) for version tracking. The ServiceStatus struct captures runtime state (health, PID, uptime, tools count, latency, auth requirement) for comparison against declared configuration.
- Explicit Declarations — validate_single_server checks port validity (>1024), crate path existence, display name, description, and server type constraints. Undeclared tools do not exist for the agent.
- Schema Validation — SchemaValidator with three modes (strict, warn, skip). validate_and_apply creates missing tables and reports errors. SchemaValidationReport tracks validated, created, and error counts.
- Drift Prevention — validate_server_type_constraints catches mismatched configurations. ServiceStatus tracks runtime state for comparison. Protocol version 2024-11-05 enables version-aware validation.
- validator.rs validate_single_server, validate_server_type_constraints
- schema_sync.rs validate_and_migrate_schemas, SchemaValidator, SchemaValidationReport
- plugin.rs#L49-L59 PluginVariableDef declares per-plugin scoped variables with name, secret flag, and required flag
- service_validation.rs validate_service with connection testing and status reporting
- status.rs ServiceStatus struct with health, PID, uptime, tools count, latency
- trait_impl.rs McpDeploymentProviderImpl with protocol_version 2024-11-05
MCP Lifecycle Management
The LifecycleManager coordinates startup, shutdown, health monitoring, and restart for every MCP server. The start_server function follows a strict sequence. It verifies binary existence via ProcessManager::verify_binary, prepares the port via NetworkManager::prepare_port, waits for port release using MAX_PORT_CLEANUP_ATTEMPTS = 5 with PORT_BACKOFF_BASE_MS = 200 (defined in port_manager.rs), spawns the server process via ProcessManager::spawn_server, then runs up to 15 health check attempts. The delay schedule is set in startup.rs: max_attempts = 15 and base_delay = 300ms, with calculate_delay returning Duration::ZERO on attempt 1 and base_delay * min(attempt, 5) thereafter, capping at 1500ms. Only after HealthStatus::Healthy is confirmed does the server register in the database with its PID and startup time.
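The delay schedule above is concrete enough to sketch directly. Only the constants and formula come from startup.rs as described; the main function's worst-case total is derived arithmetic, not a figure from the source.

```rust
use std::time::Duration;

const BASE_DELAY_MS: u64 = 300;
const MAX_ATTEMPTS: u32 = 15;

// Attempt 1 fires immediately; later attempts wait base_delay * min(attempt, 5),
// which caps the per-attempt delay at 1500ms.
fn calculate_delay(attempt: u32) -> Duration {
    if attempt <= 1 {
        Duration::ZERO
    } else {
        Duration::from_millis(BASE_DELAY_MS * u64::from(attempt.min(5)))
    }
}

fn main() {
    assert_eq!(calculate_delay(1), Duration::ZERO);
    assert_eq!(calculate_delay(2), Duration::from_millis(600));
    assert_eq!(calculate_delay(5), Duration::from_millis(1500));
    assert_eq!(calculate_delay(12), Duration::from_millis(1500)); // capped

    // Worst case across all 15 attempts: 0 + 600 + 900 + 1200 + 1500*11 = 19.2s
    let total: Duration = (1..=MAX_ATTEMPTS).map(calculate_delay).sum();
    assert_eq!(total, Duration::from_millis(19_200));
}
```

The early attempts probe quickly while a fast-booting server comes up; the cap keeps total startup latency bounded even when all 15 attempts are consumed.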
Shutdown follows the inverse path. The stop_server function finds the running process (checking both database PID and port-based detection), sets status to "stopping", calls ProcessManager::terminate_gracefully (SIGTERM), waits 500ms, then ProcessManager::force_kill (SIGKILL) if the process survives. The finalize_shutdown function updates the database status to "stopped", clears the PID, and calls NetworkManager::cleanup_port_resources. The cleanup_stale_state function handles servers that are already stopped but have lingering database entries.
The reconcile function in the orchestrator runs on every startup. cleanup_stale_services and delete_crashed_services clear dead entries, delete_disabled_services removes servers no longer in configuration, validate_schemas ensures database tables exist, detect_and_handle_orphaned_processes kills processes with no matching registry entry, detect_and_handle_stale_binaries kills processes running outdated binaries, then kill_all_running_servers stops everything before start_pending_servers brings the fleet up fresh. The monitor_health_continuously function runs an interval-based health loop with HealthMonitorState tracking failure counts and downtime duration for automatic recovery detection.
- Controlled Startup — start_server runs binary verification, port preparation (MAX_PORT_CLEANUP_ATTEMPTS = 5, PORT_BACKOFF_BASE_MS = 200), process spawn, then up to 15 health attempts with base_delay 300ms scaled by min(attempt, 5). Database registration only after HealthStatus::Healthy.
- Graceful Shutdown — stop_server sends SIGTERM, waits 500ms, then SIGKILL if needed. finalize_shutdown updates database, clears PID, releases port resources. cleanup_stale_state handles already-stopped servers.
- Process Reconciliation — reconcile cleans stale entries, validates schemas, kills orphaned and stale-binary processes, then starts all servers fresh. monitor_health_continuously tracks failure counts and recovery.
- startup.rs#L53-L97 start_server health loop: max_attempts = 15, base_delay = 300ms, calculate_delay returns Duration::ZERO on attempt 1 then base_delay * min(attempt, 5)
- shutdown.rs stop_server with graceful SIGTERM/SIGKILL and finalize_shutdown
- health.rs (lifecycle) check_server_health with process detection and error marking
- restart.rs restart_server with verify_clean_state between stop and start
- reconciliation.rs reconcile with orphan detection, stale binary cleanup, and fleet restart
- health.rs (monitoring) HealthStatus enum, HealthCheckResult, monitor_health_continuously with HealthMonitorState
- port_manager.rs#L4-L6 MAX_PORT_CLEANUP_ATTEMPTS = 5, PORT_BACKOFF_BASE_MS = 200, POST_KILL_DELAY_MS = 500
- proxy_health.rs ProxyHealthCheck with can_route_traffic and get_routable_services
Founder-led. Self-service first.
No sales team. No demo theatre. The template is free to evaluate — if it solves your problem, we talk.
Who we are
One founder, one binary, full IP ownership. Every line of Rust, every governance rule, every MCP integration — written in-house. Two years of building AI governance infrastructure from first principles. No venture capital dictating roadmap. No advisory board approving features.
How to engage
Evaluate
Clone the template from GitHub. Run it locally with Docker or compile from source. The full governance pipeline is included.
Talk
Once you have seen the governance pipeline running, book a meeting to discuss your specific requirements — technical implementation, enterprise licensing, or custom integrations.
Deploy
The binary and extension code run on your infrastructure. Perpetual licence, source-available under BSL-1.1, with support and update agreements tailored to your compliance requirements.