ANY AGENT. ONE GOVERNANCE LAYER.

Claude, Codex, Gemini, or your own custom agents. systemprompt.io governs them all through the same pipeline, the same audit trail, and the same policies. The AiProvider trait with 19 methods ensures every provider gets identical governance enforcement.

One Governance Layer for Every Provider

systemprompt.io does not care which AI provider your agents use. The AiProvider trait defines 19 methods (generate(), generate_with_tools(), generate_stream(), generate_with_schema(), get_pricing(), and 14 more) that every provider implements identically. Claude Code, OpenAI Codex, Google Gemini, or a custom model running on your own hardware: every agent passes through the same governance pipeline.

ProviderFactory::create() instantiates the correct provider from profile configuration. ProviderFactory::create_all() initialises every enabled provider in a single call, returning a HashMap<String, Arc<dyn AiProvider>>. The same scope checks, secret scanning, rate limits, and audit trail apply regardless of provider. enforce_rbac_from_registry() validates JWT claims and checks OAuth2 scopes on every MCP request before it reaches any provider.

Cost attribution tracks spend across all providers through AiRequestRecord. Every request records provider, model, cost_microdollars, input_tokens, output_tokens, latency_ms, and trace_id. Your finance team sees a single dashboard with per-agent, per-provider, per-model breakdowns. No separate billing integrations. No manual reconciliation.

  • AiProvider Trait: 19 Methods — generate(), generate_with_tools(), generate_stream(), generate_with_schema(), get_pricing(), supports_model(), capabilities(), and 12 more. Every provider implements the same interface.
  • ProviderFactory Routing — ProviderFactory::create() instantiates Anthropic, OpenAI, or Gemini from AiProviderConfig. create_all() returns HashMap<String, Arc<dyn AiProvider>> for multi-provider setups.
  • Unified Request Tracking — AiRequestRecord captures provider, model, cost_microdollars, input_tokens, output_tokens, latency_ms, trace_id, and session_id for every request across all providers.
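
The trait-object pattern described above can be sketched in a few lines of Rust. This is a minimal illustration, not the real interface: only three of the 19 methods are shown, and the method signatures, the `AnthropicProvider` struct, and the hard-coded pricing values are assumptions for demonstration.

```rust
use std::collections::HashMap;
use std::sync::Arc;

// Simplified sketch of the AiProvider trait (3 of the 19 methods shown;
// signatures are illustrative, not the real API).
trait AiProvider: Send + Sync {
    fn name(&self) -> &str;
    fn supports_model(&self, model: &str) -> bool;
    // (input, output) microdollars per 1k tokens — values below are made up.
    fn get_pricing(&self, model: &str) -> Option<(i64, i64)>;
}

struct AnthropicProvider;

impl AiProvider for AnthropicProvider {
    fn name(&self) -> &str { "anthropic" }
    fn supports_model(&self, model: &str) -> bool { model.starts_with("claude") }
    fn get_pricing(&self, _model: &str) -> Option<(i64, i64)> { Some((3_000, 15_000)) }
}

// Hypothetical stand-in for ProviderFactory::create_all(): every enabled
// provider behind the same trait object, keyed by name.
fn create_all() -> HashMap<String, Arc<dyn AiProvider>> {
    let mut providers: HashMap<String, Arc<dyn AiProvider>> = HashMap::new();
    providers.insert("anthropic".into(), Arc::new(AnthropicProvider));
    providers
}

fn main() {
    let providers = create_all();
    // Governance code only ever sees `dyn AiProvider`, never a concrete vendor type.
    assert!(providers["anthropic"].supports_model("claude-sonnet"));
    println!("{} providers enabled", providers.len());
}
```

Because callers hold `Arc<dyn AiProvider>`, the governance pipeline is written once against the trait and cannot special-case a vendor.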

Extensible Provider Architecture

Adding a new AI provider is a compile-time operation. Implement the AiProvider trait's 19 methods, add your AiProviderConfig to your profile YAML, and ProviderFactory::create() routes to it automatically. No runtime configuration. No plugin compatibility issues. Type-safe integration with Arc<dyn AiProvider>.

Each AiProviderConfig supports enabled, api_key, endpoint (custom base URLs), default_model, google_search_enabled, and per-model ModelDefinition with capabilities, limits, and pricing. Profile-based configuration means different environments use different providers. Your development team tests against a local model. Staging validates against Claude. Production runs Gemini. The same governance policies enforce across all of them. The provider is a configuration detail, not an architectural decision.

MCP tool calls are governed identically regardless of which agent initiated them. enforce_rbac_from_registry() validates JWT claims, extracts AuthenticatedUser with permissions and roles, and checks OAuth2 scopes against the server's OAuthRequirement. An MCP server registered in your governance registry applies the same access controls whether the request comes from Claude Code, a Codex-powered agent, or a custom internal tool.

  • Compile-Time Provider Integration — Implement the AiProvider trait (19 methods including generate, generate_with_tools, get_pricing). Add AiProviderConfig to profile YAML. Governance applies automatically.
  • Per-Environment Provider Config — AiProviderConfig supports enabled, api_key, endpoint, default_model, google_search_enabled, and per-model ModelDefinition with capabilities, limits, and pricing.
  • Provider-Agnostic MCP Governance — enforce_rbac_from_registry() validates JWT, extracts AuthenticatedUser, checks OAuth2 scopes. Identical enforcement on every tool call from every agent.
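
A sketch of the configuration-driven routing described above. The `AiProviderConfig` field names come from this page; the struct layout, the `create()` signature, and the error handling are assumptions for illustration only.

```rust
// Hypothetical shape mirroring AiProviderConfig; field names follow the
// text above, everything else is illustrative.
#[derive(Debug, Clone)]
struct AiProviderConfig {
    provider: String,         // "anthropic" | "openai" | "gemini"
    enabled: bool,
    api_key: String,
    endpoint: Option<String>, // custom base URL
    default_model: String,
    google_search_enabled: bool,
}

// Stand-in for ProviderFactory::create(): routing is an exhaustive match,
// so an unsupported provider is rejected explicitly rather than failing later.
fn create(cfg: &AiProviderConfig) -> Result<String, String> {
    if !cfg.enabled {
        return Err(format!("provider {} is disabled", cfg.provider));
    }
    match cfg.provider.as_str() {
        "anthropic" | "openai" | "gemini" => Ok(cfg.provider.clone()),
        other => Err(format!("unknown provider: {other}")),
    }
}

fn main() {
    let cfg = AiProviderConfig {
        provider: "gemini".into(),
        enabled: true,
        api_key: String::from("loaded-from-secret-store"),
        endpoint: None,
        default_model: "gemini-2.0-flash".into(),
        google_search_enabled: true,
    };
    assert_eq!(create(&cfg), Ok("gemini".to_string()));
}
```

Swapping providers between environments is then just a different config value; the calling code never changes.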

Built for Custom Agents

Most organisations will build custom agents tailored to their specific workflows. systemprompt.io is designed for this. Your custom agent inherits the same governance, audit trail, and cost tracking as any commercial AI agent, with no additional integration work. AgentRegistry::get_agent() retrieves configuration, list_enabled_agents() discovers all active agents, and the orchestration layer manages lifecycle, health monitoring, and port isolation.

The A2A (Agent-to-Agent) protocol means your custom agents can discover and coordinate with each other through a governed registry. AgentCard exposes capabilities, skills, and authentication requirements via a standard JSON endpoint. handle_agent_card() serves discovery data. Route tasks between agents, share context, and compose multi-agent workflows, all within your governance boundaries.

The Cowork plugin provides the fastest path for developers already using Claude Desktop. One-click install brings governed skills directly into Claude. But for teams using other tools (Cursor, Windsurf, VS Code with Copilot, or custom CLI agents), the same governance API is available to any HTTP client. enforce_rbac_from_registry() governs every request identically regardless of the client.

  • Custom Agents, Same Governance — AgentRegistry manages agent discovery and configuration. Your proprietary agents inherit the full governance pipeline including RBAC, rate limiting, and audit logging.
  • Agent-to-Agent Discovery — AgentCard exposes capabilities, skills, and security schemes. handle_agent_card() serves the discovery endpoint. Agents find each other through the governed registry.
  • Any Developer Tool — Claude Desktop via Cowork, Cursor, Windsurf, VS Code, or custom CLI tools. enforce_rbac_from_registry() governs every request regardless of the client.
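
The AgentCard discovery flow can be sketched as follows. The field names (`capabilities`, `skills`, `security_schemes`) follow the text above; the struct definitions, the single hard-coded card, and the `triage-bot` agent name are hypothetical.

```rust
// Illustrative shapes for A2A discovery; the exact types in
// systemprompt.io are assumptions.
#[derive(Debug, Clone)]
struct AgentSkill {
    id: String,
    description: String,
}

#[derive(Debug, Clone)]
struct AgentCard {
    name: String,
    capabilities: Vec<String>,
    skills: Vec<AgentSkill>,
    security_schemes: Vec<String>, // e.g. "oauth2"
}

// Sketch of handle_agent_card(): serve the discovery payload for one agent.
// The real system reads from AgentRegistry; here a single hard-coded card.
fn handle_agent_card(agent: &str) -> Option<AgentCard> {
    if agent != "triage-bot" {
        return None;
    }
    Some(AgentCard {
        name: "triage-bot".into(),
        capabilities: vec!["streaming".into()],
        skills: vec![AgentSkill {
            id: "ticket-triage".into(),
            description: "Classify incoming tickets".into(),
        }],
        security_schemes: vec!["oauth2".into()],
    })
}

fn main() {
    let card = handle_agent_card("triage-bot").expect("registered agent");
    // A peer agent reads the card to learn it must authenticate via OAuth2
    // before delegating work.
    assert_eq!(card.security_schemes, vec!["oauth2".to_string()]);
}
```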

A2A Coordination: Multi-Provider Agent Workflows

The A2A (Agent-to-Agent) protocol enables multi-provider agent workflows within your governance boundary. Each agent publishes an AgentCard containing AgentCapabilities, AgentSkill definitions, SecurityScheme requirements, and TransportProtocol preferences. Other agents discover these cards through the AgentRegistry and coordinate via governed JSON-RPC messaging.

Registry Discovery. AgentRegistry::list_enabled_agents() returns all active agents. to_agent_card() builds the discovery payload with capabilities, skills, and OAuth2 security configuration. AgentInterface and AgentExtension describe supported protocols and custom metadata. Discovery is automatic. Agents register at startup and deregister on shutdown via the orchestrator's lifecycle manager.

Governed Delegation. When Agent A delegates a task to Agent B, the request passes through enforce_rbac_from_registry() at the receiving server. JWT claims, OAuth2 scopes, and audience validation apply identically to agent-to-agent requests. A2aRequest wraps MessageSendParams and TaskIdParams for task lifecycle management. TaskState tracks execution status (submitted, working, input-required, completed, failed, canceled) across agent boundaries.

  • Registry Discovery — AgentRegistry::list_enabled_agents() discovers all active agents. to_agent_card() exposes AgentCapabilities, AgentSkill, SecurityScheme, and TransportProtocol.
  • Governed Delegation — Agent-to-agent requests pass through enforce_rbac_from_registry(). JWT validation, scope checking, and audience verification apply identically to inter-agent communication.
  • Multi-Agent Task Lifecycle — A2aRequest with MessageSendParams and TaskIdParams. TaskState tracks six states (submitted, working, input-required, completed, failed, canceled) across agent boundaries.

Cross-Provider Cost Tracking

Every AI request across every provider is tracked with microdollar precision. AiRequestRecord captures provider, model, cost_microdollars (i64), input_tokens, output_tokens, latency_ms, user_id, session_id, task_id, and trace_id. ModelPricing::get_pricing() returns per-model input and output costs. No estimation. Actual token counts come from provider responses.

Model-Level Breakdown. CostAnalyticsRepository::get_breakdown_by_model() aggregates cost, request count, and token usage per model. get_breakdown_by_provider() gives the same view grouped by provider. get_breakdown_by_agent() joins ai_requests with agent_tasks to attribute costs to specific agents. Three dimensions (model, provider, agent) in a single PostgreSQL-backed analytics layer.

Real-Time Trend Analysis. get_costs_for_trends() returns timestamped cost and token data for time-series visualisation. get_summary() provides total requests, total cost, and total tokens for any time range. get_previous_cost() enables period-over-period comparison. All queries run against the ai_requests table. No external analytics service required. Your finance team gets cross-provider cost visibility without additional vendor contracts.

  • Microdollar Cost Attribution — AiRequestRecord stores cost_microdollars (i64) per request. ModelPricing provides input_cost_per_1k and output_cost_per_1k per model. Actual tokens, not estimates.
  • Three-Dimensional Breakdown — get_breakdown_by_model(), get_breakdown_by_provider(), get_breakdown_by_agent(). Cost, request count, and token usage across all three dimensions.
  • Real-Time Trend Analysis — get_costs_for_trends() for time-series data. get_summary() for totals. get_previous_cost() for period-over-period comparison. PostgreSQL-native, no external service.
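
The microdollar arithmetic above is the standard per-1k-token pricing model, and is easy to verify by hand. The struct fields mirror the names in the text; the concrete rates below are illustrative, not systemprompt.io's actual pricing tables.

```rust
// Microdollar cost attribution sketch using the per-1k-token pricing
// model described above. Rates shown are examples, not real prices.
struct ModelPricing {
    input_cost_per_1k: i64,  // microdollars per 1,000 input tokens
    output_cost_per_1k: i64, // microdollars per 1,000 output tokens
}

fn cost_microdollars(p: &ModelPricing, input_tokens: i64, output_tokens: i64) -> i64 {
    (input_tokens * p.input_cost_per_1k + output_tokens * p.output_cost_per_1k) / 1_000
}

fn main() {
    // $3 per 1M input tokens = 3_000 microdollars per 1k tokens;
    // $15 per 1M output tokens = 15_000 microdollars per 1k tokens.
    let pricing = ModelPricing { input_cost_per_1k: 3_000, output_cost_per_1k: 15_000 };
    let cost = cost_microdollars(&pricing, 1_200, 400);
    // 1_200 × 3 + 400 × 15 = 3_600 + 6_000 = 9_600 microdollars ≈ $0.0096
    assert_eq!(cost, 9_600);
}
```

Storing an `i64` of microdollars avoids floating-point drift when aggregating millions of requests in `get_breakdown_by_model()` style queries.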

Founder-led. Self-service first.

No sales team. No demo theatre. The template is free to evaluate — if it solves your problem, we talk.

Who we are

One founder, one binary, full IP ownership. Every line of Rust, every governance rule, every MCP integration — written in-house. Two years of building AI governance infrastructure from first principles. No venture capital dictating roadmap. No advisory board approving features.

How to engage

Govern every agent from one place.

The AiProvider trait and ProviderFactory ensure the same governance pipeline works regardless of which AI provider your team uses. Add a provider in YAML, and governance applies automatically.