GOVERNED AGENTIC MESH. ONE FRONT DOOR FOR EVERY AGENT.
Every agent-to-agent call passes the same JWT audience gate and lands in the same audit log as a tool call, so a stolen token cannot silently fan out across the rest of the mesh.
Single Mesh Registry
When a security lead asks which agents can call which MCP servers today, an ad-hoc mesh cannot answer. Configs scatter across repos, YAML files, and deployment scripts. The agent registry loads every enabled agent from the services config at startup and serves them over two stable routes. GET /api/v1/agents/registry returns agents. GET /api/v1/mcp/registry returns MCP servers. One query, one list, auditable in seconds.
Each agent in the registry exposes an AgentCard. An AgentCard is the A2A (agent-to-agent) protocol's machine-readable profile. It lists capabilities, declared security schemes, transport protocols, and the OAuth scopes the agent will demand before it answers. The registry also advertises the well-known discovery URL /.well-known/agent-cards, so an outside A2A client discovers the mesh the same way an internal one does. There is no second, undocumented discovery surface.
For an engineer auditing the surface, the agent list, the MCP server list, and the AgentCard contents all resolve to the same config the binary booted from. An agent that is not in the registry is not reachable over A2A. The mesh boundary is the same as the config boundary.
- One registry query, not a Slack thread — The agents registry endpoint returns every enabled agent with capabilities, security schemes, and MCP tool metadata. Without this, agent inventory lives in a deployment engineer's head.
- MCP servers on the same shelf — The MCP registry endpoint lists every server the mesh can reach, with endpoints and OAuth requirements. An agent cannot call a server the security team never approved.
- A2A discovery at the well-known URL — The agent-cards well-known URL follows the A2A protocol specification, so external clients discover your mesh through the public standard with no bespoke integration.
- registry/mod.rs (AgentRegistry) AgentRegistry loads services config and exposes get_agent and list_enabled_agents
- api_paths.rs (registry routes) AGENTS_REGISTRY, MCP_REGISTRY, and WELLKNOWN_AGENT_CARDS constants
- a2a/mod.rs (AgentCard types) Re-export surface for AgentCard, AgentCapabilities, SecurityScheme
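The lookup behaviour described above can be sketched in a few lines. This is an illustrative Python model, not the Rust implementation: the config shape, field names, and route constants here are assumptions, but the property it demonstrates is the one the text claims — an agent absent from (or disabled in) the config is simply not in the registry, so it is not reachable over A2A.

```python
# Illustrative sketch of the registry contract: one config, one list.
# Config shape and field names are assumptions for illustration.
AGENTS_REGISTRY = "/api/v1/agents/registry"
MCP_REGISTRY = "/api/v1/mcp/registry"
WELLKNOWN_AGENT_CARDS = "/.well-known/agent-cards"

class AgentRegistry:
    def __init__(self, services_config):
        # Only agents marked enabled in config are loaded at startup;
        # anything absent from this map is unreachable over A2A.
        self._agents = {
            name: spec
            for name, spec in services_config["agents"].items()
            if spec.get("enabled", False)
        }

    def get_agent(self, name):
        return self._agents.get(name)

    def list_enabled_agents(self):
        return sorted(self._agents)

config = {
    "agents": {
        "research": {"enabled": True, "capabilities": ["message/send"]},
        "legacy": {"enabled": False},
    }
}
registry = AgentRegistry(config)
```

An auditor's question ("which agents exist?") reduces to `list_enabled_agents()`: the mesh boundary and the config boundary are the same object.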
The Audience Gate
Once agents can call each other, a stolen token or a misconfigured client becomes a way to skip the governance pipeline your tool calls already run through. A2A traffic on systemprompt.io uses JSON-RPC 2.0, a small schema-validated request and response envelope. Typed methods cover message send, streaming message, task query, and task cancel. Every request hits the same JWT validation path before any agent logic runs.
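The envelope check can be made concrete with a short sketch. This is illustrative Python, not the production Rust path; the method set and error codes below come from the A2A surface named in the text and the JSON-RPC 2.0 specification, while the helper names are assumptions.

```python
import json

# Standard JSON-RPC 2.0 error codes; the method set matches the
# A2A methods named in the text.
PARSE_ERROR, INVALID_REQUEST = -32700, -32600
METHOD_NOT_FOUND, INVALID_PARAMS, INTERNAL_ERROR = -32601, -32602, -32603
A2A_METHODS = {"message/send", "message/stream", "tasks/get", "tasks/cancel"}

def error(req_id, code, message):
    return {"jsonrpc": "2.0", "id": req_id,
            "error": {"code": code, "message": message}}

def validate_envelope(raw):
    """Schema-validate a request envelope before any agent logic runs.

    Returns a typed JSON-RPC error for malformed requests, or None
    when the envelope is well-formed and may be dispatched.
    """
    try:
        req = json.loads(raw)
    except json.JSONDecodeError:
        return error(None, PARSE_ERROR, "Parse error")
    if req.get("jsonrpc") != "2.0" or "method" not in req:
        return error(req.get("id"), INVALID_REQUEST, "Invalid Request")
    if req["method"] not in A2A_METHODS:
        return error(req.get("id"), METHOD_NOT_FOUND, "Method not found")
    return None  # well-formed: hand off to the authenticated handler
```

Because failures come back as typed error objects rather than ad-hoc bodies, a client can branch on `error.code` instead of parsing prose.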
The governance bind lives in the A2A auth module. It calls the same JWT provider that every other request uses, decodes the claims, and then checks that the token carries the a2a audience. A token issued for browser login or MCP tool calls is rejected here. A session cookie lifted from a dashboard cannot be replayed as an agent-to-agent call. The check lives next to the handler, not in a separate proxy that an attacker could route around.
For the team building multi-agent workflows, messaging, streaming, task status, and cancellation are defined by a published protocol and backed by typed JSON-RPC errors (parse error, invalid request, method not found, invalid params, internal error). There is no homegrown coordination primitive to maintain. Every exchange is logged against a verified identity. Staff engineers can verify the audience check in source at the reference below.
- JSON-RPC 2.0 on a governed surface — Standard method set (message/send, message/stream, tasks/get, tasks/cancel) defined in the A2A models. External A2A clients interoperate without translation, using the envelope your security team already reviewed.
- Audience-checked agent calls — The validator rejects any JWT without the a2a audience. A browser session token cannot impersonate an agent or fan out across the mesh silently.
- Typed error surface — The JSON-RPC error type carries parse error, invalid request, method not found, invalid params, and internal error constructors. Ambiguous 200 OK bodies cannot swallow failures.
- a2a/jsonrpc.rs (JSON-RPC 2.0 types) Request, JsonRpcResponse, JsonRpcError with standard JSON-RPC 2.0 error codes
- a2a_server/auth/validation.rs (validate_agent_token) JWT validation with a2a audience check and user-active verification
- a2a_server/server.rs (A2A server wiring) Wires the JWT provider, audience list, and OAuth middleware onto the A2A router
User-Scoped Bearer
A compromised service account that can silently talk to every agent is the kind of finding that ends a SOC 2 renewal. On the mesh, each A2A token is a JWT scoped to a single user with the a2a audience, issued with a one-hour expiry. The short lifetime is a deliberate trade-off, long enough for a multi-step workflow to complete, short enough that a leaked token is a minor incident rather than a standing backdoor.
At request time the validator pulls the JWT provider from the shared OAuth state, verifies the signature, decodes claims, confirms the a2a audience, and then (when a user provider is configured) checks that the user still exists and is active. A deactivated employee's agent token stops working on the next call, with no cache to invalidate. The claims expose has_permission, has_role, and has_audience helpers, so handlers make authorization decisions from the token itself rather than a round-trip to the database.
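The per-request decision order described above can be sketched as a small gate. Illustrative Python only; the helper names `has_audience`, `has_permission`, and `has_role` match those named in the text, while the claim fields and the `check_request` wrapper are assumptions.

```python
import time

class JwtClaims:
    """Decoded claims carrying the helper predicates named in the text."""
    def __init__(self, sub, aud, permissions, roles, exp):
        self.sub, self.aud = sub, aud
        self.permissions, self.roles, self.exp = permissions, roles, exp

    def has_audience(self, audience): return audience in self.aud
    def has_permission(self, perm): return perm in self.permissions
    def has_role(self, role): return role in self.roles
    def is_expired(self, now=None): return (now or time.time()) >= self.exp

def check_request(claims, active_users):
    # Per-request gate: audience, then expiry, then user-active status.
    # No cache sits between a deactivation and the next call.
    if not claims.has_audience("a2a"):
        return "rejected: wrong audience"
    if claims.is_expired():
        return "rejected: expired"
    if claims.sub not in active_users:
        return "rejected: user inactive"
    return "ok"
```

Because the permission and role helpers read from the decoded token, a handler can authorize without a database round-trip; only the user-active check touches the user provider, and it runs on every call.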
For a CISO asking whether user isolation can be proven in an audit, every authenticated A2A call carries the username and user_type in the traced claims. The validation module emits a structured tracing event on every successful authentication. The same JWT secret minimum (32 characters, enforced at startup) that protects browser sessions protects mesh traffic.
- A2A audience on every token — Tokens are minted with an a2a audience and checked on every request. Without this split, a browser or MCP token would unlock agent-to-agent calls and widen the blast radius of any theft.
- One-hour expiry, deliberate — Agent tokens expire after one hour, chosen to cover long-running multi-agent tasks without giving a leaked token a multi-day life. Refresh is a single call, so pipelines keep running.
- Deactivated users blocked on next call — When a user provider is wired in, the validator checks user-active status per request. A deleted account cannot keep riding an unexpired token to reach other agents.
- a2a_server/auth/validation.rs (generate_agent_token, validate_agent_token) Token issuance with a2a audience and per-request validation
- a2a_server/server.rs (JWT wiring, lines 58-72) AgentOAuthState built with the shared JWT provider, issuer, and audience
- claims.rs (audience and permission helpers) JwtClaims with has_audience, has_permission, has_role used by A2A handlers
Typed Tool Conduit
Giving an agent raw database access is how audit findings happen. Giving it a thin REST API is how integration work never ends. The mesh lets agents reach domain state through registered MCP tools. Every call carries a declared name, a JSON schema for inputs, and a permission check before execution. The tool layer is where the governance pipeline already lives. Agent access and human-driven tool use share one policy surface rather than two.
When an agent invokes a tool, the A2A handler resolves the call through the same tool executor the MCP surface uses. That means input validation against the tool's declared schema, per-user permission enforcement, and an audit row written before the response returns. The behaviour that proves this lives in the agent domain's MCP integration module, named in the reference below.
For a CTO weighing build-vs-buy, building this in-house means writing a per-tool permission layer, a typed request and response envelope, an audit writer, and a discovery surface. The mesh ships all four in one binary. Staff engineers verify the tool executor path in source, CISOs read the audit rows, and product teams wire agents without re-implementing any of it.
- Schema-checked tool calls — Every agent tool call is validated against the tool's declared input schema before it runs. A mistyped request fails fast with a typed JSON-RPC error, not a 500 stack trace.
- Same audit trail as human tool use — Agent-initiated tool calls flow through the MCP tool executor, which writes the same audit row (tool name, server name, input, execution id) as any other tool call.
- No parallel REST API to maintain — Agents access domain state through registered tools, not bespoke endpoints. The REST surface you would have built and supported is absent by design.
- services/mcp/mod.rs (agent-side MCP integration) Agent MCP integration: tool execution, artifact handling, result transformation
- a2a_server/processing/strategies/tool_executor.rs Tool execution strategy invoked from A2A message handlers
- execution_tracking.rs (audit rows) Execution tracking that writes the audit trail for agent-driven calls
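The executor path the section describes (schema check, permission check, audit row, then execution) can be sketched as one function. This is an illustrative Python model under stated assumptions: the tool-descriptor fields and audit-row shape are invented for the example, but the ordering mirrors the text, with the audit row written before the response returns and invalid input failing as a typed JSON-RPC error.

```python
import json
import uuid

def execute_tool(tool, user_permissions, raw_input, audit_log):
    """One path for agent- and human-initiated calls.

    Validates input against the tool's declared schema, enforces the
    per-user permission, writes the audit row, then runs the handler.
    """
    args = json.loads(raw_input)
    # 1. Schema check: fail fast with JSON-RPC "invalid params" (-32602).
    missing = [f for f in tool["schema"]["required"] if f not in args]
    if missing:
        return {"error": {"code": -32602,
                          "message": f"invalid params: missing {missing}"}}
    # 2. Per-user permission check before any execution.
    if tool["permission"] not in user_permissions:
        return {"error": {"code": -32600, "message": "permission denied"}}
    # 3. Audit row written before the response returns.
    execution_id = str(uuid.uuid4())
    audit_log.append({
        "tool": tool["name"], "server": tool["server"],
        "input": args, "execution_id": execution_id,
    })
    # 4. Only now does the tool actually run.
    return {"result": tool["handler"](args), "execution_id": execution_id}
```

Because agent calls and human-driven tool use enter through the same function, there is one policy surface to review: tightening the schema or the permission tightens both populations at once.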
One Front Door
External MCP clients that connect over side channels are the classic governance hole. They work, they skip the audit log, and nobody knows about them until a post-incident review. On the mesh, external clients (Claude Desktop, ChatGPT, any MCP-compatible client) authenticate through the same JWT path that internal agents use. The registry that lists your internal agents is the registry they discover against. There is one front door, and it is audited.
Transport is streamable HTTP, the transport modern MCP clients speak natively. Point the client at the mesh, complete the OAuth flow, and its calls start landing in the same JWT audience check and tool executor path described above. An external client cannot do anything an internal agent with the same claims could not also do. The policy you wrote for internal traffic covers external traffic for free.
For a CISO, every external connection is a user-scoped token with the a2a audience, traced through the same validation emit and logged through the same audit writer. For a CTO, the internal and external surfaces share one codebase. A governance change to one applies to the other without parallel work.
- Claude Desktop, ChatGPT, any MCP client — External clients authenticate through the same OAuth flow and land on the same JWT audience check. A tool call from Claude Desktop writes the same audit row as an internal agent.
- Streamable HTTP, not a private channel — The mesh speaks streamable HTTP, the transport modern MCP clients use. A partner integration does not require a custom bridge, so there is no hidden surface to govern.
- One policy covers both sides — Internal agent calls and external client calls share the JWT provider, the audience gate, the tool executor, and the audit writer. Security reviews write one rule and get both populations.
Founder-led. Self-service first.
No sales team. No demo theatre. The template is free to evaluate — if it solves your problem, we talk.
Who we are
One founder, one binary, full IP ownership. Every line of Rust, every governance rule, every MCP integration — written in-house. Two years of building AI governance infrastructure from first principles. No venture capital dictating roadmap. No advisory board approving features.
How to engage
Evaluate
Clone the template from GitHub. Run it locally with Docker or compile from source. The full governance pipeline is included.
Talk
Once you have seen the governance pipeline running, book a meeting to discuss your specific requirements — technical implementation, enterprise licensing, or custom integrations.
Deploy
The binary and extension code run on your infrastructure. Perpetual licence, source-available under BSL-1.1, with support and update agreements tailored to your compliance requirements.