Every AI action, traced and costed.
When a user claims an AI agent cost them ten thousand dollars, or an auditor asks where customer data went, you need a single source of truth from login through model output. systemprompt.io binds identity, tool calls, model spend, and lifecycle events to one TraceId, queryable from the database, the CLI, or an SSE stream.
Five-Point Audit Trail
An auditor asks what an agent did for a specific user last Tuesday. In a typical Claude deployment that question becomes a search across application logs, an LLM provider dashboard, and an MCP server's stdout. systemprompt.io collapses it into a single TraceId lookup. TraceQueryService::get_all_trace_data takes one trace id and resolves the full request lineage in a single call.
The method runs eight queries in parallel via tokio::try_join!: log events, AI request events, MCP execution events, execution step events, AI request summary, MCP execution summary, execution step summary, and the linked task id. Nothing is sampled. Every row is fetched, then assembled into one ordered timeline.
The LogEntry struct that backs every row carries twelve typed fields, including six identity and correlation columns: UserId, SessionId, TaskId, TraceId, ContextId, and ClientId. Identity is bound at construction time through builder methods like with_user_id and with_trace_id, so a row that reaches the database without a trace is a programming error, not a configuration choice. TraceListFilter then exposes the same data to operators, with filters for agent, status, tool, MCP presence, and time range.
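The binding rule above can be sketched in a few lines. This is an illustrative reconstruction, not the actual log_entry.rs: the field and method names mirror the documented columns and builders, and the persist-time check stands in for whatever enforcement the real code uses.

```rust
// Hypothetical sketch of identity binding via builder methods.
// Field names follow the documented LogEntry columns; the real
// struct carries twelve typed fields.
#[derive(Debug, Default, Clone)]
struct LogEntry {
    message: String,
    user_id: Option<String>,
    session_id: Option<String>,
    task_id: Option<String>,
    trace_id: Option<String>,
    context_id: Option<String>,
    client_id: Option<String>,
}

impl LogEntry {
    fn new(message: &str) -> Self {
        Self { message: message.into(), ..Default::default() }
    }

    fn with_user_id(mut self, id: &str) -> Self {
        self.user_id = Some(id.into());
        self
    }

    fn with_trace_id(mut self, id: &str) -> Self {
        self.trace_id = Some(id.into());
        self
    }

    // A row without a trace is a programming error, not a config choice:
    // refuse to persist it.
    fn persist(self) -> Result<Self, &'static str> {
        if self.trace_id.is_none() {
            return Err("unattributed event: trace_id must be bound before write");
        }
        Ok(self)
    }
}
```

Because the builders consume and return `self`, identity is bound in the same expression that constructs the row, and the unattributed path fails before anything reaches the database.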
- Identity Binding — LogEntry carries UserId, SessionId, TaskId, TraceId, ContextId, and ClientId as typed columns. Builder methods bind identity before the row is written, so an unattributed event cannot be persisted.
- Audit Lookup by Any Id — find_ai_request_for_audit resolves a request from a request id, task id, or trace id (including partial prefix matches). list_audit_messages and list_audit_tool_calls return the full conversation and every tool invocation.
- Cost Attribution Per Trace — AiRequestSummary aggregates total_cost_microdollars, total_tokens, request_count, and total_latency_ms per trace, so finance and engineering read the same numbers.
- log_entry.rs L7-L21 LogEntry struct: twelve typed fields, six of which are identity and correlation columns (UserId, SessionId, TaskId, TraceId, ContextId, ClientId).
- service.rs L61-L84 TraceQueryService::get_all_trace_data runs eight queries in parallel via tokio::try_join! and returns a single composite trace.
- audit_queries.rs find_ai_request_for_audit, list_audit_messages, list_audit_tool_calls, list_linked_mcp_calls
- models.rs TraceListFilter, AiRequestSummary, McpExecutionSummary, ExecutionStepSummary, AuditLookupResult
- request_queries.rs list_ai_requests, get_ai_request_stats, find_ai_request_detail
- tool_queries.rs list_tool_executions with ToolExecutionFilter (name, server, status, since)
Ten Lifecycle Event Hooks
An agent issues a tool call you did not anticipate. By the time it shows up in a log it has already run. The fix is to put a programmable checkpoint in front of the tool call, not behind it. The HookEvent enum names ten such checkpoints in ALL_VARIANTS: PreToolUse, PostToolUse, PostToolUseFailure, SessionStart, SessionEnd, UserPromptSubmit, Notification, Stop, SubagentStart, and SubagentStop.
Each event maps through HookEventsConfig to a vector of HookMatcher entries (glob default "*"). A matcher contains HookAction entries with three HookType variants: Command runs a shell process, Prompt sends an LLM evaluation, and Agent delegates to another agent. A PreToolUse matcher whose action exits non-zero aborts the tool call before it runs, so the model runtime never sees a result.
SubagentStart and SubagentStop close the loop on delegation. When an agent spawns a child, the parent-child relationship is recorded against the same TraceId, so the audit-trails section's lookup walks the full chain back to the originating user prompt.
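The event and action vocabulary above can be sketched as follows. This is a hypothetical reconstruction that only mirrors the documented names; the real enums live in hooks.rs, and the glob matching here is deliberately minimal.

```rust
// Sketch of the ten documented checkpoints.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum HookEvent {
    PreToolUse, PostToolUse, PostToolUseFailure,
    SessionStart, SessionEnd, UserPromptSubmit,
    Notification, Stop, SubagentStart, SubagentStop,
}

impl HookEvent {
    const ALL_VARIANTS: [HookEvent; 10] = [
        HookEvent::PreToolUse, HookEvent::PostToolUse, HookEvent::PostToolUseFailure,
        HookEvent::SessionStart, HookEvent::SessionEnd, HookEvent::UserPromptSubmit,
        HookEvent::Notification, HookEvent::Stop,
        HookEvent::SubagentStart, HookEvent::SubagentStop,
    ];
}

// The three documented action kinds; payloads here are placeholders.
enum HookType {
    Command(String), // shell process; non-zero exit aborts a PreToolUse call
    Prompt(String),  // send an LLM evaluation
    Agent(String),   // delegate to another agent
}

struct HookMatcher {
    glob: String, // default "*"
    actions: Vec<HookType>,
}

// Minimal matcher for the sketch: "*" matches everything,
// otherwise require an exact tool name.
fn matcher_applies(m: &HookMatcher, tool_name: &str) -> bool {
    m.glob == "*" || m.glob == tool_name
}
```

The point of the shape is that the checkpoint sits in front of execution: a PreToolUse matcher is consulted, and only if its actions allow the call does the tool ever run.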
- Session Lifecycle — SessionStart and SessionEnd variants bracket every conversation. HookMatcher entries with glob patterns determine which sessions trigger which handlers. Know exactly when users are active.
- Tool Call Tracking — PreToolUse fires before execution, PostToolUse after success, PostToolUseFailure on errors. Three HookType variants (Command, Prompt, Agent) let you block, evaluate, or delegate at each point.
- Subagent Monitoring — SubagentStart and SubagentStop events track child agent delegation. DiskHookConfig supports file-based hook definitions with version, category (System/Custom), tags, and visible_to access control.
- hooks.rs HookEvent enum with 10 ALL_VARIANTS, HookEventsConfig, HookMatcher, HookAction, HookType
- service.rs TraceQueryService traces hook-triggered events via get_log_events and get_execution_step_events
- mcp_trace_queries.rs fetch_mcp_executions, fetch_mcp_linked_ai_requests, fetch_tool_logs, fetch_task_artifacts
- step_queries.rs Execution step queries tracking hook-triggered pipeline steps
- log_entry.rs LogEntry with TaskId and TraceId binding hook events to audit trail
- models.rs McpToolExecution with tool_name, server_name, input, output, status, execution_time_ms
SIEM-Ready Structured Output
SIEM teams do not want a log file to grep. They want typed JSON they can index. The ToSse trait serialises every event domain to SSE-compatible JSON with five implementations: AgUiEvent, A2AEvent, SystemEvent, ContextEvent, and AnalyticsEvent. Splunk, ELK, Datadog, and Sumo Logic ingest the stream as is. No regex extraction, no brittle log-format coupling.
Distribution runs through EventRouter on four static broadcasters: AGUI_BROADCASTER, A2A_BROADCASTER, CONTEXT_BROADCASTER, and ANALYTICS_BROADCASTER. GenericBroadcaster<E> holds one channel per user per connection, with automatic deregistration via a Drop-based ConnectionGuard. The keep-alive interval is fifteen seconds, set by the HEARTBEAT_INTERVAL constant.
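The trait's shape can be sketched under the assumption that an SSE frame is `event: <name>\ndata: <json>\n\n`. The real ToSse trait lives in sse.rs and presumably serialises via serde; this version hand-rolls the JSON to stay self-contained.

```rust
// Sketch of a ToSse-style trait: typed event in, SSE frame out.
trait ToSse {
    fn event_name(&self) -> &'static str;
    fn json_payload(&self) -> String;

    // Default framing shared by every implementation.
    fn to_sse(&self) -> String {
        format!("event: {}\ndata: {}\n\n", self.event_name(), self.json_payload())
    }
}

struct AnalyticsEvent {
    metric: String,
    value: f64,
}

impl ToSse for AnalyticsEvent {
    fn event_name(&self) -> &'static str {
        "analytics"
    }

    fn json_payload(&self) -> String {
        // Hand-rolled JSON for the sketch only.
        format!(r#"{{"metric":"{}","value":{}}}"#, self.metric, self.value)
    }
}
```

A SIEM subscribed to the stream indexes the `data:` line as JSON directly, which is what makes regex extraction unnecessary.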
For anomaly checks, AnomalyDetectionService ships with three default metrics: requests_per_minute (warning 15, critical 30), session_count_per_fingerprint (warning 5, critical 10), and error_rate (warning 10%, critical 25%). check_trend_anomaly flags any value that exceeds 2x (warning) or 3x (critical) the rolling average inside a configurable window. These same thresholds are how a security team catches an unauthorised account hammering the inference API at three in the morning.
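The trend check reduces to a comparison against a rolling average. The sketch below borrows the documented name and multipliers; the actual implementation in anomaly_detection.rs carries per-metric configuration this version omits.

```rust
// Sketch of the documented trend rule: warning above 2x the rolling
// average, critical above 3x.
#[derive(Debug, PartialEq)]
enum Severity {
    Normal,
    Warning,
    Critical,
}

fn check_trend_anomaly(current: f64, window: &[f64]) -> Severity {
    if window.is_empty() {
        return Severity::Normal; // no baseline yet, nothing to flag
    }
    let avg = window.iter().sum::<f64>() / window.len() as f64;
    if current > avg * 3.0 {
        Severity::Critical
    } else if current > avg * 2.0 {
        Severity::Warning
    } else {
        Severity::Normal
    }
}
```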
- Structured JSON Events — Five ToSse implementations (AgUiEvent, A2AEvent, SystemEvent, ContextEvent, AnalyticsEvent) serialise typed events to SSE-ready JSON. SIEMs ingest the stream without custom parsers.
- SSE Streaming with 15s Keep-Alive — EventRouter fans out to four static broadcasters. GenericBroadcaster holds per-user connections, ConnectionGuard drops them on disconnect, and HEARTBEAT_INTERVAL keeps idle clients alive at fifteen seconds.
- Anomaly Thresholds — AnomalyDetectionService monitors requests_per_minute, session_count_per_fingerprint, and error_rate against typed warning and critical thresholds. check_trend_anomaly flags 2x and 3x spikes against the rolling average.
- sse.rs ToSse trait with 5 implementations: AgUiEvent, A2AEvent, SystemEvent, ContextEvent, AnalyticsEvent
- broadcaster.rs L12-L18 HEARTBEAT_JSON and HEARTBEAT_INTERVAL = Duration::from_secs(15) wired into GenericBroadcaster's keep-alive.
- routing.rs EventRouter with 4 static broadcasters: AGUI, A2A, CONTEXT, ANALYTICS
- anomaly_detection.rs L56-L75 AnomalyDetectionService default metrics: requests_per_minute (15/30), session_count_per_fingerprint (5/10), error_rate (10%/25%).
- log_search_queries.rs L29-L116 search_logs and search_tool_executions back ILIKE pattern queries against the logs and mcp_tool_executions tables.
Cost Attribution & Analytics
An exec asks why this month's Claude spend doubled. Without typed attribution, the answer is a finance ticket and a week of CSV merging. CostAnalyticsRepository answers it in five SQL methods that all read from the same ai_requests table.
get_summary returns total request count, total cost in microdollars, and total tokens for a window. get_breakdown_by_model groups by model. get_breakdown_by_provider groups by provider. get_breakdown_by_agent joins ai_requests against agent_tasks on task_id, so every dollar resolves to a named agent. get_costs_for_trends returns the raw time series for charting. Costs are stored as integer microdollars to avoid floating-point drift, and converted to dollars at presentation time.
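The microdollar convention is worth seeing in two lines. This is a sketch of the documented storage choice, not code from costs.rs: sums stay in integer arithmetic, and division by 1,000,000 happens only when a number is formatted for display.

```rust
// Integer microdollars: summing many small per-request costs never
// accumulates floating-point drift.
fn sum_cost_microdollars(costs: &[i64]) -> i64 {
    costs.iter().sum()
}

// Conversion to dollars happens at presentation time only.
// 1 dollar = 1_000_000 microdollars.
fn format_dollars(microdollars: i64) -> String {
    format!("${}.{:06}", microdollars / 1_000_000, microdollars % 1_000_000)
}
```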
For the wider analytics dashboard, CoreStatsRepository::get_platform_overview returns eight platform metrics in a single query, and get_cost_overview computes 24h, 7d, and 30d rolling windows alongside total_cost and avg_cost_per_request. ToolAnalyticsRepository::list_tools ranks tools by execution_count, success_rate, avg_time, and last_used, so the same data answers a reliability question as well as a spend question.
- Model-Level Breakdown — CostAnalyticsRepository.get_breakdown_by_model returns per-model name, cost in microdollars, request count, and token totals. get_breakdown_by_provider adds provider-level grouping.
- Department Attribution — get_breakdown_by_agent joins ai_requests with agent_tasks for agent-level cost attribution. get_cost_overview provides rolling 24h, 7d, and 30d cost windows with avg_cost_per_request.
- Skill Popularity Rankings — ToolAnalyticsRepository.list_tools ranks by execution_count, success_rate, or avg_time. AgentAnalyticsRepository.get_stats returns completed/failed tasks and avg_execution_time_ms.
- costs.rs L20-L173 CostAnalyticsRepository methods: get_summary (L20-L41), get_breakdown_by_model (L63-L90), get_breakdown_by_provider (L92-L119), get_breakdown_by_agent (L121-L149), get_costs_for_trends (L151-L173).
- overview.rs get_platform_overview (8 metrics), get_cost_overview (5 rolling windows), get_user_metrics_with_trends
- leaderboards.rs get_top_users, get_top_agents, get_top_tools with session_count, task_count, total_cost
- list_queries.rs ToolAnalyticsRepository.list_tools with sort by execution_count, success_rate, avg_time
- stats_queries.rs AgentAnalyticsRepository: get_stats, get_ai_stats, get_tasks_for_trends
- policies.rs RetentionConfig with tiered policies: debug 1d, info 7d, warn 30d, error 90d
Admin Dashboard & Activity Tracking
An operator opening the dashboard at the start of a shift wants three things: who is using AI, how the system is behaving, and what changed since yesterday. CoreStatsRepository::get_platform_overview returns eight metrics for the first question in one query: total_users, active_users_24h, active_users_7d, total_sessions, active_sessions, total_contexts, total_tasks, and total_ai_requests. get_activity_trend answers the second with a daily time series across five dimensions (sessions, contexts, tasks, ai_requests, tool_executions) over a configurable window.
For user-side behaviour, AnalyticsEventType defines six named variants (PageView, PageExit, LinkClick, Scroll, Engagement, Conversion) plus a Custom(String) escape hatch for in-house event types. Each variant maps through the category() method to navigation, interaction, engagement, or conversion. EngagementEvent persists eighteen behavioural columns alongside identity and timestamps, including time_on_page_ms, max_scroll_depth, scroll_velocity_avg, focus_time_ms, is_rage_click, is_dead_click, and reading_pattern.
Updates land on the dashboard over SSE through ANALYTICS_BROADCASTER, with idle clients held open by the same fifteen-second heartbeat documented in the SIEM section. Leaderboards back the third question: get_top_users, get_top_agents, and get_top_tools rank by ai_request_count, task_count, and execution_count respectively, and get_user_metrics_with_trends computes current versus previous period counts across 24h, 7d, and 30d windows for trend arrows.
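The event taxonomy can be sketched as an enum with the documented category() mapping. The variant-to-category assignments below are assumptions drawn from the variant names (the real table is in events.rs), and bucketing Custom events as interaction is a choice made purely for this sketch.

```rust
// Sketch of the documented analytics event taxonomy.
enum AnalyticsEventType {
    PageView,
    PageExit,
    LinkClick,
    Scroll,
    Engagement,
    Conversion,
    Custom(String), // escape hatch for in-house event types
}

impl AnalyticsEventType {
    // Assumed mapping into the four documented categories.
    fn category(&self) -> &'static str {
        match self {
            AnalyticsEventType::PageView | AnalyticsEventType::PageExit => "navigation",
            AnalyticsEventType::LinkClick | AnalyticsEventType::Scroll => "interaction",
            AnalyticsEventType::Engagement => "engagement",
            AnalyticsEventType::Conversion => "conversion",
            AnalyticsEventType::Custom(_) => "interaction", // sketch-only default
        }
    }
}
```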
- Stats Ribbon & Usage Charts — get_platform_overview returns eight metrics in one query. get_activity_trend produces a daily time series across five dimensions (sessions, contexts, tasks, ai_requests, tool_executions).
- Typed Engagement Events — AnalyticsEventType has six named variants plus Custom(String). EngagementEvent persists eighteen behavioural columns including scroll_velocity_avg, is_rage_click, is_dead_click, and reading_pattern.
- Leaderboards & Trends — get_top_users, get_top_agents, and get_top_tools rank usage. get_user_metrics_with_trends computes current versus previous counts across 24h, 7d, and 30d windows.
- events.rs L4-L39 AnalyticsEventType: six named variants (PageView, PageExit, LinkClick, Scroll, Engagement, Conversion) plus Custom(String), with category() mapping.
- engagement.rs L6-L34 EngagementEvent struct: eighteen behavioural columns alongside identity (id, user_id, session_id, content_id) and timestamps.
- overview.rs get_platform_overview (8 metrics), get_cost_overview, get_user_metrics_with_trends
- activity.rs get_activity_trend (5 daily dimensions), get_recent_conversations, get_content_stats
- leaderboards.rs get_top_users, get_top_agents, get_top_tools with success_rate and avg_duration_ms
- breakdowns.rs get_browser_breakdown, get_device_breakdown, get_geographic_breakdown, get_bot_traffic_stats
- routing.rs ANALYTICS_BROADCASTER fans dashboard updates over SSE with the 15s heartbeat keep-alive.
CLI Analytics & Export
The 3 a.m. incident question is rarely "what does the dashboard show"; it is "give me the raw rows for trace abc123 right now." A platform team needs a CLI that hits the same database the dashboard uses and pipes results into grep, jq, and CSV without a browser.
AnalyticsCommands defines nine subcommands: Overview, Conversations, Agents, Tools, Requests, Sessions, Content, Traffic, and Costs. Each subcommand offers both remote execution and a local execute_with_db path against a DatabaseContext, so the same binary works against staging, production, or a local Postgres dump. TraceQueryService exposes the underlying queries: list_traces with TraceListFilter, list_tool_executions with ToolExecutionFilter, search_logs with pattern, level, and since filters, list_ai_requests by model and provider, and the log summary methods count_logs_by_level, top_modules, log_time_range, and total_log_count.
For audit work, find_ai_request_for_audit resolves a request from a request_id, task_id, or trace_id, including partial prefix matches. list_audit_messages returns the full conversation, list_audit_tool_calls returns every tool invocation, and list_linked_mcp_calls joins tool calls to MCP executions. RetentionConfig handles cleanup with tiered policies (debug 1d, info 7d, warn 30d, error 90d) on a configurable cron with vacuum.
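The tiered retention rule reduces to a lookup and a comparison. This sketch mirrors the documented policy values (debug 1d, info 7d, warn 30d, error 90d); the actual RetentionConfig in policies.rs also carries the cron schedule and vacuum settings this version leaves out.

```rust
// Sketch of the documented tiered retention policy.
fn retention_days(level: &str) -> Option<u64> {
    match level {
        "debug" => Some(1),
        "info" => Some(7),
        "warn" => Some(30),
        "error" => Some(90),
        _ => None, // unknown levels are left untouched
    }
}

// A row is purged once it outlives its tier's retention window.
fn should_purge(level: &str, age_days: u64) -> bool {
    retention_days(level).map_or(false, |keep| age_days > keep)
}
```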
- Nine Analytics Subcommands — AnalyticsCommands: Overview, Conversations, Agents, Tools, Requests, Sessions, Content, Traffic, Costs. Each runs over the network or directly against a local DatabaseContext via execute_with_db.
- Search and Export — search_logs and search_tool_executions run ILIKE pattern queries with optional level and since filters. list_traces and list_tool_executions take typed filters for agent, status, tool, server, and time.
- Audit Lookup From the Shell — find_ai_request_for_audit resolves request_id, task_id, or trace_id (prefix matches included). list_audit_messages, list_audit_tool_calls, and list_linked_mcp_calls return the full conversation, tool calls, and linked MCP executions.
- commands/analytics/ AnalyticsCommands with 9 subcommands: Overview, Conversations, Agents, Tools, Requests, Sessions, Content, Traffic, Costs
- service.rs L19-L196 TraceQueryService methods used from the CLI: list_traces, list_tool_executions, search_logs (L97-L105), search_tool_executions (L107-L114), list_ai_requests, find_ai_request_for_audit, list_audit_messages, list_audit_tool_calls, list_linked_mcp_calls.
- audit_queries.rs Audit resolution: find by request_id, task_id, or trace_id with partial prefix matching
- log_summary_queries.rs count_logs_by_level, top_modules, log_time_range, total_log_count
- tool_queries.rs list_tool_executions with ToolExecutionFilter (name, server, status, since)
- policies.rs RetentionConfig: debug 1d, info 7d, warn 30d, error 90d, configurable cron and vacuum
Founder-led. Self-service first.
No sales team. No demo theatre. The template is free to evaluate — if it solves your problem, we talk.
Who we are
One founder, one binary, full IP ownership. Every line of Rust, every governance rule, every MCP integration — written in-house. Two years of building AI governance infrastructure from first principles. No venture capital dictating roadmap. No advisory board approving features.
How to engage
Evaluate
Clone the template from GitHub. Run it locally with Docker or compile from source. Full governance pipeline.
Talk
Once you have seen the governance pipeline running, book a meeting to discuss your specific requirements — technical implementation, enterprise licensing, or custom integrations.
Deploy
The binary and extension code run on your infrastructure. Perpetual licence, source-available under BSL-1.1, with support and update agreements tailored to your compliance requirements.