Analytics Service
Automatic tracking of AI costs, usage metrics, session engagement, content performance, and audit trails. Every request logged with full observability.
TL;DR
systemprompt.io captures every AI request, session, page view, and tool execution automatically. Token counts, costs, latencies, engagement depth, bot classification, and content performance are all recorded without instrumentation code. Query everything through the CLI or the API, or let agents analyze their own performance via MCP.
What It Does and Why It Matters
The analytics service provides full observability into your AI operations and web traffic. It answers questions that matter in production: How much did that agent cost this week? Which content is actually being read? Are sessions real humans or bots? What is the error rate on tool calls?
Data capture happens at multiple points in the request lifecycle through instrumented middleware. No application code changes are required. The system records timing, token counts, costs, engagement signals, and metadata transparently as requests flow through the stack.
The "closed loop" principle means agents can query their own performance data through the same interfaces humans use. An agent can ask "what was my average response time today?" and act on the answer. This self-awareness enables adaptive behavior in production.
```
Request --> Middleware (start timer) --> LLM / Tool Call --> Middleware (capture metrics) --> Response
                                                                        |
                                                                        v
                                                             Analytics Store (PostgreSQL)
```
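The middleware hooks in the diagram can be sketched as a wrapper around the LLM or tool call. This is a minimal illustration; the function and field names are hypothetical, not the actual systemprompt.io internals:

```python
import time

def instrumented_call(llm_call, record):
    """Wrap an LLM call, capturing latency, token counts, and status.

    `llm_call` is assumed to return a dict with token counts;
    `record` receives the metrics row destined for the analytics store.
    (Illustrative sketch -- the real instrumentation is internal.)
    """
    start = time.monotonic()
    response = None
    status = "error"
    try:
        response = llm_call()
        status = "success"
    finally:
        # Runs on both success and failure, so every request is recorded.
        record({
            "latency_ms": round((time.monotonic() - start) * 1000, 2),
            "status": status,
            "input_tokens": response["input_tokens"] if response else None,
            "output_tokens": response["output_tokens"] if response else None,
        })
    return response
```

Because the capture happens in `finally`, failed calls still produce a metrics row, which is what makes error-rate queries possible without extra application code.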
Data Capture
The analytics system automatically records data across five domains. No manual instrumentation is needed.
AI Requests
Every LLM call is logged with:
- Provider and model (Anthropic, OpenAI, Gemini)
- Token counts (input and output)
- Cost (calculated from model-specific pricing)
- Latency (end-to-end response time)
- Status (success, failure, error type)
- Trace ID (links related events across the request chain)
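For illustration, one logged request might look like the record below. The field names mirror the list above but are assumptions, not the actual column names:

```python
# Hypothetical shape of a single logged AI request (illustrative only;
# the real schema is internal to systemprompt.io).
ai_request = {
    "provider": "anthropic",
    "model": "claude-sonnet",
    "input_tokens": 1200,
    "output_tokens": 340,
    "cost_usd": 0.0222,       # computed from model-specific pricing
    "latency_ms": 1843,       # end-to-end response time
    "status": "success",
    "trace_id": "a1b2c3",     # links related events across the request chain
}
```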
Session and Engagement
The user_sessions and engagement_events tables capture visitor behavior:
- Session classification -- known bots, scanners, behavioral bots, ghost sessions, clean humans
- Page views with time-on-page and scroll depth
- Click counts per page
- Landing page and navigation path
- Device type, browser, geographic region
- Traffic source and referrer
MCP Tool Executions
Every tool call through MCP is tracked in mcp_tool_executions:
- Tool name and server
- User who triggered the call
- Status (success or failed)
- Timestamp for activity correlation
User Activity
The user_activity table logs operational events:
- Logins and authentication events
- Marketplace edits (plugin, skill, agent changes)
- Category-based filtering for audit queries
Content Performance
The content_performance_metrics table aggregates engagement data per content item:
- Total views and unique visitors
- Average time on page (seconds)
- Views over 7-day and 30-day windows
- Trend direction (up, down, stable) -- computed by comparing recent week to prior average
This table is updated every 15 minutes by the content_analytics_aggregation background job.
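The trend computation described above can be sketched as follows. This is one minimal interpretation of "comparing recent week to prior average"; the actual aggregation job's thresholds are not documented here:

```python
def trend_direction(daily_views, tolerance=0.1):
    """Classify a content item's trend: 'up', 'down', or 'stable'.

    Compares total views in the most recent 7 days against the average
    of the prior days, scaled to a 7-day window. `tolerance` is an
    assumed dead band to avoid flapping on small changes.
    """
    recent = sum(daily_views[-7:])
    prior = daily_views[:-7]
    if not prior:
        return "stable"  # not enough history to compare
    prior_weekly_avg = sum(prior) / len(prior) * 7
    if recent > prior_weekly_avg * (1 + tolerance):
        return "up"
    if recent < prior_weekly_avg * (1 - tolerance):
        return "down"
    return "stable"
```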
Cost Tracking
Cost tracking is automatic for all supported providers. The system knows token pricing for each model and calculates costs in real time.
Supported Providers
| Provider | Models | Cost Tracking |
|---|---|---|
| Anthropic | Claude Opus, Sonnet, Haiku | Full support |
| OpenAI | GPT-4, GPT-3.5 | Full support |
| Gemini | Gemini Pro, Flash | Full support |
Cost Calculation
Costs are calculated from token counts and model-specific pricing:
Cost = (input_tokens x input_price) + (output_tokens x output_price)
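As a worked example in Python, using the hypothetical per-1k-token rates from the custom-pricing configuration example (not real provider prices):

```python
def request_cost(input_tokens, output_tokens, input_per_1k, output_per_1k):
    """Cost = (input_tokens x input_price) + (output_tokens x output_price),
    with prices quoted per 1,000 tokens."""
    return (input_tokens / 1000) * input_per_1k + (output_tokens / 1000) * output_per_1k

# 1,200 input and 340 output tokens at $0.01 / $0.03 per 1k tokens:
# 1.2 * 0.01 + 0.34 * 0.03 = 0.012 + 0.0102 = 0.0222 USD
cost = request_cost(1200, 340, 0.01, 0.03)
```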
Pricing tables are updated regularly. For custom or fine-tuned models, configure custom pricing in your profile:
```yaml
analytics:
  costs:
    currency: USD
    custom_pricing:
      my-custom-model:
        input_per_1k: 0.01
        output_per_1k: 0.03
```
Budget Alerts
Configure alerts when spending approaches thresholds:
```yaml
analytics:
  alerts:
    - type: daily_cost
      threshold: 100.00
      action: email
    - type: monthly_cost
      threshold: 1000.00
      action: pause_requests
```
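A minimal sketch of how such thresholds could be evaluated (hypothetical logic; the real alerting runs server-side):

```python
def check_alerts(spend, alerts):
    """Return the actions to fire given current spend per period.

    `spend` maps a period type to the amount spent, e.g. {"daily_cost": 104.2};
    `alerts` is a list of {"type", "threshold", "action"} dicts mirroring
    the configuration format shown in the docs.
    """
    return [a["action"] for a in alerts
            if spend.get(a["type"], 0.0) >= a["threshold"]]
```

With the example configuration, a day of $120 spend would trigger the email action but not pause requests, since the monthly threshold has not been crossed.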
Audit Trails
Every operation generates audit events. These create a complete record of who did what, when, and how.
Audit Event Types
| Category | Events |
|---|---|
| Authentication | login, logout, token_issued, token_revoked |
| Authorization | scope_granted, permission_denied |
| AI Operations | request_started, request_completed, request_failed |
| Data Access | file_accessed, content_created, content_deleted |
| Administration | user_created, config_changed, agent_modified |
| MCP Tools | tool_executed, tool_failed |
Trace Correlation
Every request receives a trace ID that links all related events. Follow a single user action through authentication, authorization, AI calls, tool executions, and responses:
```shell
# Follow a trace end-to-end
systemprompt infra logs audit <request-id> --full
```
Usage Metrics and Dashboards
Admin Dashboard
The admin dashboard at /admin shows real-time aggregate metrics:
- Events today and this week (from user_activity)
- Total edits and logins
- MCP tool calls and MCP errors
- Top users by activity (edits + MCP calls)
- Popular skills by execution count
- Hourly activity over the last 24 hours
- Usage time-series bucketed by hour (tool uses, prompts, active users, sessions, errors)
Traffic Reports
The daily_traffic_report background job generates comprehensive traffic reports and posts summaries to Discord. Reports include:
- Session totals with bot/human breakdown (24h)
- Multi-period session counts (12h, 24h, 3d, 7d)
- Engagement metrics -- tracked sessions, events, average time on page, scroll depth, clicks
- Top pages by engagement
- Page transitions (navigation flow)
- Reading patterns
- Geographic distribution across multiple periods
- Traffic sources and referrers
- Device breakdown
- Hourly trends
Full reports are saved as Markdown files in storage/files/reports/.
CLI Reference
The systemprompt analytics command provides access to all analytics data.
Top-Level Commands
| Command | Description |
|---|---|
| systemprompt analytics overview | Dashboard overview (supports --since, --until, --export) |
| systemprompt analytics conversations | Conversation analytics |
| systemprompt analytics agents | Agent performance analytics |
| systemprompt analytics tools | Tool usage analytics |
| systemprompt analytics requests | AI request analytics |
| systemprompt analytics sessions | Session analytics |
| systemprompt analytics content | Content performance analytics |
| systemprompt analytics traffic | Traffic analytics |
| systemprompt analytics costs | Cost analytics |
Subcommand Details
Costs:

```shell
systemprompt analytics costs summary       # Cost summary
systemprompt analytics costs trends        # Cost trends over time
systemprompt analytics costs breakdown     # Cost breakdown by model/agent
```

Requests:

```shell
systemprompt analytics requests stats      # Aggregate AI request statistics
systemprompt analytics requests list       # List individual AI requests
systemprompt analytics requests trends     # AI request trends over time
systemprompt analytics requests models     # Model usage breakdown
```

Agents:

```shell
systemprompt analytics agents stats        # Aggregate agent statistics
systemprompt analytics agents list         # List agents with metrics
systemprompt analytics agents trends       # Agent usage trends over time
systemprompt analytics agents show <name>  # Deep dive into a specific agent
```

Sessions:

```shell
systemprompt analytics sessions stats      # Session statistics
systemprompt analytics sessions trends     # Session trends over time
systemprompt analytics sessions live       # Real-time active sessions
```

Traffic:

```shell
systemprompt analytics traffic sources     # Traffic source breakdown
systemprompt analytics traffic geo         # Geographic distribution
systemprompt analytics traffic devices     # Device and browser breakdown
systemprompt analytics traffic bots        # Bot traffic analysis
```

Content:

```shell
systemprompt analytics content stats       # Content engagement statistics
systemprompt analytics content top         # Top performing content
systemprompt analytics content trends      # Content trends over time
```
Use systemprompt analytics <command> --help for detailed options on any subcommand.
Performance Monitoring
Background Jobs
Two scheduled jobs keep analytics data fresh:
| Job | Schedule | Purpose |
|---|---|---|
| content_analytics_aggregation | Every 15 minutes | Aggregates engagement events into content_performance_metrics |
| daily_traffic_report | Daily | Generates full traffic report, saves to file, sends Discord summary |
Both jobs are registered in services/scheduler/config.yaml and run automatically.
Agent Self-Analysis via MCP
Agents connected via MCP can query their own analytics. Available MCP analytics tools:
- analytics_costs -- query cost data for the current agent
- analytics_requests -- query request metrics
- analytics_errors -- query error patterns
- analytics_audit -- query audit events
An agent noticing high latency might switch to a faster model. An agent seeing repeated errors can adjust its approach. This feedback loop is what makes the "closed loop" architecture practical.
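That feedback loop can be sketched as simple decision logic. The model names and thresholds below are illustrative assumptions; an agent would obtain the metrics by calling the analytics_requests and analytics_errors MCP tools:

```python
def choose_model(avg_latency_ms, error_rate,
                 current="claude-opus", fallback="claude-haiku"):
    """Pick a model based on the agent's own analytics.

    Hypothetical adaptive policy: fall back to a faster model when
    latency or error rate (0.0-1.0) exceeds an assumed threshold.
    """
    if avg_latency_ms > 5000 or error_rate > 0.1:
        return fallback  # trade capability for speed/reliability
    return current
```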
Configuration
Analytics configuration lives in the profile settings:
```yaml
analytics:
  enabled: true
  retention_days: 90
  sampling_rate: 1.0  # 1.0 = 100% of requests tracked
  costs:
    currency: USD
    custom_pricing:
      my-custom-model:
        input_per_1k: 0.01
        output_per_1k: 0.03
  alerts:
    - type: daily_cost
      threshold: 100.00
      action: email
    - type: monthly_cost
      threshold: 1000.00
      action: pause_requests
```
| Setting | Description | Default |
|---|---|---|
| analytics.enabled | Turn analytics on or off | true |
| analytics.retention_days | How long to keep data (days) | 90 |
| analytics.sampling_rate | Fraction of requests to track (0.0 to 1.0) | 1.0 |
| analytics.costs.currency | Currency for cost display | USD |
| analytics.costs.custom_pricing | Model-specific cost overrides | -- |
| analytics.alerts | Budget threshold notifications | -- |
Related Documentation
- AI Service -- provider configuration and request routing
- Agents Service -- agent configuration and lifecycle
- Scheduler Service -- background job scheduling
- MCP Service -- tool execution and agent connectivity