Agent Services

Configure and orchestrate AI agents with A2A protocol support, skills-based capabilities, multi-agent workflows, and OAuth security. Agents are the AI workers that perform tasks in systemprompt.io.

TL;DR: Agents are AI workers defined in YAML that follow the A2A (Agent-to-Agent) protocol. Each agent has a protocol card for discovery, skills that define its capabilities, MCP servers for tool access, and an OAuth security configuration. Agents can delegate work to other agents, enabling multi-agent workflows where a consultant agent designs a plan and a builder agent executes it.

The Problem

AI applications need a consistent way to define, discover, and secure AI workers. Without structure, each deployment ends up with ad-hoc agent configuration that cannot be shared, discovered by other systems, or audited for security. As applications grow, you also need agents to collaborate -- one agent gathers requirements, another executes, and a third broadcasts results.

The agent service solves this with a standardized configuration format based on the A2A protocol. Every agent publishes a card that describes what it can do, how to talk to it, and what credentials are required. Skills define capabilities. OAuth scopes control access. And the messaging system lets agents delegate tasks to each other without custom integration code.

How Agents Work

Agents are defined as YAML files in services/agents/. When the application starts, it loads these files through the config aggregation pattern in services/config/config.yaml and makes each agent available at its configured endpoint. An agent's behavior comes from three sources:

  1. System prompt -- shapes personality, rules, and response style
  2. Skills -- define what kinds of requests the agent handles
  3. AI provider -- the LLM that powers reasoning (configurable per agent)

When a user or another agent sends a message, the flow is:

  1. The message arrives at the agent's API endpoint
  2. The agent loads conversation context
  3. The message is sent to the AI provider along with the system prompt
  4. If the AI needs external tools, it invokes them through connected MCP servers
  5. The response is returned to the caller
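Concretely, for an agent whose card advertises the JSONRPC transport, step 1 is a JSON-RPC request posted to the agent's endpoint. A sketch of what that payload might look like using the A2A 0.3.0 `message/send` method (field names follow the A2A specification; the text and IDs are illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "message/send",
  "params": {
    "message": {
      "role": "user",
      "parts": [{ "kind": "text", "text": "Hello" }],
      "messageId": "msg-001"
    }
  }
}
```

The response carries either a direct message or a task object whose status the caller can poll.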

Agents can also send messages to other agents using the CLI, enabling delegation and multi-agent coordination.

Agent Configuration

Each agent is a YAML file under services/agents/. The filename identifies the agent. All agent files are included in services/config/config.yaml for aggregation at startup.

Minimal Example

# services/agents/welcome.yaml
agents:
  welcome:
    name: "welcome"
    port: 9000
    endpoint: "http://localhost:8080/api/v1/agents/welcome"
    enabled: true
    is_primary: true
    default: true
    card:
      # A2A protocol card (see below)
    metadata:
      systemPrompt: |
        You are a helpful AI assistant.

Configuration Fields

| Field | Type | Description |
| --- | --- | --- |
| name | string | Unique identifier for the agent |
| port | integer | Port number the agent listens on |
| endpoint | string | Full URL where the agent is reachable |
| enabled | boolean | Whether the agent is active |
| is_primary | boolean | Marks the primary (default) agent |
| default | boolean | Used as the fallback when no agent is specified |
| dev_only | boolean | Only loaded in development mode |
| mcp_servers | list | MCP servers available to the agent (top-level form) |
| card | object | A2A protocol card (see next section) |
| metadata | object | System prompt, MCP servers, provider, and model |
| oauth | object | OAuth requirements and audience |
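Putting the fields together, a fuller configuration might look like the following. This is an illustrative sketch, not a set of defaults: the agent name, port, and MCP server are placeholders.

```yaml
# services/agents/example.yaml -- illustrative values only
agents:
  example:
    name: "example"
    port: 9010
    endpoint: "http://localhost:8080/api/v1/agents/example"
    enabled: true
    is_primary: false
    default: false
    dev_only: false
    mcp_servers:
      - marketplace
    card:
      # A2A protocol card (see next section)
    metadata:
      systemPrompt: |
        You are an example agent.
      provider: gemini
      model: gemini-2.5-flash
    oauth:
      required: true
      scopes: ["user"]
      audience: "a2a"
```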

Registering Agents

Every agent YAML file must be included in the config aggregation file:

# services/config/config.yaml
includes:
  - ../agents/welcome.yaml
  - ../agents/systemprompt_hub.yaml
  - ../agents/marketplace_editor.yaml
  - ../agents/marketplace_consultant.yaml

A2A Protocol Card

The card section defines the agent's A2A protocol card. This card makes agents discoverable and interoperable -- other systems and agents can query it to understand what the agent does, what transport it uses, and how to authenticate.

card:
  protocolVersion: "0.3.0"
  name: "Welcome"
  displayName: "Welcome"
  description: "A helpful AI assistant for your project"
  version: "1.0.0"
  preferredTransport: "JSONRPC"

  provider:
    organization: "systemprompt.io"
    url: "https://systemprompt.io"

  iconUrl: "https://ui-avatars.com/api/?name=W&background=0d9488&color=fff"
  documentationUrl: "https://yourproject.com/docs"

  capabilities:
    streaming: true
    pushNotifications: false
    stateTransitionHistory: false

  defaultInputModes:
    - "text/plain"
  defaultOutputModes:
    - "text/plain"
    - "application/json"

  supportsAuthenticatedExtendedCard: false

Key Card Fields

  • protocolVersion -- the A2A protocol version (currently 0.3.0)
  • preferredTransport -- communication method (JSONRPC)
  • capabilities.streaming -- whether the agent supports streamed responses
  • defaultInputModes / defaultOutputModes -- MIME types the agent accepts and returns
  • provider -- organization and URL for attribution
  • supportsAuthenticatedExtendedCard -- whether the card exposes additional detail to authenticated callers

You can retrieve running agent cards through the gateway:

systemprompt admin agents registry

Skills Configuration

Agents declare skills in two places. Skills listed inside the card are embedded in the A2A protocol card for external discovery. Skills referenced by id in the agent's metadata are loaded from the skills service at runtime.

Inline Skills (A2A Card)

card:
  skills:
    - id: "general_assistance"
      name: "General Assistance"
      description: "Help with questions, explanations, and general tasks"
      tags: ["assistance", "general", "help"]
      examples:
        - "Help me understand this concept"
        - "What are the best practices for..."
    - id: "content_writing"
      name: "Content Writing"
      description: "Help with writing, editing, and improving text content"
      tags: ["writing", "editing", "content"]
      examples:
        - "Help me write a blog post"

Tags and examples serve two purposes: they help the AI understand which kinds of requests the agent handles, and they make the agent discoverable by other systems querying the A2A card.

Shared Skills

Skill ids must match skills defined in services/skills/<skill_id>/config.yaml. A single skill can be shared across multiple agents. See the Skills Service documentation for details on creating skills.
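The exact metadata key for referencing a shared skill is not shown above; assuming it is a `skills` list under `metadata` (verify against your installation's schema), a reference might look like:

```yaml
# Hypothetical sketch -- the "skills" key name under metadata is an
# assumption; check your schema for the exact field.
metadata:
  skills:
    - proposal_writing   # must match services/skills/proposal_writing/config.yaml
```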

MCP Server Access

Agents connect to MCP servers for tool access. Declare servers at either the top level or inside metadata:

# Top-level declaration
mcp_servers:
  - marketplace

# Or inside metadata
metadata:
  mcpServers:
    - marketplace

When the agent's AI provider needs to call an external tool (for example, creating a skill or sending a Discord message), it invokes that tool through the connected MCP server. Use the CLI to inspect which tools an agent can access:

systemprompt admin agents tools <agent-name>

AI Provider Configuration

Each agent can specify which AI provider and model to use inside metadata. If omitted, the agent uses the default provider from the AI service configuration.

metadata:
  provider: gemini
  model: gemini-2.5-flash
  toolModelOverrides: {}

The toolModelOverrides field allows you to route specific tool calls to a different model, useful when certain tools benefit from a more capable (or cheaper) model than the agent's default.
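As a sketch, toolModelOverrides might map a tool name to a model like this. The tool name and model are illustrative, and the exact key format may differ in your version:

```yaml
metadata:
  provider: gemini
  model: gemini-2.5-flash
  # Illustrative: route one tool's calls to a more capable model.
  # The key is assumed to be the tool name; verify against your schema.
  toolModelOverrides:
    create_skill: gemini-2.5-pro
```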

Multi-Agent Workflows

systemprompt.io supports multi-agent workflows where agents delegate tasks to each other. This is done through the A2A messaging system using the CLI.

Delegation Pattern

The most common pattern is a consultant-builder pair. One agent interviews the user and designs a plan. It then delegates execution to a second agent:

# In the consultant agent's system prompt:
metadata:
  systemPrompt: |
    For creating artifacts, delegate to the Marketplace Builder agent:
    admin agents message marketplace_editor -m "{specification}" --blocking --timeout 120

The --blocking flag makes the caller wait for the delegated agent to finish. The --timeout flag sets a maximum wait time in seconds.

Hub Pattern

A hub agent acts as a central coordinator, receiving status updates from other agents and broadcasting notifications:

# services/agents/systemprompt_hub.yaml
agents:
  systemprompt_hub:
    name: systemprompt_hub
    port: 9020
    card:
      description: "Central communications hub for Discord notifications,
                     memory management, and cross-agent coordination"
    metadata:
      systemPrompt: |
        You are the central nervous system for the multi-agent mesh. You:
        1. Receive workflow status updates from other agents
        2. Send Discord notifications for important events
        3. Store decisions and outcomes in memory
        4. Coordinate cross-agent communications

Workflow Lifecycle

A typical multi-agent workflow follows this lifecycle:

  1. User request -- the user sends a message to a primary agent (for example, the consultant)
  2. Discovery interview -- the primary agent gathers requirements through conversation
  3. Specification -- the primary agent produces a structured spec
  4. Delegation -- the primary agent sends the spec to a builder agent via admin agents message
  5. Execution -- the builder agent creates artifacts using its MCP tools
  6. Verification -- the builder agent confirms success or reports failure
  7. Notification -- a hub agent broadcasts the result (for example, to Discord)

Example: Marketplace Consultant and Builder

The marketplace_consultant agent conducts a Socratic interview to understand what the user wants to build. Once the user confirms a plan, the consultant delegates to marketplace_editor:

# The consultant agent runs this internally:
systemprompt admin agents message marketplace_editor \
  -m "CREATE SKILL: id=proposal_writing, name=Proposal Writing, ..." \
  --blocking --timeout 120

The builder agent receives the specification, executes CLI commands through its MCP server, verifies the result, and returns a success or failure message.

Security and OAuth Scoping

OAuth protects agent endpoints from unauthorized access. Security configuration lives inside the A2A card.

Security Schemes

card:
  securitySchemes:
    oauth2:
      type: oauth2
      flows:
        authorizationCode:
          authorizationUrl: "/api/v1/core/oauth/authorize"
          tokenUrl: "/api/v1/core/oauth/token"
          scopes:
            anonymous: "Public access"
            user: "Authenticated user access"
            admin: "Administrative access"
      description: "OAuth 2.0 authentication"

  security:
    - oauth2: ["anonymous"]

Scope Levels

| Scope | Use case |
| --- | --- |
| anonymous | Public agents accessible without login |
| user | Agents that require an authenticated user |
| admin | Restricted agents for administrative operations |
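To restrict an agent to administrators, for example, the card's security requirement would reference the admin scope (a sketch based on the scheme shown above):

```yaml
card:
  security:
    - oauth2: ["admin"]
```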

Agent-Level OAuth

For agents that participate in A2A workflows (agent-to-agent calls), configure OAuth at the agent level rather than the card level:

oauth:
  required: true
  scopes: ["user"]
  audience: "a2a"

Setting audience: "a2a" indicates that the agent expects tokens issued for agent-to-agent communication. Setting required: false allows unauthenticated A2A calls, which is appropriate for internal hub agents that only receive messages from trusted peers.
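For an internal hub agent that only receives messages from trusted peers, the agent-level block would therefore look like this (illustrative):

```yaml
# Internal hub: trusted peers only, no token required
oauth:
  required: false
  audience: "a2a"
```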

System Prompt

The system prompt defines the agent's personality, rules, and behavior:

metadata:
  systemPrompt: |
    You are a helpful AI assistant.

    ## Core Principles
    1. Be Helpful: Provide accurate information
    2. Be Clear: Use plain language
    3. Be Honest: Acknowledge limitations

    ## Capabilities
    You can help with:
    - Answering questions on a wide range of topics
    - Writing and editing text
    - Problem-solving and brainstorming

Write system prompts that clearly define the agent's role, boundaries, and how it should handle edge cases. For agents that delegate to others, include the exact command format in the system prompt so the AI knows how to invoke it.

Service Relationships

Agents connect to several other services:

  • Config service -- aggregates agent YAML files at startup through the includes pattern
  • Skills service -- provides reusable capabilities that agents reference by id
  • AI service -- supplies the LLM provider that powers agent reasoning
  • MCP servers -- provide external tools agents can invoke during conversations
  • Scheduler service -- can trigger agent tasks on a schedule
  • Other agents -- agents delegate to each other through A2A messaging

Managing Agents

Use the CLI to manage agents:

# List all agents
systemprompt admin agents list

# Show agent details
systemprompt admin agents show welcome

# Validate agent configuration files
systemprompt admin agents validate

# Sync agent configuration to the database
systemprompt cloud sync local agents --direction to-db -y

# Send a message to an agent
systemprompt admin agents message welcome -m "Hello"

# Get agent cards from the running gateway
systemprompt admin agents registry

CLI Reference

| Command | Description |
| --- | --- |
| admin agents list | List configured agents |
| admin agents show <name> | Display agent configuration |
| admin agents validate | Check agent configs for errors |
| admin agents create | Create a new agent |
| admin agents edit <name> | Edit agent configuration |
| admin agents delete <name> | Delete an agent |
| admin agents status | Show agent process status |
| admin agents logs | View agent logs |
| admin agents registry | Get running agents from gateway (A2A discovery) |
| admin agents message <name> | Send message to agent via A2A protocol |
| admin agents task <id> | Get task details and response from an agent |
| admin agents tools <name> | List MCP tools available to an agent |
| admin agents run <name> | Run agent server directly (bypasses orchestration) |
All commands are prefixed with systemprompt. Run systemprompt admin agents <command> --help for detailed options.

Troubleshooting

Agent not responding -- Check that enabled: true is set and the endpoint URL is correct. Verify the AI provider is configured and has valid credentials. Use systemprompt admin agents status to confirm the agent process is running.

Unauthorized errors -- The caller's token does not have the required scopes. Check the security section in the agent card and the oauth section at the agent level. For A2A calls, confirm the calling agent has the correct audience.

Skills not loading -- Verify that skill ids in the agent card match skills defined in services/skills/. Sync both agents and skills to the database:

systemprompt core skills sync --direction to-db -y
systemprompt cloud sync local agents --direction to-db -y

MCP tools not available -- Confirm the MCP server is listed in mcp_servers or metadata.mcpServers. Check that the MCP server is running with systemprompt plugins mcp logs <server-name>.

Delegation failing -- When an agent delegates to another agent using admin agents message, check that the target agent is enabled and its OAuth configuration allows the call. Use --blocking --timeout to avoid silent timeouts. Inspect the target agent's logs for errors.

Agent not discoverable -- Run systemprompt admin agents registry to see which agents the gateway knows about. If an agent is missing, verify it is included in services/config/config.yaml and that the config has been synced.