Multitasking Score — Delegation & Parallelism
Understanding the Multitasking Score metric in the systemprompt.io Control Center
The Multitasking Score is a composite metric (0–100) measuring how effectively you delegate work and run AI tasks in parallel. It captures whether you're using Claude Code as a single worker or as a coordinated team.
Definition
The score combines two signals — subagent delegation and session concurrency — normalised by the number of sessions.
Formula:
Multitasking = min(100, ((subagent_spawns × 2 + peak_concurrency × 3) / session_count) × 10)
Components
Subagent spawns: When Claude creates helper agents to handle subtasks within a session. This happens when Claude determines that a complex task can be broken into independent pieces and delegates them to sub-agents. Each SubagentStart event increments this count.
Peak concurrency: The maximum number of sessions running simultaneously at any point during the day. This measures your parallelism at the session level — how many independent Claude Code instances you had working at once.
Session count: The total number of sessions for the day. This serves as a normalising factor — more sessions provide more opportunities for concurrency and delegation, so the raw numbers are scaled accordingly.
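Putting the three components together, the formula above can be sketched as a small function. This is an illustrative sketch, not the product's actual implementation; the function name and signature are assumptions.

```python
def multitasking_score(subagent_spawns: int, peak_concurrency: int, session_count: int) -> float:
    """Composite 0-100 Multitasking Score.

    Weights delegation (x2) and concurrency (x3), normalises by the
    number of sessions, scales by 10, and caps the result at 100.
    """
    if session_count == 0:
        # No sessions recorded for the day: nothing to score.
        return 0.0
    raw = (subagent_spawns * 2 + peak_concurrency * 3) / session_count * 10
    return min(100.0, raw)
```

For example, a day with 4 subagent spawns, a peak of 3 concurrent sessions, and 5 total sessions scores `(4×2 + 3×3) / 5 × 10 = 34.0`; heavy delegation across few sessions quickly saturates at the 100 cap.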
Data source
The Multitasking Score is deterministic — every component comes from actual recorded events:
- Subagent spawns: Counted from `SubagentStart` hook events, stored in the `subagent_spawns` field on `plugin_session_summaries`
- Peak concurrency: Computed from timestamp overlaps between sessions (see Concurrency for details on the sweep-line algorithm)
- Session count: Total sessions recorded for the day
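The sweep-line computation referenced above can be sketched as follows: turn each session into a start event (+1) and an end event (−1), sort the events by time, and track the running count. This is a generic sketch under the assumption that sessions are available as (start, end) timestamp pairs; the actual data model may differ.

```python
def peak_concurrency(sessions):
    """Peak number of simultaneously running sessions.

    sessions: iterable of (start, end) pairs of comparable timestamps.
    """
    events = []
    for start, end in sessions:
        events.append((start, 1))   # session opens: +1 to running count
        events.append((end, -1))    # session closes: -1
    # Sort by time; at equal timestamps, process ends (-1) before
    # starts (+1) so back-to-back sessions do not count as overlapping.
    events.sort(key=lambda e: (e[0], e[1]))
    peak = current = 0
    for _, delta in events:
        current += delta
        peak = max(peak, current)
    return peak
```

For instance, sessions spanning (09:00–10:00), (09:30–10:30), and (10:15–11:00) overlap at most two at a time, so the peak is 2.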
Interpretation
| Score | Classification | What it means |
|---|---|---|
| 0–20 | Sequential single-task | Working one task at a time, no delegation. This is the default mode for most users starting out. |
| 20–50 | Moderate delegation | Some parallelism or subagent usage. You're beginning to leverage Claude's ability to work on multiple fronts. |
| 50–80 | Heavy parallelism | Significant concurrent work and delegation. You're treating Claude as a team, not a single assistant. |
| 80–100 | Maximum parallelism | Extensive delegation and concurrency. You're running multiple AI workers and using sub-agents for complex task decomposition. |
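The bands above can be expressed as a simple classifier. One assumption is made that the table leaves ambiguous: boundary scores (20, 50, 80) are treated as belonging to the higher band, i.e. the intervals are half-open.

```python
def classify(score: float) -> str:
    """Map a Multitasking Score (0-100) to its classification band.

    Assumes half-open intervals: a score of exactly 20, 50, or 80
    falls into the higher band.
    """
    if score < 20:
        return "Sequential single-task"
    if score < 50:
        return "Moderate delegation"
    if score < 80:
        return "Heavy parallelism"
    return "Maximum parallelism"
```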
Why it matters
The Multitasking Score reveals whether you're using AI assistance at its full potential. A single Claude Code session working on one task is valuable, but it's only one worker. Running parallel sessions and using subagent delegation turns that single worker into a coordinated team.
Subagent delegation is particularly powerful because it happens automatically within a session. When Claude encounters a task that can be decomposed — like "update all test files to use the new API" — it can spawn sub-agents to handle each file independently. This is parallel execution within a single session, and it dramatically speeds up complex operations.
Session concurrency captures your own parallelism. Running a refactoring session alongside a test-writing session alongside a documentation session means three independent workstreams progressing simultaneously.
Together, these signals measure your total parallel capacity — both the parallelism you initiate (multiple sessions) and the parallelism Claude initiates (sub-agents).
How to increase your score
- Use multiple terminal windows: Start separate Claude Code sessions for independent tasks
- Break large tasks into parallel streams: Instead of one session doing everything sequentially, split work across sessions
- Prompt for delegation: When giving complex tasks, encourage Claude to use sub-agents for independent subtasks
- Identify independent work: Tasks that don't share files or state are good candidates for parallel sessions
The StarCraft analogy
The Multitasking Score maps to army splitting and macro management — the ability to coordinate multiple actions simultaneously across the map. A player who can only control one army group will always be outmanoeuvred by one who splits forces, manages multiple bases, and coordinates attacks on different fronts. In AI-assisted development, running parallel sessions and delegating to sub-agents is the equivalent of multi-front coordination: more work completed, more problems solved, more ground covered per hour.