Prelude

If you write a lot of Rust and spend most of your day inside Claude Code, those two worlds may have stayed separate for a while: Rust for building services and libraries, Claude Code for talking to an AI that helps write them. The Model Context Protocol is the bridge between them.

MCP lets you give Claude Code new abilities by writing servers that expose tools, resources, and prompts over a standardised protocol. You can write these servers in TypeScript, Python, Go, or any language that can handle JSON-RPC over stdio. But if you already know Rust, there is a compelling reason to reach for it here. MCP servers are long-running processes that sit between your AI assistant and your system. They handle file I/O, network calls, and potentially sensitive operations. Rust gives you memory safety, predictable performance, and a single static binary with no runtime dependencies. No node_modules folder. No Python virtual environment. Just a binary you can drop into any machine and run.

The rmcp crate is the official Rust SDK for the Model Context Protocol. It has crossed 4.7 million downloads on crates.io as of early 2026 and provides a macro-driven API that makes building MCP servers feel natural in Rust. This guide walks through building a complete MCP server from scratch, one that actually does something useful, and connecting it to Claude Code.

Why Build a Custom MCP Server

Every MCP tutorial out there builds a weather API client. That is fine for learning the shape of the protocol, but it does not reflect why most developers actually want custom MCP servers. The real use case is giving Claude Code deeper access to your specific workflow: things it cannot do out of the box.

Here is a common scenario. When working on a large codebase, developers often need quick statistics. How many lines of Rust are in this project? Which files are the largest? What is the language breakdown? Shell one-liners can answer these questions, but having Claude Code access these capabilities natively means it can use them while reasoning about your code.

So we are going to build code-stats, an MCP server that provides three tools. The first counts lines in files matching a given extension. The second finds the largest files in a directory. The third gives a full language breakdown of a project. By the end, Claude Code will be able to call these tools whenever it needs to understand the shape of a codebase.

The Journey

What MCP Actually Is

Before writing code, it helps to understand what we are building on top of. The Model Context Protocol is a JSON-RPC 2.0 based protocol that defines how AI applications (called clients) communicate with external capability providers (called servers). If you have worked with the Language Server Protocol that powers editor features like autocomplete, MCP follows a similar architecture.

An MCP server can expose three types of capabilities. Tools are functions the AI can call, like "count lines in these files". Resources are data the AI can read, like a configuration file or database record. Prompts are reusable templates the AI can use. For this guide, we are focusing on tools because they are the most immediately useful.

The communication happens over a transport layer. The two main options are stdio and Streamable HTTP. Stdio is the simplest. The client spawns your server as a child process and sends JSON-RPC messages over stdin. Your server responds on stdout. This is exactly how Claude Code integrates with local MCP servers, and it is what we will use.
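
To make that concrete, here is roughly what a tool invocation looks like on the wire. The exact fields are defined by the MCP specification; the id, path, and arguments here are illustrative, and the tool name is the one we are about to build.

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "count_lines",
    "arguments": { "path": "/home/user/project", "extension": "rs" }
  }
}
```

The server answers with a JSON-RPC result on stdout, and the client feeds that content back to the model.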

One critical detail about stdio transport. Never use println!() in an MCP server that communicates over stdio. Your stdout is the protocol channel. If you print debug messages to stdout, you will corrupt the JSON-RPC stream and the client will disconnect. Use eprintln!() or, better yet, a proper logging framework directed at stderr.

Setting Up the Project

Let's start with a fresh Rust project.

cargo new code-stats
cd code-stats

Open Cargo.toml and add the dependencies we need.

[package]
name = "code-stats"
version = "0.1.0"
edition = "2021"

[dependencies]
rmcp = { version = "0.16", features = ["server", "transport-io", "macros"] }
tokio = { version = "1", features = ["full"] }
serde = { version = "1", features = ["derive"] }
serde_json = "1"
schemars = "0.8"
anyhow = "1"
tracing = "0.1"
tracing-subscriber = { version = "0.3", features = ["env-filter"] }

Here is why each dependency is here. The rmcp crate is the MCP SDK itself. The server feature enables server-side functionality. The transport-io feature gives us stdio transport. The macros feature enables the derive macros that make tool definitions clean. We use tokio because rmcp is async. serde and serde_json handle serialisation. schemars generates JSON Schema definitions from Rust types, which is how MCP clients discover what parameters your tools accept. anyhow gives us ergonomic error handling. And tracing with tracing-subscriber provides structured logging that writes to stderr, keeping stdout clean for the protocol.

Defining Your First Tool

Now let's build the server. Open src/main.rs and start with the imports and our server struct.

use anyhow::Result;
use rmcp::handler::server::wrapper::Json;
use rmcp::model::{ServerCapabilities, ServerInfo};
use rmcp::{tool, ServerHandler, ServiceExt};
use schemars::JsonSchema;
use serde::Deserialize;
use std::collections::HashMap;
use std::fs;
use std::path::{Path, PathBuf};
use tracing::info;

#[derive(Debug, Clone)]
pub struct CodeStatsServer;

Our server struct is deliberately simple. It holds no state because each tool call is self-contained. It receives a directory path, analyses it, and returns results. If your MCP server needed to maintain connections or caches, you would add fields here.

Now let's define the input type for our first tool. This is where schemars comes in. By deriving JsonSchema, we tell the MCP client exactly what parameters this tool accepts, including descriptions and types.

#[derive(Debug, Deserialize, JsonSchema)]
pub struct CountLinesInput {
    /// The directory path to search in
    pub path: String,
    /// File extension to filter by (e.g. "rs", "py", "js"). Do not include the dot.
    pub extension: String,
}

The doc comments on each field become the parameter descriptions that Claude Code sees. Good descriptions matter because they help the AI understand when and how to use your tool.
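
For reference, the JSON Schema that schemars derives for this struct looks roughly like the following. The exact output varies by schemars version, so treat this as an illustration rather than verbatim output.

```json
{
  "type": "object",
  "properties": {
    "path": {
      "type": "string",
      "description": "The directory path to search in"
    },
    "extension": {
      "type": "string",
      "description": "File extension to filter by (e.g. \"rs\", \"py\", \"js\"). Do not include the dot."
    }
  },
  "required": ["path", "extension"]
}
```

This is what Claude Code receives when it lists the server's tools, which is why the doc comments are worth writing carefully.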

Now we implement the tool itself using rmcp's macro system. The #[tool] attribute on a method inside an impl block registers it as an MCP tool. One caveat: the rmcp macro surface has changed between releases, so if your version of the crate rejects these attributes, check its documentation and examples for the current spelling.

#[tool(tool_box)]
impl CodeStatsServer {
    #[tool(description = "Count total lines in files matching a given extension within a directory")]
    pub async fn count_lines(
        &self,
        #[tool(aggr)] input: Json<CountLinesInput>,
    ) -> Result<String, anyhow::Error> {
        let path = PathBuf::from(&input.path);
        if !path.exists() {
            return Ok(format!("Error: path '{}' does not exist", input.path));
        }
        if !path.is_dir() {
            return Ok(format!("Error: path '{}' is not a directory", input.path));
        }

        let mut total_lines: u64 = 0;
        let mut file_count: u64 = 0;
        let ext = &input.extension;

        count_lines_recursive(&path, ext, &mut total_lines, &mut file_count)?;

        Ok(format!(
            "Found {} .{} files containing {} total lines in '{}'",
            file_count, ext, total_lines, input.path
        ))
    }
}

A few things to notice here. The #[tool(tool_box)] attribute on the impl block tells rmcp this block contains tool definitions. The #[tool(description = "...")] attribute on the method defines what Claude Code sees when it lists available tools. The #[tool(aggr)] attribute on the input parameter means "aggregate all parameters into this struct", so the JSON Schema fields from CountLinesInput become the tool's parameters. The return type is Result<String, anyhow::Error>. The string content becomes the tool's response that Claude Code reads.

We also need the recursive helper function that does the actual file traversal.

fn count_lines_recursive(
    dir: &Path,
    extension: &str,
    total_lines: &mut u64,
    file_count: &mut u64,
) -> Result<()> {
    let entries = fs::read_dir(dir)?;
    for entry in entries {
        let entry = entry?;
        let path = entry.path();

        // Skip hidden directories and common non-source directories
        if path.is_dir() {
            let dir_name = path.file_name().unwrap_or_default().to_string_lossy();
            if dir_name.starts_with('.')
                || dir_name == "target"
                || dir_name == "node_modules"
                || dir_name == "vendor"
            {
                continue;
            }
            count_lines_recursive(&path, extension, total_lines, file_count)?;
        } else if path.extension().map_or(false, |e| e == extension) {
            match fs::read_to_string(&path) {
                Ok(content) => {
                    *total_lines += content.lines().count() as u64;
                    *file_count += 1;
                }
                Err(_) => {
                    // Skip binary or unreadable files silently
                }
            }
        }
    }
    Ok(())
}

Notice that we skip hidden directories, target, node_modules, and vendor. This is a practical design choice. Without it, scanning a Rust project would descend into the target directory and count thousands of generated files. Your MCP tools should encode this kind of domain knowledge: the AI does not need to think about which directories to skip, because your tool handles it.

Implementing the Server Handler

rmcp requires you to implement the ServerHandler trait on your server struct. This trait defines the server's identity and capabilities. Here is the implementation.

#[tool(tool_box)]
impl ServerHandler for CodeStatsServer {
    fn get_info(&self) -> ServerInfo {
        ServerInfo {
            instructions: Some(
                "A server that provides code statistics tools. Use count_lines to count lines of code \
                 by file extension, find_largest_files to identify the biggest files, and \
                 language_breakdown to get a summary of languages used in a project."
                    .into(),
            ),
            capabilities: ServerCapabilities::builder().enable_tools().build(),
            ..Default::default()
        }
    }
}

The get_info() method returns a ServerInfo describing the server. The instructions field is free-form guidance that helps the AI understand what this server is for and when to use it, and enable_tools() advertises that this server exposes tools. The #[tool(tool_box)] attribute on this impl block tells rmcp to automatically wire up the tools we defined in the earlier impl block.

The Main Function and Transport

The main function ties everything together. It sets up logging, creates the server, and starts listening on stdio.

#[tokio::main]
async fn main() -> Result<()> {
    // Configure logging to stderr (critical for stdio transport)
    tracing_subscriber::fmt()
        .with_env_filter(
            tracing_subscriber::EnvFilter::from_default_env()
                .add_directive("code_stats=info".parse()?)
        )
        .with_writer(std::io::stderr)
        .init();

    info!("Starting code-stats MCP server");

    let server = CodeStatsServer;

    let transport = rmcp::transport::io::stdio();

    let server_handle = server.serve(transport).await?;

    server_handle.waiting().await?;

    Ok(())
}

Here is what happens step by step. First, we configure tracing_subscriber to write to stderr. This is not optional for stdio MCP servers. If any log line reaches stdout, the protocol breaks. The from_default_env() call means you can control log verbosity with the RUST_LOG environment variable, which is useful for debugging.

Then we create our server struct and the stdio transport. The rmcp::transport::io::stdio() function creates a transport that reads from stdin and writes to stdout. We call server.serve(transport), a method provided by rmcp's ServiceExt trait, which starts the JSON-RPC message loop. The waiting() call blocks until the client disconnects.

Building and Testing Locally

Let's make sure everything compiles.

cargo build --release

Your binary will be at target/release/code-stats. You can do a quick sanity check by sending a JSON-RPC initialise message manually, but the real test is connecting it to Claude Code.

Before that, let's verify the binary runs without crashing.

echo '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2025-03-26","capabilities":{},"clientInfo":{"name":"test","version":"0.1.0"}}}' | ./target/release/code-stats

You should see a JSON response with the server's capabilities and tool list. If you see an error or nothing at all, check that your code compiles cleanly and that you are not accidentally writing to stdout anywhere.

Connecting to Claude Code

Now for the satisfying part. Let's connect our server to Claude Code. There are two ways to do this.

The first is the CLI command, which is good for quick testing.

claude mcp add --transport stdio code-stats -- /absolute/path/to/code-stats/target/release/code-stats

The second is a .mcp.json file in your project root, which is better for sharing with your team. Create .mcp.json with the following content.

{
  "mcpServers": {
    "code-stats": {
      "command": "/absolute/path/to/code-stats/target/release/code-stats"
    }
  }
}

If you are using the .mcp.json approach, the server will be available whenever you open Claude Code in that project directory. The CLI approach registers it in your local scope by default.

Verify the server is registered.

claude mcp list

You should see code-stats in the output. Now open Claude Code and try asking it something like "How many lines of Rust are in this project?" Claude Code will discover the count_lines tool from your server and call it automatically.

If you have worked with MCP servers in other languages before, you will appreciate not having to install any runtime. For a deeper look at how Claude Code manages MCP servers and what else you can do with them, our guide on Claude Code MCP servers and extensions covers the broader picture.

Adding a Second Tool

One tool is useful. Multiple tools that work together are powerful. Let's add two more tools to demonstrate how the pattern scales. First, a tool to find the largest files in a directory.

Add the input type.

#[derive(Debug, Deserialize, JsonSchema)]
pub struct FindLargestFilesInput {
    /// The directory path to search in
    pub path: String,
    /// Maximum number of files to return (defaults to 10)
    pub limit: Option<u32>,
}

The Option<u32> for limit means this parameter is optional in the MCP schema. Claude Code can call the tool with or without specifying a limit. Then add the tool method inside the existing #[tool(tool_box)] impl CodeStatsServer block.

    #[tool(description = "Find the largest files in a directory, sorted by size descending")]
    pub async fn find_largest_files(
        &self,
        #[tool(aggr)] input: Json<FindLargestFilesInput>,
    ) -> Result<String, anyhow::Error> {
        let path = PathBuf::from(&input.path);
        if !path.exists() {
            return Ok(format!("Error: path '{}' does not exist", input.path));
        }
        if !path.is_dir() {
            return Ok(format!("Error: path '{}' is not a directory", input.path));
        }

        let limit = input.limit.unwrap_or(10) as usize;
        let mut files: Vec<(PathBuf, u64)> = Vec::new();
        collect_file_sizes(&path, &mut files)?;

        files.sort_by(|a, b| b.1.cmp(&a.1));
        files.truncate(limit);

        let mut output = format!("Top {} largest files in '{}':\n\n", files.len(), input.path);
        for (file_path, size) in &files {
            let display_path = file_path
                .strip_prefix(&path)
                .unwrap_or(file_path)
                .display();
            output.push_str(&format_file_size(*size, &display_path.to_string()));
            output.push('\n');
        }

        Ok(output)
    }

And the helper functions.

fn collect_file_sizes(dir: &Path, files: &mut Vec<(PathBuf, u64)>) -> Result<()> {
    let entries = fs::read_dir(dir)?;
    for entry in entries {
        let entry = entry?;
        let path = entry.path();
        let name = path.file_name().unwrap_or_default().to_string_lossy();
        // Skip hidden entries and common non-source directories
        if name.starts_with('.') || name == "target" || name == "node_modules" || name == "vendor" {
            continue;
        }
        if path.is_dir() {
            collect_file_sizes(&path, files)?;
        } else if let Ok(metadata) = fs::metadata(&path) {
            // Skip entries whose metadata cannot be read (e.g. broken symlinks)
            files.push((path, metadata.len()));
        }
    }
    Ok(())
}

fn format_file_size(bytes: u64, path: &str) -> String {
    if bytes >= 1_048_576 {
        format!("  {:.1} MB  {}", bytes as f64 / 1_048_576.0, path)
    } else if bytes >= 1024 {
        format!("  {:.1} KB  {}", bytes as f64 / 1024.0, path)
    } else {
        format!("  {} B   {}", bytes, path)
    }
}

Now let's add the language breakdown tool. This one is the most interesting because it combines file counting with extension-to-language mapping.

#[derive(Debug, Deserialize, JsonSchema)]
pub struct LanguageBreakdownInput {
    /// The directory path to analyse
    pub path: String,
}

Add this tool method to the same #[tool(tool_box)] impl CodeStatsServer block.

    #[tool(description = "Get a breakdown of programming languages used in a project by file count and line count")]
    pub async fn language_breakdown(
        &self,
        #[tool(aggr)] input: Json<LanguageBreakdownInput>,
    ) -> Result<String, anyhow::Error> {
        let path = PathBuf::from(&input.path);
        if !path.exists() {
            return Ok(format!("Error: path '{}' does not exist", input.path));
        }
        if !path.is_dir() {
            return Ok(format!("Error: path '{}' is not a directory", input.path));
        }

        let mut stats: HashMap<String, LanguageStats> = HashMap::new();
        collect_language_stats(&path, &mut stats)?;

        let mut sorted: Vec<(String, LanguageStats)> = stats.into_iter().collect();
        sorted.sort_by(|a, b| b.1.lines.cmp(&a.1.lines));

        let total_files: u64 = sorted.iter().map(|(_, s)| s.files).sum();
        let total_lines: u64 = sorted.iter().map(|(_, s)| s.lines).sum();

        let mut output = format!(
            "Language breakdown for '{}':\n\n{:<20} {:>8} {:>12}\n{}\n",
            input.path,
            "Language",
            "Files",
            "Lines",
            "-".repeat(42)
        );

        for (language, language_stats) in &sorted {
            output.push_str(&format!(
                "{:<20} {:>8} {:>12}\n",
                language, language_stats.files, language_stats.lines
            ));
        }

        output.push_str(&format!(
            "{}\n{:<20} {:>8} {:>12}\n",
            "-".repeat(42),
            "Total",
            total_files,
            total_lines
        ));

        Ok(output)
    }

And the supporting types and functions.

#[derive(Debug, Default)]
struct LanguageStats {
    files: u64,
    lines: u64,
}

fn extension_to_language(ext: &str) -> Option<&str> {
    match ext {
        "rs" => Some("Rust"),
        "py" => Some("Python"),
        "js" => Some("JavaScript"),
        "ts" => Some("TypeScript"),
        "tsx" => Some("TSX"),
        "jsx" => Some("JSX"),
        "go" => Some("Go"),
        "java" => Some("Java"),
        "c" => Some("C"),
        "cpp" | "cc" | "cxx" => Some("C++"),
        "h" | "hpp" => Some("C/C++ Header"),
        "rb" => Some("Ruby"),
        "php" => Some("PHP"),
        "swift" => Some("Swift"),
        "kt" => Some("Kotlin"),
        "scala" => Some("Scala"),
        "zig" => Some("Zig"),
        "html" | "htm" => Some("HTML"),
        "css" => Some("CSS"),
        "scss" | "sass" => Some("Sass"),
        "json" => Some("JSON"),
        "yaml" | "yml" => Some("YAML"),
        "toml" => Some("TOML"),
        "xml" => Some("XML"),
        "sql" => Some("SQL"),
        "sh" | "bash" | "zsh" => Some("Shell"),
        "md" | "markdown" => Some("Markdown"),
        "hbs" => Some("Handlebars"),
        _ => None,
    }
}

fn collect_language_stats(dir: &Path, stats: &mut HashMap<String, LanguageStats>) -> Result<()> {
    let entries = fs::read_dir(dir)?;
    for entry in entries {
        let entry = entry?;
        let path = entry.path();
        let name = path.file_name().unwrap_or_default().to_string_lossy();
        if name.starts_with('.') || name == "target" || name == "node_modules" || name == "vendor"
        {
            continue;
        }
        if path.is_dir() {
            collect_language_stats(&path, stats)?;
        } else if let Some(ext) = path.extension().and_then(|e| e.to_str()) {
            if let Some(language) = extension_to_language(ext) {
                let entry = stats
                    .entry(language.to_string())
                    .or_insert_with(LanguageStats::default);
                entry.files += 1;
                match fs::read_to_string(&path) {
                    Ok(content) => {
                        entry.lines += content.lines().count() as u64;
                    }
                    Err(_) => {}
                }
            }
        }
    }
    Ok(())
}

After adding both tools, rebuild with cargo build --release. Claude Code will automatically pick up the new tools the next time it initialises the server. You now have three tools that work together. Claude can ask for a language breakdown, then drill into the specific language with the largest codebase, then find which files in that language are the biggest. The tools compose naturally because they all operate on file paths.

Production Considerations

If you are going to use this server daily, or share it with your team, there are a few things worth getting right.

Logging and debugging. The tracing setup we configured writes to stderr, which means you can see logs without interfering with the protocol. Set RUST_LOG=code_stats=debug when launching the server to get verbose output. If your server is not showing up in Claude Code or tools are failing silently, check stderr output first. You can also run claude mcp list to verify the server is registered and check its status.

Error handling. Notice that our tools return user-friendly error messages as Ok(String) rather than propagating errors with ? at the top level. This is intentional. If a tool returns an Err, the MCP client sees a protocol-level error. If it returns Ok with an error message in the string, the AI can read the error and react intelligently. It might try a different path, ask the user for clarification, or explain what went wrong. Reserve Err for truly unrecoverable situations.
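
As a minimal standalone sketch of that pattern (not tied to rmcp's types, and with hypothetical helper names), the validation can live in a function whose Err branch becomes the string the tool returns as a successful response:

```rust
use std::path::PathBuf;

// Hypothetical helper: validate the input path up front. The Err value is
// a human-readable message for the AI, not a protocol-level error.
fn validate_dir(raw: &str) -> Result<PathBuf, String> {
    let path = PathBuf::from(raw);
    if !path.exists() {
        return Err(format!("Error: path '{}' does not exist", raw));
    }
    if !path.is_dir() {
        return Err(format!("Error: path '{}' is not a directory", raw));
    }
    Ok(path)
}

// Inside a tool body, the message is folded back into the Ok channel so
// the AI reads it as ordinary tool output instead of a failed call.
fn run_tool(raw: &str) -> String {
    match validate_dir(raw) {
        Ok(path) => format!("Scanning '{}'", path.display()),
        Err(msg) => msg,
    }
}
```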

Input validation. Always validate paths. Our tools check that paths exist and are directories before traversing them. In a production server, you might also want to canonicalise paths and restrict them to certain directories to prevent the AI from accidentally scanning sensitive locations. When you are ready to move beyond local stdio and run your server as a shared service, our guide on MCP servers in production covers HTTP transport, containerisation, health checks, and graceful shutdown.
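
One way to sketch that restriction, assuming a single allowed root directory (the helper name and error strings are illustrative, not part of rmcp):

```rust
use std::path::{Path, PathBuf};

// Canonicalise the requested path and refuse anything that resolves
// outside the allowed root, covering "../" traversal and symlink escapes.
fn resolve_within(allowed_root: &Path, requested: &str) -> Result<PathBuf, String> {
    let root = allowed_root
        .canonicalize()
        .map_err(|e| format!("cannot resolve allowed root: {}", e))?;
    let resolved = root
        .join(requested)
        .canonicalize()
        .map_err(|e| format!("cannot resolve '{}': {}", requested, e))?;
    if resolved.starts_with(&root) {
        Ok(resolved)
    } else {
        Err(format!("'{}' escapes the allowed directory", requested))
    }
}
```

Canonicalising before the starts_with check matters: a purely lexical prefix check can be fooled by symlinks pointing outside the root.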

Testing. You can test MCP tools as regular async Rust functions. The tool methods on your server struct are just methods. Call them directly in tests with constructed input.

#[cfg(test)]
mod tests {
    use super::*;

    #[tokio::test]
    async fn test_count_lines_nonexistent_path() {
        let server = CodeStatsServer;
        let input = Json(CountLinesInput {
            path: "/nonexistent/path".to_string(),
            extension: "rs".to_string(),
        });
        let result = server.count_lines(input).await.unwrap();
        assert!(result.contains("does not exist"));
    }

    #[tokio::test]
    async fn test_language_mapping() {
        assert_eq!(extension_to_language("rs"), Some("Rust"));
        assert_eq!(extension_to_language("py"), Some("Python"));
        assert_eq!(extension_to_language("xyz"), None);
    }
}

Run tests with cargo test. Because our tools are thin wrappers over straightforward filesystem traversal, with no hidden state, they are easy to test. You could also create temporary directories with known files for integration tests.
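
A sketch of that integration-test idea, using only the standard library (the directory name is arbitrary):

```rust
use std::fs;

// Build a throwaway directory containing a file with a known number of
// lines, count them the same way the server does, then clean up.
fn demo_line_count() -> std::io::Result<u64> {
    let dir = std::env::temp_dir().join("code-stats-demo");
    fs::create_dir_all(&dir)?;
    fs::write(dir.join("a.rs"), "fn main() {}\n// a comment\n")?;
    let content = fs::read_to_string(dir.join("a.rs"))?;
    let lines = content.lines().count() as u64;
    fs::remove_dir_all(&dir)?;
    Ok(lines)
}
```

In a real test you would point count_lines_recursive at the directory and assert on the accumulated totals.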

Performance. For large repositories, file traversal can take a noticeable amount of time. The async runtime helps here because rmcp can continue handling protocol messages while your tool function is running. If you needed to scan truly massive directories, you could add progress reporting through MCP's built-in notification system, but for most codebases the synchronous fs::read_dir approach is fast enough.
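
If you want to check whether traversal time is actually a problem before adding any machinery, a stderr-only timer is enough. This helper is an illustration, not part of the server above:

```rust
use std::time::Instant;

// Run a closure, report its wall-clock time on stderr (never stdout,
// which belongs to the protocol), and pass the result through unchanged.
fn timed<T>(label: &str, f: impl FnOnce() -> T) -> T {
    let start = Instant::now();
    let out = f();
    eprintln!("{} took {:?}", label, start.elapsed());
    out
}
```

Wrapping the call to a helper like count_lines_recursive in timed("scan", ...) gives you a per-request measurement without touching the protocol stream.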

Governance and management. Building a custom MCP server is the right choice when you need domain-specific tools that do not exist yet. But as your team accumulates multiple MCP servers, the operational question shifts from "how do I build this?" to "how do I manage all of these?" Which servers are running, who has access, and are they behaving correctly? This is where governance infrastructure like systemprompt.io fits in. It provides a control plane for managing MCP servers, skills, and AI tool access across teams, so you can focus on building the tools themselves rather than the plumbing around deployment, access control, and observability. Disclosure: this guide is published by systemprompt.io.

The Lesson

Building this server reveals something important about AI tool integration. The best MCP tools are not wrappers around APIs that Claude Code could call directly. They are tools that encode domain knowledge. Our count_lines tool knows to skip target and node_modules. Our language_breakdown tool knows that .tsx is TSX and .hbs is Handlebars. This knowledge is embedded in the tool so the AI does not have to figure it out every time.

This is a different mental model from writing a REST API. With an API, you want to be generic and let the client decide how to use it. With an MCP tool, you want to be opinionated and make the right thing easy. The AI is your user, and it works best when your tools give clear, structured, context-rich responses.

Rust is particularly well-suited to this because the type system forces you to think about your tool's contract upfront. The JsonSchema derive makes that contract explicit and machine-readable. The compiler catches errors before they reach the AI. And the resulting binary is fast enough that the AI never has to wait for your tool to respond.

The MCP specification continues to evolve. Streamable HTTP transport enables remote MCP servers that multiple clients can connect to. Resource subscriptions let servers push updates. The protocol is growing, and having a solid Rust server as your foundation means you can adopt new features as they land in the rmcp crate.

Conclusion

We built a complete MCP server in Rust that provides three practical tools for analysing codebases. We used the rmcp crate's macro system to define tools declaratively, implemented the server handler trait, and connected everything to Claude Code over stdio transport. The entire server compiles to a single binary with no runtime dependencies.

The full source code for this guide is about 250 lines of Rust. That is all it takes to meaningfully extend what Claude Code can do. If you want to go further, here are some ideas. Add a tool that searches for TODO comments and ranks them by file. Add a tool that computes cyclomatic complexity for Rust functions. Add a resource that exposes your project's Cargo.toml as structured data. The rmcp documentation covers resources and prompts in addition to the tools we used here.

The official MCP build server guide is a good next reference if you want to understand the protocol at a deeper level. And if you are already using Claude Code with MCP servers, the Claude Code MCP documentation covers advanced configuration including environment variables, permission scoping, and server lifecycle management.

Your AI assistant is only as capable as the tools it has access to. Now you know how to give it new ones.