What is Model Context Protocol (MCP)?
Model Context Protocol (MCP) is an open standard that connects LLM applications to the systems where your data and tools live. Think of it as a universal adapter, like USB-C, but for agentic systems. Just as USB-C provides a standard way to connect devices to peripherals, MCP offers a standardized, secure, and simple way to connect LLM agents like Claude to real-world applications, databases, and services.
This enables LLM agents to move beyond their training data, giving them the context they need to read files from your computer, search through internal knowledge bases, or update tasks in project management tools.
The Problem MCP Solves
Before MCP, LLM agents were capable but limited to information you manually provided. They couldn't:
- Access real-time information
- Interact with external tools and services (APIs, databases)
- Perform actions beyond text generation
- Connect to specialized software securely
Every integration required bespoke, custom development, making it difficult and time-consuming to scale agentic capabilities. MCP solves this with a single, open protocol, creating a growing ecosystem of interoperable LLM applications and tools.
How MCP Works
MCP creates a standardized bridge between an LLM application (Host) and external Servers. The Host (e.g., Claude Desktop, VS Code) runs Clients, with each client maintaining a dedicated, one-to-one connection to a server.
```mermaid
graph LR
    H[LLM Host] --> C1[Client 1]
    H --> C2[Client 2]
    C1 --> S1[Local Server]
    C2 --> S2[Remote Server]
    S1 --> R1[Local Files]
    S2 --> R2[Remote API]
```
Key Architectural Layers
- MCP Host: The LLM application (like Claude or VS Code) that manages one or more MCP clients. It handles the user interface, security policies, and LLM integration.
- MCP Client: A component within the host that connects to a single MCP server, handling protocol negotiation and routing messages.
- MCP Server: A program that provides context and capabilities by exposing tools, resources, and prompts. Servers can run locally (stdio transport) or remotely (Streamable HTTP transport).
- Transport Layer: Defines the communication channel:
  - stdio: for local processes, exchanging JSON-RPC messages via stdin/stdout
  - Streamable HTTP: for remote services, using POST/GET requests with SSE support
- Data Layer: Defines the JSON-RPC 2.0 based protocol for exchanging messages
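To make the data layer concrete, here is a sketch of what a tool-invocation exchange could look like on the wire. The `tools/call` method and JSON-RPC 2.0 envelope follow the protocol; the tool name and payloads are illustrative.

```python
import json

# A hypothetical tools/call request as it would appear on the wire.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "searchFlights",
        "arguments": {"origin": "NYC", "destination": "Barcelona", "date": "2024-06-15"},
    },
}

# A matching success response carries the same id and a result object.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {"content": [{"type": "text", "text": "Found 12 flights"}]},
}

wire_request = json.dumps(request)
print(wire_request)
```

Because requests and responses are correlated by `id`, a client can have several calls in flight at once over the same connection.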
Core Concepts (The Building Blocks)
Servers provide functionality through three core primitives, each with different control:
| Primitive | Who Controls It | Purpose | Real-World Example |
|---|---|---|---|
| Tools | Model-controlled | Enables agents to perform actions | Search flights, send messages, create files |
| Resources | Application-controlled | Provides data for context | Documents, calendars, database schemas |
| Prompts | User-controlled | Reusable interaction templates | /plan-vacation, /summarize-meetings |
1. Tools (Agent Actions)
Functions that LLM agents can call to perform actions. Each tool has a schema-defined interface. The model requests tool execution, but users must provide explicit approval.
```json
{
  "name": "searchFlights",
  "description": "Search for available flights",
  "inputSchema": {
    "type": "object",
    "properties": {
      "origin": { "type": "string", "description": "Departure city" },
      "destination": { "type": "string", "description": "Arrival city" },
      "date": { "type": "string", "format": "date" }
    },
    "required": ["origin", "destination", "date"]
  }
}
```
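As a minimal sketch of how a host might use this schema, the check below verifies that a call supplies every required property before the user is asked for approval. A real client would run a full JSON Schema validator (for example, the jsonschema package); this hand-rolled helper is for illustration only.

```python
# Assumed schema, mirroring the searchFlights example above.
input_schema = {
    "type": "object",
    "properties": {
        "origin": {"type": "string"},
        "destination": {"type": "string"},
        "date": {"type": "string"},
    },
    "required": ["origin", "destination", "date"],
}

def missing_arguments(schema: dict, arguments: dict) -> list[str]:
    """Return the required properties absent from the call arguments."""
    return [key for key in schema.get("required", []) if key not in arguments]

print(missing_arguments(input_schema, {"origin": "NYC", "date": "2024-06-15"}))
```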
2. Resources (Context Data)
Data sources that LLM agents can read to gain context. Resources are identified by URI and can be anything from local files to API endpoints. The application decides how to retrieve and use this data.
```json
{
  "uri": "file:///Documents/Travel/passport.pdf",
  "name": "passport.pdf",
  "mimeType": "application/pdf"
}
```
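A filesystem server might build that descriptor from a local path. The sketch below is an assumption about how such a server could derive the URI and MIME type; the field names mirror the example above.

```python
import mimetypes
from pathlib import Path

def describe_resource(path: str) -> dict:
    """Build a resource descriptor (URI, name, MIME type) for a local file.
    Assumes an absolute POSIX-style path; a hypothetical helper, not SDK API."""
    p = Path(path)
    mime, _ = mimetypes.guess_type(p.name)
    return {
        "uri": f"file://{p.as_posix()}",
        "name": p.name,
        "mimeType": mime or "application/octet-stream",
    }

print(describe_resource("/Documents/Travel/passport.pdf"))
```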
3. Prompts (Interaction Templates)
Reusable, user-controlled templates that structure common tasks. They accept arguments to create consistent workflows. Users typically invoke them via slash commands or command palette.
```json
{
  "name": "plan-vacation",
  "title": "Plan a vacation",
  "description": "Guide through the vacation planning process",
  "arguments": [
    { "name": "destination", "type": "string", "required": true },
    { "name": "duration", "type": "number", "description": "days" }
  ]
}
```
Protocol Implementation Details
Communication Format
- All messages use JSON-RPC 2.0 format
- UTF-8 encoded, newline-delimited in stdio mode
- Support for requests, responses, and notifications
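The stdio framing above can be sketched in a few lines: each message is one JSON document on its own line. Here `io.StringIO` stands in for the server's stdin/stdout; a real transport would use the actual process pipes.

```python
import io
import json

def write_message(stream, message: dict) -> None:
    """Frame one JSON-RPC message as a newline-delimited line."""
    stream.write(json.dumps(message) + "\n")

def read_messages(stream):
    """Parse one JSON-RPC message per line from the stream."""
    for line in stream:
        yield json.loads(line)

buffer = io.StringIO()  # stand-in for a process pipe
write_message(buffer, {"jsonrpc": "2.0", "id": 1, "method": "ping"})
write_message(buffer, {"jsonrpc": "2.0", "id": 1, "result": {}})
buffer.seek(0)
messages = list(read_messages(buffer))
print(messages)
```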
Lifecycle & Handshake
Connection establishment follows a strict sequence:
1. Client sends an `initialize` request with its capabilities
2. Server responds with its own capabilities
3. Client sends an `initialized` notification
4. Connection is ready for operation
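The handshake messages can be written out as plain dicts. The method names follow the MCP lifecycle; the protocol version, capability values, and client/server names below are illustrative, not prescriptive.

```python
# Step 1: client announces its protocol version and capabilities.
initialize_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2025-03-26",  # illustrative version string
        "capabilities": {"sampling": {}},
        "clientInfo": {"name": "example-client", "version": "0.1.0"},
    },
}

# Step 2: server replies with the features it supports.
initialize_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "protocolVersion": "2025-03-26",
        "capabilities": {"tools": {}, "resources": {}},
        "serverInfo": {"name": "example-server", "version": "0.1.0"},
    },
}

# Step 3: sent as a notification, so it carries no id and expects no reply.
initialized_notification = {
    "jsonrpc": "2.0",
    "method": "notifications/initialized",
}
```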
Capability Negotiation
During initialization, both sides declare supported features:
- Tools, resources, prompts support
- Sampling capabilities (server requesting LLM inference)
- Root directory awareness
- Authorization mechanisms
Benefits of MCP
For Users
- Natural Language Interface: Interact with complex tools conversationally
- Access to Your Context: Agents can securely access your documents, data, and tools
- Real-time Results: Live data and immediate analysis
- Secure by Design: Full control with explicit permission for every action
For Developers
- Standardized API: Build once, work with all MCP-compatible LLM clients
- Reduced Complexity: Focus on features instead of custom connectors
- Growing Ecosystem: Leverage open-source servers from Anthropic and community
- Future-proof: Open standard ensures compatibility as LLM technology evolves
MCP vs. Traditional Approaches
| Aspect | Traditional Tools | MCP-Enabled Tools |
|---|---|---|
| Interface | Command line, GUI | Natural language |
| Learning Curve | Steep, tool-specific | Minimal, conversational |
| Integration | Manual scripting | Automatic discovery |
| Error Handling | Manual debugging | Agent-assisted troubleshooting |
| Workflow | Linear, rigid | Flexible, composable |
Real-World Example: Spatial Transcriptomics
Traditional Workflow (CLI)
```shell
# 1. Find relevant files
grep -r "Q3-roadmap" ~/Documents/project-alpha
# 2. Open and read each file
cat ~/Documents/project-alpha/meeting-notes.md
```

```python
# 3. Load spatial data
import scanpy as sc
adata = sc.read_h5ad("data.h5ad")
# 4. Preprocess
sc.pp.normalize_total(adata)
sc.pp.log1p(adata)
# 5. Analyze
import squidpy as sq
sq.gr.spatial_neighbors(adata)
sq.gr.spatial_autocorr(adata)
```
MCP-Powered Workflow with ChatSpatial
User: "Load my Visium data and identify spatial domains"

Agent: I'll analyze your spatial transcriptomics data:
1. Loading your Visium dataset
2. Performing preprocessing
3. Identifying spatial domains using SpaGCN
4. Creating visualizations

[Agent automatically executes using the ChatSpatial MCP server]
MCP Ecosystem
Compatible LLM Clients
- Claude Desktop & Claude.ai: Native MCP support
- VS Code: Integration via GitHub Copilot
- LM Studio: Connect local models to tools
- Cursor, Warp, Zed: Agent-native editors with MCP support
Popular MCP Servers
- File Systems: Secure access to local and cloud files
- Databases: PostgreSQL, SQLite, MongoDB
- APIs: GitHub, Slack, Google Drive, Sentry
- Development Tools: Git, Docker, Kubernetes
- Scientific Tools: ChatSpatial for spatial transcriptomics
Getting Started
1. Install an MCP Client
Download Claude Desktop or VS Code.
2. Configure MCP Servers
Edit your client's config file:
- macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
- Windows: `%APPDATA%\Claude\claude_desktop_config.json`
Example configuration for ChatSpatial:
```json
{
  "mcpServers": {
    "chatspatial": {
      "command": "/path/to/chatspatial_env/bin/python",
      "args": ["-m", "chatspatial"]
    },
    "filesystem": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "/path/to/your/data"
      ]
    }
  }
}
```
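A common failure mode is a config entry that is missing its launch command. The sketch below shows a sanity check one could run over a config file of this shape; it is a hypothetical helper, not part of any client.

```python
import json

def check_config(text: str) -> list[str]:
    """Report mcpServers entries that lack a 'command' field."""
    config = json.loads(text)
    problems = []
    for name, server in config.get("mcpServers", {}).items():
        if "command" not in server:
            problems.append(f"{name}: missing 'command'")
    return problems

example = '{"mcpServers": {"filesystem": {"command": "npx", "args": ["-y"]}, "broken": {}}}'
print(check_config(example))
```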
3. Start Using Tools
Restart your client. You'll see an indicator that MCP tools are available. Start asking your LLM agent to perform tasks naturally.
Security and Authorization
Security is a core principle of MCP:
- Explicit User Consent: No tool execution without approval
- Granular Permissions: Configure which tools agents can use
- Sandboxing: Isolated server connections
- Secure Authorization: OAuth 2.1 for remote servers
- Local-First Option: Keep sensitive data on your machine with stdio transport
Key Code Patterns (Python SDK Example)
Server Implementation - Defining Tools
```python
from mcp.server.fastmcp import FastMCP

# Initialize server
mcp = FastMCP("weather_server")

@mcp.tool()
async def get_forecast(latitude: float, longitude: float) -> str:
    """
    Get weather forecast for a location.

    Args:
        latitude: Latitude of the location.
        longitude: Longitude of the location.
    """
    # Implementation (call_weather_api is defined elsewhere)
    forecast_data = await call_weather_api(latitude, longitude)
    return f"Forecast: {forecast_data}"

if __name__ == "__main__":
    mcp.run(transport='stdio')
```
The @mcp.tool() decorator automatically converts functions into MCP tools, parsing function names, types, and docstrings into tool schemas.
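To make that conversion tangible, here is a stdlib-only sketch of the kind of introspection such a decorator performs: mapping a function's signature and docstring to a tool schema. The real SDK handles far more types and metadata; this toy version covers only a few annotations.

```python
import inspect

# Minimal annotation-to-JSON-Schema mapping for illustration.
TYPE_NAMES = {float: "number", int: "integer", str: "string"}

def function_to_tool_schema(func) -> dict:
    """Derive a tool schema from a function's signature and docstring."""
    signature = inspect.signature(func)
    properties = {
        name: {"type": TYPE_NAMES.get(param.annotation, "string")}
        for name, param in signature.parameters.items()
    }
    return {
        "name": func.__name__,
        "description": (func.__doc__ or "").strip(),
        "inputSchema": {
            "type": "object",
            "properties": properties,
            # All parameters here have no defaults, so treat them as required.
            "required": list(properties),
        },
    }

def get_forecast(latitude: float, longitude: float) -> str:
    """Get weather forecast for a location."""

schema = function_to_tool_schema(get_forecast)
print(schema["inputSchema"]["properties"])
```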
The Future of MCP (Roadmap)
MCP is rapidly evolving with focus on:
- Agents: Support for long-running, asynchronous operations
- Authentication & Security: Enterprise-grade authorization and SSO
- Validation & Tooling: Reference implementations and compliance testing
- Registry: Centralized server discovery and distribution
- Multimodality: Support for video and streaming interactive experiences
Learn More
Official Resources
- Official Website: modelcontextprotocol.io - Complete documentation
- Specification: spec.modelcontextprotocol.io - Technical specification
- GitHub: github.com/modelcontextprotocol - Open source repositories
- Discord: Join Discord - Community discussion
Video Resources
- Simple MCP Demo: YouTube Tutorial - Clear explanation with examples
- Claude Code Introduction: Official Video - Anthropic announcement
- Claude Code Tutorial: Setup Guide - Installation and usage
Developer Resources
- MCP Servers Collection: modelcontextprotocol/servers - Reference implementations
- MCP SDKs: TypeScript, Python, Go, Kotlin, Swift, Java, C#, Ruby, Rust
- ChatSpatial Example: See MCP enabling spatial transcriptomics analysis
Ready to unlock the full potential of your LLM agent? Try ChatSpatial with Claude Desktop and experience how natural language transforms spatial transcriptomics workflows!