Building MCP Servers: Extending Claude’s Capabilities with the Model Context Protocol
The artificial intelligence landscape is undergoing a fundamental shift. While large language models have achieved remarkable advances in reasoning and quality, they’ve remained constrained by their isolation from data—trapped behind information silos and legacy systems. Every new data source has historically required its own custom integration, making truly connected AI systems difficult to scale.
Enter the Model Context Protocol (MCP)—an open standard developed by Anthropic that’s rapidly becoming the “USB-C port for AI applications.” Just as USB-C revolutionized device connectivity with a universal standard, MCP provides a standardized way to connect AI applications to external systems, tools, and data sources.
Understanding the MCP Architecture
At its core, MCP follows a client-server architecture with three key participants:
MCP Host: The AI application that coordinates and manages connections—this could be Claude Desktop, Claude Code, or any AI-powered tool that supports MCP.
MCP Client: A component instantiated by the host that maintains a dedicated connection to an MCP server and obtains context for the host to use.
MCP Server: A program that provides context to MCP clients. These can run locally (using STDIO transport) or remotely (using HTTP transport).
The protocol operates across two layers. The data layer implements JSON-RPC 2.0-based communication, handling lifecycle management, tools, resources, and prompts. The transport layer manages communication channels, supporting both STDIO for local processes and Streamable HTTP for remote servers.
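To make the data layer concrete, here is a sketch of a single JSON-RPC 2.0 message as it crosses an MCP transport — the host's client asking a server to invoke a tool. The method and field names follow the MCP specification; the tool name `get_alerts` is just an example:

```python
import json

# A client-to-server request on the MCP data layer: invoke the tool
# "get_alerts" with one argument. Every MCP message is JSON-RPC 2.0.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "get_alerts", "arguments": {"state": "CA"}},
}

# Over the STDIO transport, each message is serialized as a single
# line of JSON written to the process's stdin or stdout.
wire_message = json.dumps(request)
```

The transport layer only moves these serialized messages around; the semantics (what `tools/call` means, what a valid response looks like) live entirely in the data layer.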
This architecture enables AI systems to maintain context as they move between different tools and datasets, replacing fragmented integrations with a more sustainable, scalable approach.
The Three Core Primitives
MCP defines three fundamental primitives that servers can expose:
Tools are executable functions that AI applications can invoke to perform actions—file operations, API calls, database queries, or any computational task. They’re the “do” of MCP.
Resources provide contextual information to AI applications—file contents, database records, API responses, or any data the AI needs to reference. They’re the “know” of MCP.
Prompts are reusable templates that help structure interactions with language models—system prompts, few-shot examples, or specialized instruction sets. They’re the “how” of MCP.
Additionally, MCP supports client primitives that enable richer interactions: sampling (requesting LLM completions from the host), elicitation (requesting user input), and logging (debugging and monitoring).
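Each primitive maps onto a small, fixed set of JSON-RPC methods on the wire. As a quick orientation (method names taken from the MCP specification), a sketch of the mapping:

```python
# JSON-RPC methods a client uses for each server primitive.
SERVER_PRIMITIVE_METHODS = {
    "tools": ["tools/list", "tools/call"],
    "resources": ["resources/list", "resources/read"],
    "prompts": ["prompts/list", "prompts/get"],
}

# Client primitives mostly flow the other way: the server calls into the host.
CLIENT_PRIMITIVE_METHODS = {
    "sampling": ["sampling/createMessage"],      # server asks host for an LLM completion
    "elicitation": ["elicitation/create"],       # server asks host for user input
    "logging": ["notifications/message"],        # server streams log messages to the host
}
```

The `*/list` methods are what make MCP integrations dynamic: a host can discover what a server offers at connection time rather than hardcoding it.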
Building Your First MCP Server: A Weather Example
Let’s walk through building a practical MCP server using Python and the FastMCP SDK. This server will provide weather data from the National Weather Service API.
Setup and Dependencies
First, install the MCP SDK (version 1.2.0 or higher required):
```bash
pip install mcp
```
Import the necessary packages:
```python
from typing import Any

import httpx
from mcp.server.fastmcp import FastMCP

# Initialize FastMCP server
mcp = FastMCP("weather")

# Constants
NWS_API_BASE = "https://api.weather.gov"
USER_AGENT = "weather-app/1.0"
```
Implementing Helper Functions
Add functions to query and format weather data:
```python
async def make_nws_request(url: str) -> dict[str, Any] | None:
    """Make a request to the NWS API with proper error handling."""
    headers = {"User-Agent": USER_AGENT, "Accept": "application/geo+json"}
    async with httpx.AsyncClient() as client:
        try:
            response = await client.get(url, headers=headers, timeout=30.0)
            response.raise_for_status()
            return response.json()
        except Exception:
            return None


def format_alert(feature: dict) -> str:
    """Format an alert feature into a readable string."""
    props = feature["properties"]
    return f"""
Event: {props.get("event", "Unknown")}
Area: {props.get("areaDesc", "Unknown")}
Severity: {props.get("severity", "Unknown")}
Description: {props.get("description", "No description available")}
Instructions: {props.get("instruction", "No specific instructions provided")}
"""
```
Creating MCP Tools
The @mcp.tool() decorator automatically generates tool definitions from Python type hints and docstrings:
```python
@mcp.tool()
async def get_alerts(state: str) -> str:
    """Get weather alerts for a US state.

    Args:
        state: Two-letter US state code (e.g. CA, NY)
    """
    url = f"{NWS_API_BASE}/alerts/active/area/{state}"
    data = await make_nws_request(url)

    if not data or "features" not in data:
        return "Unable to fetch alerts or no alerts found."

    if not data["features"]:
        return "No active alerts for this state."

    alerts = [format_alert(feature) for feature in data["features"]]
    return "\n---\n".join(alerts)


@mcp.tool()
async def get_forecast(latitude: float, longitude: float) -> str:
    """Get weather forecast for a location.

    Args:
        latitude: Latitude of the location
        longitude: Longitude of the location
    """
    # The NWS API requires two calls: first resolve the coordinates to a
    # forecast grid endpoint, then fetch the forecast from that endpoint.
    points_url = f"{NWS_API_BASE}/points/{latitude},{longitude}"
    points_data = await make_nws_request(points_url)

    if not points_data:
        return "Unable to fetch forecast data for this location."

    forecast_url = points_data["properties"]["forecast"]
    forecast_data = await make_nws_request(forecast_url)

    if not forecast_data:
        return "Unable to fetch detailed forecast."

    # Format the next five forecast periods into readable text
    periods = forecast_data["properties"]["periods"]
    forecasts = []
    for period in periods[:5]:
        forecast = f"""
{period["name"]}:
Temperature: {period["temperature"]}°{period["temperatureUnit"]}
Wind: {period["windSpeed"]} {period["windDirection"]}
Forecast: {period["detailedForecast"]}
"""
        forecasts.append(forecast)

    return "\n---\n".join(forecasts)
```
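It helps to see why the decorator needs so little from you. Below is a simplified, illustrative sketch of how a tool schema can be derived from a function's type hints and docstring — this is the idea behind `@mcp.tool()`, not the SDK's actual implementation, which handles far more (defaults, Pydantic models, nested types, structured output):

```python
import inspect

# Mapping from Python annotation types to JSON Schema type names.
TYPE_MAP = {str: "string", float: "number", int: "integer", bool: "boolean"}


def tool_schema(func) -> dict:
    """Build a minimal JSON-Schema-style tool definition from a function.

    A toy sketch of what a tool decorator can infer automatically;
    unannotated parameters fall back to "string".
    """
    sig = inspect.signature(func)
    properties = {
        name: {"type": TYPE_MAP.get(param.annotation, "string")}
        for name, param in sig.parameters.items()
    }
    return {
        "name": func.__name__,
        "description": inspect.getdoc(func) or "",
        "inputSchema": {
            "type": "object",
            "properties": properties,
            "required": [
                name
                for name, param in sig.parameters.items()
                if param.default is inspect.Parameter.empty
            ],
        },
    }


async def get_forecast(latitude: float, longitude: float) -> str:
    """Get weather forecast for a location."""
    return ""


schema = tool_schema(get_forecast)
# schema["inputSchema"]["properties"] ->
#   {"latitude": {"type": "number"}, "longitude": {"type": "number"}}
```

Because the schema is generated rather than hand-written, the tool definition the client sees via `tools/list` can never drift out of sync with the function signature.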
Running the Server
Finally, initialize and run the server:
```python
def main():
    mcp.run(transport="stdio")


if __name__ == "__main__":
    main()
```
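To try the server from Claude Desktop, register it in the app's `claude_desktop_config.json`. The snippet below is a minimal sketch — the path is a placeholder for wherever you saved the server script, and the official quickstart runs it through `uv` instead of invoking `python` directly:

```json
{
  "mcpServers": {
    "weather": {
      "command": "python",
      "args": ["/ABSOLUTE/PATH/TO/weather.py"]
    }
  }
}
```

After restarting Claude Desktop, the `get_alerts` and `get_forecast` tools should appear in the app's tools menu.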
Critical Implementation Details
When building STDIO-based MCP servers, never write to stdout. The JSON-RPC protocol uses stdout for communication, so any console output will corrupt messages and break your server.
```python
# ❌ Bad (STDIO): stdout carries JSON-RPC messages, so this corrupts the stream
print("Processing request")

# ✅ Good (STDIO): write diagnostics to stderr instead
import sys
print("Processing request", file=sys.stderr)

# ✅ Also good: the logging module sends output to stderr by default
import logging
logging.info("Processing request")
```
For HTTP-based servers, standard output logging is fine since it doesn’t interfere with HTTP responses.
The Growing MCP Ecosystem
The MCP ecosystem is expanding rapidly. The official MCP servers repository includes reference implementations for popular systems:
- Git – Tools to read, search, and manipulate Git repositories
- Filesystem – Secure file operations with configurable access controls
- PostgreSQL – Read-only database access with schema inspection
- Puppeteer – Browser automation and web scraping
- Slack – Channel management and messaging capabilities
- Google Drive – File access and search capabilities
Development tools companies including Zed, Replit, Codeium, and Sourcegraph are integrating MCP to enhance their platforms, enabling AI agents to better retrieve relevant information and understand coding context.
Early enterprise adopters like Block and Apollo have already integrated MCP into their systems, recognizing its potential for building agentic systems that “remove the burden of the mechanical so people can focus on the creative.”
MCP vs. Traditional API Integration
Traditional AI integrations require custom code for each data source—custom authentication, custom data parsing, custom error handling. MCP replaces this fragmentation with a single protocol.
| Aspect | Traditional Integration | MCP |
|---|---|---|
| Setup | Custom code per source | Standard protocol |
| Authentication | Implementation-specific | Standard OAuth/API keys |
| Tool Discovery | Hardcoded | Dynamic via tools/list |
| Context Sharing | Manual plumbing | Built-in primitives |
| Portability | Locked to specific AI | Works across MCP hosts |
This standardization means AI systems can maintain context as they move between different tools and datasets—a capability that’s essential for truly capable AI assistants.
Getting Started Today
Developers can start building MCP servers immediately. Claude Desktop supports local MCP servers, and all Claude.ai plans support connecting MCP servers to the desktop app.
Official resources to get started:
- MCP Documentation: modelcontextprotocol.io
- Python SDK: `pip install mcp`
- TypeScript SDK: `@modelcontextprotocol/sdk`
- Example Servers: github.com/modelcontextprotocol/servers
- MCP Inspector: a development tool for testing and debugging servers
Additional SDKs are available for C#, Go, Java, Kotlin, PHP, Ruby, Rust, and Swift, making MCP accessible across the entire development ecosystem.
Conclusion
The Model Context Protocol represents a fundamental shift in how we build AI-powered applications. By providing a universal standard for connecting AI systems to data sources, MCP eliminates the fragmentation that has historically limited AI capabilities.
Whether you’re building internal tools, enhancing existing applications, or creating entirely new AI-powered experiences, MCP provides the foundation for truly context-aware AI systems. The protocol is open, the ecosystem is growing, and the possibilities are expanding daily.
The future of AI isn’t just about more powerful models—it’s about better-connected models. MCP is the bridge that makes that future possible.
Sources: Anthropic MCP Announcement, Model Context Protocol Documentation, MCP GitHub Repository, Official MCP SDK Documentation, Block Technology Public Statements, Enterprise AI Integration Reports 2026