Building Your First MCP Server: A Complete Beginner’s Guide

Have you ever wondered how AI agents like Claude connect to external tools and data sources? The answer lies in something called the Model Context Protocol (MCP). Today, I’ll walk you through building a simple but complete MCP server from scratch, explaining every piece along the way.

What is MCP and Why Should You Care?

The Model Context Protocol is like a universal language that lets AI agents talk to your applications and data sources. Think of it as a bridge between AI and your tools - whether that’s a database, an API, or any service you want the AI to interact with.

Why is this important?

  • AI agents can access real-time data
  • You can extend AI capabilities with your own tools
  • It’s becoming the standard way to integrate AI with external systems

What We’re Building Today

We’ll create a simple weather MCP server that:

  • Responds to weather requests for any city
  • Shows detailed logs of every request and response
  • Follows the complete MCP specification
  • Works with any MCP-compatible AI agent

The best part? We’ll use only Python’s built-in libraries - no external dependencies needed!

Understanding the MCP Flow

Before we dive into code, let’s understand how MCP works:

1. AI Agent → "initialize" → MCP Server
2. AI Agent → "tools/list" → MCP Server (what can you do?)
3. AI Agent → "tools/call" → MCP Server (do something specific)

It’s that simple! Every MCP conversation follows this pattern.
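
To make the pattern concrete, here is roughly what those three requests look like on the wire. MCP messages are JSON-RPC 2.0 payloads, so each carries a jsonrpc version, an id, a method, and optional params (the ids and clientInfo values below are purely illustrative):

# Illustrative JSON-RPC payloads for the three core MCP requests
# (ids and clientInfo values are made up for the example)
initialize_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2024-11-05",
        "clientInfo": {"name": "example-agent", "version": "1.0"},
        "capabilities": {}
    }
}

list_tools_request = {"jsonrpc": "2.0", "id": 2, "method": "tools/list"}

call_tool_request = {
    "jsonrpc": "2.0",
    "id": 3,
    "method": "tools/call",
    "params": {"name": "get_weather", "arguments": {"city": "london"}}
}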

Building the Server: Step by Step

Step 1: Setting Up the Foundation

import json
import logging
import sys
from http.server import HTTPServer, BaseHTTPRequestHandler
from datetime import datetime
import random

We start with Python’s standard libraries. The http.server module gives us everything we need to create a web server, while logging helps us see what’s happening under the hood.
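
The post doesn't pin down a particular logging configuration; something like the following basicConfig call is enough to get timestamped log lines on stderr (the level and format strings here are just one reasonable choice):

import logging

# Minimal logging setup so every handler can emit timestamped lines
logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s - %(levelname)s - %(message)s",
)
logger = logging.getLogger("mcp-weather-server")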

Step 2: Creating Our Weather Data

self.weather_data = {
    "london": {
        "temperature": 18,
        "humidity": 65,
        "condition": "Cloudy",
        "wind_speed": 12
    },
    "new_york": {
        "temperature": 22,
        "humidity": 58,
        "condition": "Sunny", 
        "wind_speed": 8
    },
    # ... more cities
}

For this demo, we’re using fake weather data. In a real application, you’d connect to an actual weather API. But fake data is perfect for learning - it’s predictable and always available!
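
For context, the handler methods in the next steps hang off a small server class. The full class isn't shown in this post, so treat this skeleton (the class name and server_info values are my own placeholders) as one way to hold the data those methods rely on:

# Hypothetical skeleton of the server class the handler methods belong to
class WeatherMCPServer:
    def __init__(self):
        # Advertised back to the agent during the "initialize" handshake
        self.server_info = {
            "name": "weather-mcp-server",   # placeholder name
            "version": "1.0.0"
        }
        # The fake weather database shown above
        self.weather_data = {
            "london": {"temperature": 18, "humidity": 65,
                       "condition": "Cloudy", "wind_speed": 12},
            "new_york": {"temperature": 22, "humidity": 58,
                         "condition": "Sunny", "wind_speed": 8},
        }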

Step 3: The MCP Protocol Handlers

Every MCP server needs to handle three core requests:

Initialize Request

def handle_initialize(self, request_data):
    response = {
        "jsonrpc": "2.0",
        "id": request_data.get("id"),
        "result": {
            "protocolVersion": "2024-11-05",
            "serverInfo": self.server_info,
            "capabilities": {"tools": {}}
        }
    }
    return response

This is like a handshake. The AI agent says “Hello, I want to use MCP” and our server responds with “Hi! I’m a weather server and here’s what I can do.”

List Tools Request

def handle_list_tools(self, request_data):
    tools = [{
        "name": "get_weather",
        "description": "Get current weather information for a city",
        "inputSchema": {
            "type": "object",
            "properties": {
                "city": {
                    "type": "string",
                    "description": "City name to get weather for"
                }
            },
            "required": ["city"]
        }
    }]
    # Wrap the tool list in the same JSON-RPC envelope used by handle_initialize
    return {
        "jsonrpc": "2.0",
        "id": request_data.get("id"),
        "result": {"tools": tools}
    }

This is where we advertise our capabilities. We’re telling the AI agent: “I have one tool called ‘get_weather’ that needs a city name as input.”

Call Tool Request

def handle_call_tool(self, request_data):
    params = request_data.get("params", {})
    tool_name = params.get("name")
    arguments = params.get("arguments", {})
    
    if tool_name == "get_weather":
        return self._get_weather(request_data, arguments)

    # Any other tool name is rejected with JSON-RPC error -32601
    # (see "Error Handling Done Right" below)
    return self._error_response(request_data.get("id"), -32601,
                                f"Unknown tool: {tool_name}")

This is where the actual work happens. The AI agent says “Please use the get_weather tool with city = ‘London’” and we respond with the weather data.

Step 4: The Weather Logic

def _get_weather(self, request_data, arguments):
    city = arguments.get("city", "").lower().strip()
    
    if city in self.weather_data:
        # Use our predefined data with some randomness
        base_data = self.weather_data[city].copy()
        base_data["temperature"] += random.randint(-3, 3)
        weather_info = {
            "city": city.title(),
            "timestamp": datetime.now().isoformat(),
            "weather": base_data
        }
    else:
        # Generate mock data for unknown cities
        weather_info = {
            "city": city.title(),
            "timestamp": datetime.now().isoformat(),
            "weather": {
                "temperature": random.randint(10, 30),
                "humidity": random.randint(40, 90),
                "condition": random.choice(["Sunny", "Cloudy", "Rainy"]),
                "wind_speed": random.randint(5, 20)
            },
            "note": "Mock data - city not in database"
        }

We check if we have data for the requested city. If we do, we use it (with small random variations to make it feel realistic). If not, we generate completely random weather data - but we’re honest about it being fake!
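
The snippet above stops after building weather_info. One way to turn that dictionary into the response the agent expects (matching the content format shown later in "Understanding the Response Format") is a small helper like this; the function name and text layout are my own choices, not the post's exact code:

# Hypothetical helper: wrap a weather_info dict in a JSON-RPC / MCP tool result
def format_weather_response(request_id, weather_info):
    weather = weather_info["weather"]
    text = (
        f"Weather for {weather_info['city']}:\n"
        f"Temperature: {weather['temperature']}°C\n"
        f"Condition: {weather['condition']}\n"
        f"Humidity: {weather['humidity']}%\n"
        f"Wind Speed: {weather['wind_speed']} km/h\n"
        f"Last Updated: {weather_info['timestamp']}"
    )
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "result": {"content": [{"type": "text", "text": text}]}
    }

Ending _get_weather with return format_weather_response(request_data.get("id"), weather_info) would then produce exactly the kind of response shown a few sections below.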

Step 5: HTTP Server Setup

class MCPRequestHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the request
        content_length = int(self.headers.get('Content-Length', 0))
        request_body = self.rfile.read(content_length).decode('utf-8')
        
        # Parse JSON
        request_data = json.loads(request_body)
        
        # Route to appropriate handler
        method = request_data.get("method")
        if method == "initialize":
            response = self.mcp_server.handle_initialize(request_data)
        elif method == "tools/list":
            response = self.mcp_server.handle_list_tools(request_data)
        elif method == "tools/call":
            response = self.mcp_server.handle_call_tool(request_data)
        else:
            # Unknown MCP method -> standard JSON-RPC "method not found" error
            response = self.mcp_server._error_response(
                request_data.get("id"), -32601, f"Method not found: {method}"
            )

        # Send response
        self._send_json_response(response)

This is the “plumbing” that connects HTTP requests to our MCP handlers. When a request comes in, we parse the JSON, figure out what the AI agent wants, and route it to the right handler.
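
Two pieces are referenced above but never shown: _send_json_response and the code that actually starts the server. A sketch of both could look like this, reusing the imports from Step 1 and the skeleton class from earlier; the port matches the curl examples below, but the details are an assumption rather than the post's exact code:

# Sketch of the missing plumbing (assumed, not the post's exact code)

# A method on MCPRequestHandler: serialize the response and write it out
def _send_json_response(self, response):
    body = json.dumps(response).encode("utf-8")
    self.send_response(200)
    self.send_header("Content-Type", "application/json")
    self.send_header("Content-Length", str(len(body)))
    self.end_headers()
    self.wfile.write(body)

# Module-level startup: share one server instance across requests
def run():
    MCPRequestHandler.mcp_server = WeatherMCPServer()
    httpd = HTTPServer(("localhost", 8080), MCPRequestHandler)
    httpd.serve_forever()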

The Magic of Detailed Logging

One of the best features of our server is the extensive logging. Here’s what you’ll see:

Incoming POST request from 127.0.0.1
MCP method: tools/call
Tool called: get_weather
Tool arguments: {"city": "london"}
Getting weather for city: london
Weather data found for london
Response: {"result": {"content": [{"text": "Weather for London..."}]}}

This makes debugging incredibly easy and helps you understand exactly what’s happening.
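
Log lines like those don't appear on their own; they come from explicit logger calls sprinkled through the handlers. The exact wording and placement are up to you, but the idea is roughly:

# Inside do_POST (illustrative logging calls)
logger.info(f"Incoming POST request from {self.client_address[0]}")
logger.info(f"MCP method: {method}")

# Inside handle_call_tool
logger.info(f"Tool called: {tool_name}")
logger.info(f"Tool arguments: {json.dumps(arguments)}")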

Testing Your Server

Once you implement the server following the patterns shown above, you can test it with curl commands:

# Health check
curl http://localhost:8080/health

# Get weather for London
curl -X POST http://localhost:8080 \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0",
    "id": 3,
    "method": "tools/call",
    "params": {
      "name": "get_weather",
      "arguments": {"city": "london"}
    }
  }'
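
If you prefer to drive the server from Python instead of curl, the standard library's urllib is enough. This little test client (the helper name is mine) walks through the full initialize → tools/list → tools/call sequence against the same localhost:8080 endpoint:

import json
import urllib.request

# Tiny test client (assumes the server is running on localhost:8080)
def send_mcp_request(payload):
    req = urllib.request.Request(
        "http://localhost:8080",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))

print(send_mcp_request({"jsonrpc": "2.0", "id": 1, "method": "initialize", "params": {}}))
print(send_mcp_request({"jsonrpc": "2.0", "id": 2, "method": "tools/list"}))
print(send_mcp_request({
    "jsonrpc": "2.0",
    "id": 3,
    "method": "tools/call",
    "params": {"name": "get_weather", "arguments": {"city": "london"}},
}))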

Understanding the Response Format

When you request weather for London, you’ll get back something like this:

{
  "jsonrpc": "2.0",
  "id": 3,
  "result": {
    "content": [
      {
        "type": "text",
        "text": "Weather for London:\nTemperature: 20°C\nCondition: Cloudy\nHumidity: 68%\nWind Speed: 14 km/h\nLast Updated: 2024-06-11T10:30:00"
      }
    ]
  }
}

The MCP specification requires responses to be wrapped in a content array. This allows for rich responses with different content types (text, images, etc.).
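
For this server a single text item is all we need, but the same array can carry other content types. As a rough example of what a mixed result could look like (the image fields reflect my reading of the MCP spec, so double-check them before relying on this shape):

# Illustrative mixed-content result (shape based on my reading of the MCP spec)
mixed_result = {
    "content": [
        {"type": "text", "text": "Weather for London: 20°C, Cloudy"},
        {
            "type": "image",
            "data": "<base64-encoded PNG bytes>",   # placeholder, not real data
            "mimeType": "image/png"
        }
    ]
}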

Error Handling Done Right

Our server handles errors gracefully:

def _error_response(self, request_id, code, message):
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "error": {
            "code": code,
            "message": message
        }
    }

Common error scenarios (the missing-parameter case is sketched right after this list):

  • Unknown MCP method → Code -32601
  • Invalid tool name → Code -32601
  • Missing required parameters → Code -32602
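
The unknown-method and unknown-tool cases are already handled in the routing code above. The missing-parameter case could be guarded right at the top of _get_weather, something like this (the placement is my suggestion, not the post's exact code):

# At the top of _get_weather: reject calls without a usable "city" argument
city = arguments.get("city", "").lower().strip()
if not city:
    return self._error_response(request_data.get("id"), -32602,
                                "Missing required parameter: city")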

Next Steps

Now that you understand the basics:

  1. Try modifying the weather data
  2. Add a new tool, maybe a joke generator (see the sketch after this list)
  3. Connect to a real API instead of using mock data
  4. Experiment with different response formats
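
Adding a second tool only takes two changes: advertise it in handle_list_tools and route it in handle_call_tool. A rough sketch of a joke tool (the tool name and jokes are obviously mine) might look like:

# 1. In handle_list_tools: advertise the new tool by appending to the tools list
tools.append({
    "name": "tell_joke",
    "description": "Return a random programming joke",
    "inputSchema": {"type": "object", "properties": {}}
})

# 2. In handle_call_tool: route calls to the new tool
if tool_name == "tell_joke":
    joke = random.choice([
        "Why do programmers prefer dark mode? Because light attracts bugs.",
        "There are 10 kinds of people: those who understand binary and those who don't."
    ])
    return {
        "jsonrpc": "2.0",
        "id": request_data.get("id"),
        "result": {"content": [{"type": "text", "text": joke}]}
    }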

The MCP ecosystem is growing rapidly, and understanding how to build servers puts you at the forefront of AI integration technology.

Key Takeaways

  1. MCP is simple - just three main request types to handle
  2. Logging is crucial - you need to see what’s happening
  3. Error handling matters - AI agents need clear error messages
  4. Start simple - you can always add complexity later

Remember: the best way to learn MCP is to build with it. Start simple, add logging, and watch how the protocol flows. Before you know it, you’ll be building sophisticated AI-powered integrations.

Happy coding!

