
Building an MCP Server Using Python, Docker, and Claude Code

As AI systems evolve from simple text generators to autonomous agents, the need for structured interaction between models and real-world tools has become critical. This article explores how the Model Context Protocol (MCP) enables this shift by standardizing communication between AI models and external systems, and how it compares to traditional approaches like REST APIs.

1. Introduction

MCP (Model Context Protocol) is a modern, AI-native protocol designed to standardize how large language models (LLMs) like Claude interact with external tools, APIs, databases, and services. Instead of treating the model as an isolated text generator, MCP enables it to function as an intelligent orchestrator that can reason, decide, and take actions in the real world. At its core, MCP defines a structured way for:

  • Providing context to the model (memory, files, state)
  • Invoking tools (functions, APIs, services)
  • Receiving structured outputs (not just plain text)

This makes MCP a foundational building block for creating agentic systems—where AI doesn’t just respond, but actively performs tasks.

1.1 Why should we use MCP?

  • Standardization: Instead of writing custom glue code for every tool integration, MCP provides a uniform interface for connecting models with external systems.
  • Agent Enablement: MCP allows models to “think → decide → act”, enabling autonomous workflows such as data fetching, processing, and execution.
  • Tool Abstraction: Developers can define tools once and reuse them across multiple AI applications or models.
  • Context Handling: Structured context sharing (files, variables, session state) improves response quality and continuity.
  • Interoperability: Works across different models and ecosystems, reducing vendor lock-in.

1.2 Pros

  • Separation of Concerns: Clear distinction between reasoning (LLM) and execution (tools/services).
  • Reusability: Tools can be reused across multiple agents and workflows without rewriting logic.
  • Scalability: Easily extend systems by adding new tools instead of modifying core logic.
  • Better Observability: Structured tool calls make it easier to log, monitor, and debug AI behavior.
  • Security Control: Fine-grained control over what tools an AI can access.
  • Agent-Friendly: Naturally aligns with modern agent frameworks and multi-step reasoning systems.

1.3 Cons

  • Learning Curve: Requires understanding of tool schemas, context passing, and protocol design.
  • Overhead: Adds an abstraction layer which may feel heavy for simple use-cases.
  • Ecosystem Maturity: Still evolving, with limited standardization across all vendors.
  • Latency: Multiple tool calls in a workflow can introduce additional response time.
  • Debug Complexity: Multi-step reasoning + tool execution can be harder to trace compared to simple APIs.

1.4 MCP vs REST API

Feature | MCP | REST API
Purpose | AI tool + context communication | General client-server communication
Interaction Style | Model-driven (AI decides when to call tools) | Client-driven (developer explicitly calls endpoints)
Structure | Tool-based protocol (functions, schemas) | Resource-based endpoints (URLs)
AI Native | Yes (built for LLM workflows) | No (adapted for AI usage)
State Handling | Supports contextual and conversational state | Typically stateless (unless managed separately)
Flexibility | High for dynamic, multi-step reasoning tasks | High for general CRUD operations
Use Case | Agents, copilots, autonomous workflows | Web apps, microservices, integrations
Example | AI decides to call “getWeather” tool automatically | Frontend explicitly calls the /weather endpoint

2. What is Claude Code?

Claude Code is a developer-centric interface and toolkit from Anthropic that enables direct interaction with Claude models from within your development environment (such as the terminal, IDEs, or local tooling). It is designed to bring AI-assisted development closer to where engineers actually work, eliminating the need to switch between tools.

Unlike traditional chat interfaces, Claude Code focuses on deep integration with your codebase, allowing the model to understand project structure, modify files, and execute multi-step development tasks.

2.1 Key Capabilities

  • Code Generation: Write functions, classes, APIs, and even full modules based on natural language prompts.
  • Debugging Assistance: Identify bugs, suggest fixes, and explain root causes in existing code.
  • Codebase Understanding: Analyze large repositories and provide contextual explanations across multiple files.
  • Refactoring: Improve code quality, restructure logic, and apply best practices automatically.
  • Script Automation: Generate scripts for DevOps, data processing, and automation tasks.
  • Test Generation: Create unit and integration tests with meaningful coverage.

2.2 Developer Experience

  • IDE/Terminal Integration: Works alongside your existing workflow without requiring a separate UI.
  • Context Awareness: Can read files, understand dependencies, and maintain session context.
  • Iterative Development: Supports back-and-forth refinement, making it ideal for complex tasks.
  • File-Level Operations: Can suggest or apply edits directly to specific files.

2.3 Claude Code + MCP (Why it matters)

When combined with MCP (Model Context Protocol), Claude Code evolves from a coding assistant into an intelligent development agent.

  • Dynamic Tool Usage: Claude can discover and invoke tools exposed via an MCP server (e.g., database queries, API calls, file processors).
  • Real-World Actions: Instead of just suggesting code, it can execute workflows—like fetching data, transforming it, and generating output.
  • Context Injection: MCP allows structured context (files, configs, environment data) to be passed to Claude, improving accuracy.
  • Automation Pipelines: Enables multi-step operations such as:
    • Read project config
    • Call a build tool
    • Analyze output
    • Fix issues automatically
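
In practice, the dynamic tool usage described above requires telling Claude Code where your MCP server lives. One common way is a project-level .mcp.json file. The sketch below is indicative only: the server name, URL, and transport type are assumptions based on the example built later in this article, and the exact fields supported depend on your Claude Code version.

{
  "mcpServers": {
    "demo-server": {
      "type": "http",
      "url": "http://localhost:8000/mcp"
    }
  }
}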

3. Code Example

3.1 MCP Server (Python Backend)

3.1.1 server/app.py

This file defines the core MCP server using Flask, exposing an endpoint that dynamically routes incoming tool requests to the appropriate functions.

from flask import Flask, request, jsonify
from tools import get_weather, add_numbers

app = Flask(__name__)

@app.route("/mcp", methods=["POST"])
def mcp():
    data = request.json
    tool = data.get("tool")
    params = data.get("params", {})

    if tool == "weather":
        result = get_weather(params.get("city"))
    elif tool == "add":
        result = add_numbers(params.get("a"), params.get("b"))
    else:
        return jsonify({"error": "Unknown tool"}), 400

    return jsonify({
        "tool": tool,
        "result": result
    })

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)

This Python code defines a simple MCP-compatible server using Flask that exposes a single POST endpoint at /mcp. It begins by importing necessary modules, including Flask utilities and two custom tool functions: get_weather and add_numbers. The Flask application is initialized, and a route is defined to handle incoming MCP requests. When a request is received, the server extracts the JSON payload, identifies the requested tool, and reads any associated params. Based on the tool name, it conditionally executes the corresponding function—calling get_weather when the tool is “weather” (passing the city parameter), or add_numbers when the tool is “add” (passing numerical inputs). If the tool is not recognized, the server returns an error response with a 400 status code. For valid requests, the server responds with a structured JSON object containing the tool name and the computed result. Finally, the application is configured to run on all network interfaces at port 8000, making it accessible for external MCP clients such as Claude Code to invoke these tools dynamically.
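
To sanity-check the endpoint before wiring in Claude, you can call it directly with a small Python script (this assumes the server above is already running locally on port 8000):

import requests

# Send a tool invocation directly to the /mcp endpoint defined in app.py
resp = requests.post(
    "http://localhost:8000/mcp",
    json={"tool": "weather", "params": {"city": "Delhi"}},
)

print(resp.status_code)   # 200 for a known tool, 400 otherwise
print(resp.json())        # {"tool": "weather", "result": {"city": "Delhi", "temperature": "30°C", "condition": "Sunny"}}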

3.1.2 server/tools.py

This file contains the tool implementations that the MCP server can invoke, providing simple functions for fetching weather data and performing arithmetic operations.

def get_weather(city):
    return {
        "city": city,
        "temperature": "30°C",
        "condition": "Sunny"
    }


def add_numbers(a, b):
    return {
        "a": a,
        "b": b,
        "sum": a + b
    }

This code defines two simple utility functions that act as tools for the MCP server. The get_weather function takes a city as input and returns a mock weather response containing the city name, a fixed temperature value, and a weather condition, simulating a real-world API response without making an external call. The add_numbers function accepts two inputs a and b, performs a basic addition operation, and returns a structured JSON-like dictionary containing both input values along with their computed sum. These functions are intentionally lightweight and deterministic, making them ideal examples of how tools can be defined and exposed in an MCP architecture, where the main server dynamically invokes them based on the requested tool name.
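
If you later want get_weather to call a real service instead of returning mock data, the change stays local to tools.py. The sketch below is illustrative only: the endpoint URL, query parameters, and response fields are hypothetical placeholders for whichever weather provider you actually use (you would also need to add requests to requirements.txt).

import requests

def get_weather(city):
    # Hypothetical weather API; replace the URL and field names with your provider's
    resp = requests.get(
        "https://api.example-weather.test/v1/current",
        params={"q": city},
        timeout=5,
    )
    resp.raise_for_status()
    data = resp.json()
    return {
        "city": city,
        "temperature": data.get("temperature"),
        "condition": data.get("condition"),
    }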

3.1.3 server/requirements.txt

This file lists the project dependencies, ensuring the correct version of Flask is installed for running the MCP server.

flask==3.0.0

3.1.4 server/Dockerfile

This Dockerfile defines how to containerize the MCP server, setting up the Python environment, installing dependencies, and configuring the application to run on port 8000.

FROM python:3.11-slim

WORKDIR /app

COPY requirements.txt .
RUN pip install -r requirements.txt

COPY . .

EXPOSE 8000

CMD ["python", "app.py"]

The requirements.txt file specifies the project dependency, in this case pinning flask==3.0.0 to ensure consistent behavior across environments by installing a fixed version of the Flask framework. The accompanying Dockerfile defines how to containerize the MCP server application using a lightweight Python 3.11 base image. It sets the working directory to /app, copies the dependency file, and installs the required packages using pip. Next, it copies the entire application code into the container, exposes port 8000 to allow external access, and finally defines the default command to run the Flask application using python app.py. Together, these files enable reproducible builds and make it easy to deploy the MCP server consistently across different environments using Docker.
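
If you prefer to build and run the image directly, without Docker Compose (Compose is shown later), the equivalent Docker CLI commands from the project root would be as follows (the mcp-server image tag is just an arbitrary name):

docker build -t mcp-server ./server
docker run --rm -p 8000:8000 mcp-server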

3.1.5 MCP Server (Declarative Approach using FastMCP)

While the previous example demonstrates how to build an MCP server manually using Flask and explicit routing logic, modern MCP libraries allow you to define tools declaratively, reducing boilerplate and letting the framework handle tool registration and dispatch automatically. Below is the same MCP server implemented using a declarative MCP-style library:

from fastmcp import FastMCP

mcp = FastMCP(name="demo-server")

@mcp.tool()
def weather(city: str):
    return {
        "city": city,
        "temperature": "30°C",
        "condition": "Sunny"
    }

@mcp.tool()
def add(a: int, b: int):
    return {
        "a": a,
        "b": b,
        "sum": a + b
    }

if __name__ == "__main__":
    # Serve over HTTP so external clients can reach the tools; the transport
    # name ("http", "streamable-http", or "sse") depends on your FastMCP version.
    mcp.run(transport="http", host="0.0.0.0", port=8000)

In this declarative approach, tools are defined as plain Python functions annotated with @mcp.tool(), eliminating the manual request parsing, routing, and conditional logic of the Flask version. The framework automatically registers the tools and exposes them via MCP, validates input schemas based on the function signatures, handles request routing and response formatting, and remains compatible with MCP clients such as Claude.
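
To exercise the declarative server, FastMCP also ships a client. The sketch below assumes FastMCP 2.x and that the server above is running over an HTTP transport at the URL shown; the exact URL path and the shape of the returned result object can differ between versions.

import asyncio
from fastmcp import Client

async def main():
    # Connect to the running FastMCP server (assumed URL; adjust to your transport)
    async with Client("http://localhost:8000/mcp") as client:
        tools = await client.list_tools()
        print("Available tools:", [t.name for t in tools])

        result = await client.call_tool("add", {"a": 15, "b": 27})
        print("Result:", result)  # exact structure depends on the fastmcp version

asyncio.run(main())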

3.2 Claude MCP Client (AI Agent Layer)

3.2.1 client/claude_client.py

This file implements the Claude client that connects the AI model with the MCP server, enabling it to send prompts, interpret tool calls, and execute them via HTTP requests.

import anthropic
import requests
import json

MCP_URL = "http://localhost:8000/mcp"

client = anthropic.Anthropic(
    api_key="YOUR_ANTHROPIC_API_KEY"
)

# Call MCP server
def call_mcp(tool, params):
    res = requests.post(MCP_URL, json={
        "tool": tool,
        "params": params
    })
    return res.json()


tools = [
    {
        "name": "weather",
        "description": "Get weather for a city",
        "input_schema": {
            "type": "object",
            "properties": {
                "city": {"type": "string"}
            }
        }
    },
    {
        "name": "add",
        "description": "Add two numbers",
        "input_schema": {
            "type": "object",
            "properties": {
                "a": {"type": "number"},
                "b": {"type": "number"}
            }
        }
    }
]


message = client.messages.create(
    model="claude-3-opus-20240229",
    max_tokens=500,
    tools=tools,
    messages=[
        {
            "role": "user",
            "content": "Tell me weather of Delhi and add 15 and 27"
        }
    ]
)

# Handle tool calls
for block in message.content:
    if block.type == "tool_use":
        tool_name = block.name
        params = block.input

        result = call_mcp(tool_name, params)

        print("\nTool Used:", tool_name)
        print("Result:", json.dumps(result, indent=2))

This client-side Python script demonstrates how to integrate an AI model with an MCP server to enable dynamic tool usage. It begins by importing required libraries, including the Anthropic SDK for interacting with Claude, requests for making HTTP calls, and json for formatting responses. The MCP_URL points to the locally running MCP server endpoint. An Anthropic client is initialized using an API key, which allows sending prompts to the Claude model. The call_mcp function is defined to send a POST request to the MCP server with the selected tool name and parameters, returning the JSON response. Next, a list of tool definitions is created, where each tool (e.g., “weather” and “add”) includes a name, description, and input schema—this schema helps Claude understand how to call the tool correctly. The script then sends a user query to Claude using client.messages.create, passing the available tools so the model can decide when to invoke them. Instead of returning only text, Claude may respond with structured tool calls. The script iterates through the response blocks, detects any tool_use actions, extracts the tool name and input parameters, and forwards them to the MCP server using the call_mcp function. Finally, it prints the tool used and the result returned by the server. This demonstrates a full loop where the AI model reasons about the task, decides which tools to use, and delegates execution to an external MCP server.
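
The script stops after printing the tool results, but in a full agent loop you would normally feed those results back to Claude so it can compose a final natural-language answer. A possible continuation, following the Anthropic tool-use flow (this is an addition to the article's script, not part of it):

# Collect tool results in the format the Messages API expects
tool_results = []
for block in message.content:
    if block.type == "tool_use":
        result = call_mcp(block.name, block.input)
        tool_results.append({
            "type": "tool_result",
            "tool_use_id": block.id,
            "content": json.dumps(result),
        })

if tool_results:
    # Send the original exchange plus the tool results back to Claude
    follow_up = client.messages.create(
        model="claude-3-opus-20240229",
        max_tokens=500,
        tools=tools,
        messages=[
            {"role": "user", "content": "Tell me weather of Delhi and add 15 and 27"},
            {"role": "assistant", "content": message.content},
            {"role": "user", "content": tool_results},
        ],
    )
    # The response should now contain plain text summarizing both results
    print(follow_up.content[0].text)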

3.2.2 docker-compose.yml

This file defines the Docker Compose configuration to build and run the MCP server service, exposing it on port 8000 for external access.

version: "3.9"

services:
  mcp-server:
    build: ./server
    ports:
      - "8000:8000"

This docker-compose.yml file defines a simple multi-container setup (in this case, a single service) to run the MCP server using Docker Compose. It specifies the Compose file format version 3.9 and declares a service named mcp-server. The build directive points to the ./server directory, indicating that Docker should build the image using the Dockerfile located there. The ports configuration maps port 8000 of the container to port 8000 on the host machine, allowing external applications—such as the Claude client—to communicate with the MCP endpoint via http://localhost:8000/mcp. This setup simplifies running and managing the server by encapsulating build and runtime configuration in a single file, making it easy to start the service with a single command like docker-compose up.

3.3 Run the System

3.3.1 Start MCP Server

This command builds and starts the MCP server using Docker Compose, making the API available at the specified local endpoint.

docker compose up --build

Server runs at:

http://localhost:8000/mcp

This step explains how to start the MCP server using Docker Compose. The command docker compose up --build triggers Docker to first build the server image from the Dockerfile (if not already built or if there are changes) and then start the containerized service defined in the docker-compose.yml file. Once the container is up and running, the Flask-based MCP server begins listening for incoming HTTP requests on port 8000. The endpoint http://localhost:8000/mcp serves as the main interface where client applications—such as the Claude client—can send tool invocation requests in JSON format. This setup ensures the server is isolated, reproducible, and easily deployable across environments while remaining accessible via a standard local URL.

3.3.2 Run Claude Client

This command runs the Claude client script, which sends a prompt to the model and interacts with the MCP server to execute tool calls.

cd client
python claude_client.py

This step explains how to run the Claude client that interacts with the MCP server. The command cd client navigates into the directory containing the client script, and python claude_client.py executes the program. When run, the script sends a user prompt to the Claude model along with the available tool definitions. Based on the query, the model may decide to invoke one or more tools, such as fetching weather data or performing a calculation. The client captures these tool calls, forwards them to the MCP server via HTTP requests, and receives the results in response. It then prints the tool usage and corresponding output to the console, effectively demonstrating how an AI model can dynamically delegate tasks to external services through the MCP architecture.
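
One practical note before running it: rather than hardcoding the API key as in the example, the Anthropic SDK will read it from the ANTHROPIC_API_KEY environment variable, which keeps secrets out of source control:

# Set the key in your shell before running the client:
#   export ANTHROPIC_API_KEY="sk-ant-..."
import anthropic

client = anthropic.Anthropic()  # picks up ANTHROPIC_API_KEY automatically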

3.4 Code Output

User input to Claude: Tell me weather of Delhi and add 15 and 27. Claude decides which tools to invoke, and the client prints output like:

Tool Used: weather
Result:
{
  "city": "Delhi",
  "temperature": "30°C",
  "condition": "Sunny"
}

Tool Used: add
Result:
{
  "a": 15,
  "b": 27,
  "sum": 42
}

This example demonstrates how the end-to-end MCP workflow operates when a user provides a natural language query. The input Tell me weather of Delhi and add 15 and 27 is sent to the Claude model along with the available tool definitions. Instead of responding with plain text, Claude intelligently interprets the request and decides to invoke two tools: first the weather tool to fetch weather details for Delhi, and then the add tool to compute the sum of the numbers. Each tool call is structured and executed via the MCP server, which processes the request and returns JSON responses. The client then prints these results, clearly showing which tool was used and the corresponding output. This highlights the key strength of MCP—allowing the AI model to break down a complex query into multiple actionable steps, delegate execution to external tools, and return structured, reliable results instead of just generating text.

4. Conclusion

MCP provides a powerful abstraction layer for connecting AI models with tools and services. When combined with Python for backend logic, Docker for deployment, and Claude Code for intelligent orchestration, it enables a scalable and production-ready AI agent architecture. While REST APIs remain the backbone of traditional systems, MCP is emerging as a specialized protocol for AI-native applications, where models are not just consumers of data—but active participants in workflows.

Yatin Batra

An experienced full-stack engineer, well versed in Core Java, Spring/Spring Boot, MVC, Security, AOP, frontend frameworks (Angular and React), and cloud technologies (such as AWS, GCP, Jenkins, Docker, and Kubernetes).