2025 EDITION: The Illustrated Guidebook
Open the link below to start the assessment. It will only take 2 minutes to complete.
https://bit.ly/mcp-assessment
Model Context Protocol (MCP)
What is MCP?
Imagine you only know English. To get information from people who each speak a different language, you would need a separate translator for each one. MCP plays the role of a universal translator.
It lets you (the Agent) talk to other people (tools and other capabilities) through a single interface.
If AI models need to access real-time information, they must use external tools and resources.
If you had three AI applications and three external tools, you might end up writing nine different integration modules (each AI × each tool) because there was no common standard. This doesn’t scale.
Developers of AI apps were essentially reinventing the wheel each time, and tool
providers had to support multiple incompatible APIs to reach different AI
platforms.
The problem
Before MCP, the landscape of connecting AI to external data and actions looked
like a patchwork of one-off solutions.
Either you hard-coded logic for each tool, managed brittle prompt chains, or relied on vendor-specific plugin frameworks.
The diagram below illustrates this complexity: each AI (each “Model”) might
require unique code to connect to each external service (database, filesystem,
calculator, etc.), leading to spaghetti-like interconnections.
The solution
MCP tackles this by introducing a standard interface in the middle. Instead of M
× N direct integrations, we get M + N implementations: each of the M AI
applications implements the MCP client side once, and each of the N tools
implements an MCP server once.
Now everyone speaks the same “language”, so to speak, and a new pairing doesn’t
require custom code since they already understand each other via MCP.
● On the left (pre-MCP), every model had to wire into every tool.
● On the right (with MCP), each model and tool connects to the MCP layer,
drastically simplifying connections. You can also relate this to the
translator example we discussed earlier.
MCP follows a familiar client-server architecture, but the terminology is tailored to the AI context. There are three main roles to understand: the Host, the Client, and the Server.
Host
The Host is the user-facing AI application, the environment where the AI model
lives and interacts with the user.
The Host initiates connections to the available MCP servers when the system needs them. It captures the user's input, keeps the conversation history, and displays the model’s replies.
Client
The MCP Client is a component within the Host that handles the low-level
communication with an MCP Server.
Think of the Client as the adapter or messenger. While the Host decides what to
do, the Client knows how to speak MCP to actually carry out those instructions
with the server.
Server
The MCP Server is the external program or service that actually provides the
capabilities (tools, data, etc.) to the application.
Servers can run locally on the same machine as the Host or remotely on some
cloud service since MCP is designed to support both scenarios seamlessly. The
key is that the Server advertises what it can do in a standard format (so the client
can query and understand available tools) and will execute requests coming from
the client, then return results.
Tools
Tools are what they sound like: functions that do something on behalf of the AI
model. These are typically operations that can have effects or require
computation beyond the AI’s own capabilities.
Importantly, Tools are usually triggered by the AI model’s choice, which means
the LLM (via the host) decides to call a tool when it determines it needs that
functionality.
Suppose we have a simple tool for weather. In an MCP server’s code, it might
look like:
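Here is a minimal sketch of such a tool, assuming the FastMCP class from the official MCP Python SDK (the returned values are illustrative; a real server would call a weather API):

```python
# A minimal sketch of the weather tool, assuming the FastMCP class from
# the official MCP Python SDK; the returned values are illustrative.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("weather-demo")

@mcp.tool()
def get_weather(location: str) -> dict:
    """Get the current weather for a location."""
    # A real implementation would call a weather API here
    return {"location": location, "temperature": "18C", "conditions": "Sunny"}

if __name__ == "__main__":
    mcp.run(transport="stdio")  # serve over stdio for local hosts
```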
This Python function, registered with @mcp.tool(), can be invoked by the AI via
MCP.
When the AI calls tools/call with name "get_weather" and {"location": "San
Francisco"} as arguments, the server will execute get_weather("San Francisco")
and return the dictionary result.
The client will receive that JSON result and make it available to the AI. Notice the tool returns structured data (temperature, conditions), which the AI can then use directly or verbalize in its response.
Since tools can do things like file I/O or network calls, an MCP implementation
often requires that the user permit a tool call.
For example, Claude’s client might pop up “The AI wants to use the ‘get_weather’
tool, allow yes/no?” the first time, to avoid abuse. This ensures the human stays in
control of powerful actions.
Tools are analogous to “functions” in classic function calling, but under MCP,
they are used in a more flexible, dynamic context. They are model-controlled but
developer/governance-approved in execution.
Resources
Resources provide read-only data to the AI model.
These are like databases or knowledge bases that the AI can query to get
information, but not modify.
Unlike tools, resources typically do not involve heavy computation or side effects, since they are often just information lookups.
Another key difference is that resources are usually accessed under the host
application’s control (not spontaneously by the model). In practice, this might
mean the Host knows when to fetch a certain context for the model.
For instance, if a user says, “Use the company handbook to answer my question,”
the Host might call a resource that retrieves relevant handbook sections and
feeds them to the model.
Resources could include a local file’s contents, a snippet from a knowledge base
or documentation, a database query result (read-only), or any static data like
configuration info.
The AI (or Host) could ask the server for resources/read with a URI like file:///home/user/notes.txt, and the server would call read_file("/home/user/notes.txt") and return the text.
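A minimal sketch of how such a resource might be registered, again assuming FastMCP (the file://{path} URI template is illustrative):

```python
# A sketch of a read-only file resource, assuming FastMCP's URI-template
# support; the file://{path} template is illustrative.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("resource-demo")

@mcp.resource("file://{path}")
def read_file(path: str) -> str:
    """Return the contents of a local text file (read-only)."""
    with open(path, "r", encoding="utf-8") as f:
        return f.read()
```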
Notice that resources are usually identified by some identifier (like a URI or
name) rather than being free-form functions.
They are also often application-controlled, meaning the app decides when to
retrieve them (to avoid the model just reading everything arbitrarily).
From a safety standpoint, since resources are read-only, they are less dangerous,
but still, one must consider privacy and permissions (the AI shouldn’t read files
it’s not supposed to).
The Host can regulate which resource URIs it allows the AI to access, or the
server might restrict access to certain data.
In summary, Resources give the AI knowledge without handing over the keys to
change anything.
They’re the MCP equivalent of giving the model reference material when needed,
which acts like a smarter, on-demand retrieval system integrated through the
protocol.
Prompts
Prompts in the MCP context are a special concept: they are predefined prompt
templates or conversation flows that can be injected to guide the AI’s behavior.
Think of recurring patterns: e.g., a prompt that sets up the system role as “You
are a code reviewer,” and the user’s code is inserted for analysis.
Rather than hardcoding that in the host application, the MCP server can supply
it.
The model doesn’t spontaneously decide to use prompts the way it does tools.
Rather, the prompt sets the stage before the model starts generating. In that
sense, prompts are often fetched at the beginning of an interaction or when the
user chooses a specific “mode”.
Suppose we have a prompt template for code review. The MCP server might have:
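A hedged sketch of such a prompt function, assuming FastMCP's prompt support (the exact message representation may differ by SDK version; the role/content dicts below follow the OpenAI-style format the text describes):

```python
# A hedged sketch, assuming FastMCP's prompt support; the message
# representation may differ by SDK version.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("prompts-demo")

@mcp.prompt()
def code_review(code: str) -> list[dict]:
    """Set up a code-review conversation around the supplied code."""
    return [
        {"role": "system", "content": "You are a code reviewer."},
        {"role": "user", "content": f"Please review this code:\n\n{code}"},
    ]
```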
This prompt function returns a list of message objects (in OpenAI format) that
set up a code review scenario.
When the host invokes this prompt, it gets those messages and can insert the
actual code to be reviewed into the user content.
Then it provides these messages to the model before the model’s own answer.
Essentially, the server is helping to structure the conversation.
While we have personally not seen much applicability of this yet, common use
cases for prompt capabilities include things like “brainstorming guide,”
“step-by-step problem solver template,” or domain-specific system roles.
By having them on the server, they can be updated or improved without changing
the client app, and different servers can offer different specialized prompts.
An important point to note here is that prompts, as a capability, blur the line
between data and instructions.
In a way, MCP prompts are similar to how ChatGPT plugins can suggest how to
format a query, but here it’s standardized and discoverable via the protocol.
MCP Projects
Project 1: A fully local MCP client (LlamaIndex + Deepseek-R1)
Tech stack: LlamaIndex, Ollama (Deepseek-R1), and a SQLite MCP server.
Workflow:
For this demo, we've built a simple SQLite server with two tools:
● add data
● fetch data
This is done to keep things simple, but the client we're building can connect to
any MCP server out there.
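A hedged sketch of what such a server could look like, assuming FastMCP (the people table and tool signatures are made up for illustration):

```python
# A hedged sketch of the SQLite MCP server; FastMCP is assumed and the
# "people" table is made up for illustration.
import sqlite3
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("sqlite-demo")
DB_PATH = "demo.db"

@mcp.tool()
def add_data(name: str, age: int) -> bool:
    """Insert a row into the people table."""
    with sqlite3.connect(DB_PATH) as conn:
        conn.execute("CREATE TABLE IF NOT EXISTS people (name TEXT, age INTEGER)")
        conn.execute("INSERT INTO people VALUES (?, ?)", (name, age))
    return True

@mcp.tool()
def fetch_data() -> list:
    """Return all rows from the people table."""
    with sqlite3.connect(DB_PATH) as conn:
        return conn.execute("SELECT * FROM people").fetchall()

if __name__ == "__main__":
    mcp.run(transport="sse")  # serve over SSE so a local client can connect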
We'll use a locally served Deepseek-R1 via Ollama as the LLM for our
MCP-powered agent.
We define our agent’s guiding instructions to use tools before answering user
queries.
We define a function that builds a typical LlamaIndex agent with its appropriate
arguments.
The tools passed to the agent are MCP tools, which llama_index wraps as native
tools that can be easily used by our FunctionAgent.
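A hedged sketch of that factory, assuming LlamaIndex's MCP tool spec (llama-index-tools-mcp), FunctionAgent, and Ollama wrapper; the server URL and system prompt are illustrative:

```python
# A hedged sketch of the agent factory; llama_index's MCP tool spec,
# FunctionAgent, and Ollama wrapper are assumed, and the prompt is abridged.
from llama_index.core.agent.workflow import FunctionAgent
from llama_index.llms.ollama import Ollama
from llama_index.tools.mcp import BasicMCPClient, McpToolSpec

SYSTEM_PROMPT = "You are an AI assistant. Use the tools before answering the user."

async def get_agent() -> FunctionAgent:
    # Connect to the (illustrative) local SQLite MCP server over SSE
    mcp_client = BasicMCPClient("http://127.0.0.1:8000/sse")
    # Wrap the server's MCP tools as native LlamaIndex tools
    tools = await McpToolSpec(client=mcp_client).to_tool_list_async()
    return FunctionAgent(
        name="Agent",
        description="An agent that can work with our database.",
        tools=tools,
        llm=Ollama(model="deepseek-r1", request_timeout=120.0),
        system_prompt=SYSTEM_PROMPT,
    )
```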
We pass user messages to our FunctionAgent with a shared Context for memory, stream its tool calls, and return its reply. All of the chat history and tool calls are managed here, as sketched below.
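A hedged sketch of the handler, assuming LlamaIndex's workflow Context and agent event types:

```python
# A hedged sketch of the message handler; Context and the event types
# come from llama_index's workflow module.
from llama_index.core.agent.workflow import FunctionAgent, ToolCallResult
from llama_index.core.workflow import Context

async def handle_user_message(message: str, agent: FunctionAgent, ctx: Context) -> str:
    # The shared Context carries chat history and memory across turns
    handler = agent.run(message, ctx=ctx)
    async for event in handler.stream_events():
        if isinstance(event, ToolCallResult):
            # Surface each tool call as it streams by
            print(f"Tool {event.tool_name} -> {event.tool_output}")
    response = await handler
    return str(response)
```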
Launch the MCP client, load its tools, and wrap them as native tools for
function-calling agents in LlamaIndex. Then, pass these tools to the agents and
add the context manager.
Finally, we start interacting with our agent and get access to the tools from our
SQLite MCP server.
Project 2: An ML FAQ agent with web-search fallback
Tech stack: a vector DB of ML FAQs, Bright Data's SERP API, and Cursor.
Workflow:
First, we define an MCP server with the host URL and port.
Below, we have an MCP tool to query a vector DB. It stores ML-related FAQs.
If the query is unrelated to ML, we resort to web search using Bright Data's SERP API, which scrapes data at scale across several sources to gather relevant context.
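A hedged sketch of these two tools, assuming FastMCP; the vector-DB lookup and Bright Data call are reduced to illustrative stubs:

```python
# A hedged sketch of the two tools; FastMCP is assumed, and the vector-DB
# lookup and Bright Data SERP call are reduced to illustrative stubs.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("ml-faq", host="127.0.0.1", port=8000)

def _query_faq_index(query: str) -> str:
    # Placeholder: query your vector DB of ML FAQs here
    return "relevant FAQ context"

def _bright_data_serp(query: str) -> str:
    # Placeholder: call Bright Data's SERP API and aggregate the results
    return "scraped web context"

@mcp.tool()
def machine_learning_faq(query: str) -> str:
    """Answer ML-related questions from the FAQ vector DB."""
    return _query_faq_index(query)

@mcp.tool()
def web_search(query: str) -> str:
    """Fall back to web search (Bright Data SERP) for non-ML queries."""
    return _bright_data_serp(query)

if __name__ == "__main__":
    mcp.run(transport="sse")
```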
Go to Settings → MCP → Add new global MCP server. In the JSON file, add what's shown below:
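The exact JSON appeared in the original figure; a hedged example for a server exposed over SSE at the host and port defined above (the server name and URL are illustrative):

```json
{
  "mcpServers": {
    "ml-faq": {
      "url": "http://127.0.0.1:8000/sse"
    }
  }
}
```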
Done!
Your local MCP server is live and connected to Cursor. It has two MCP tools:
Project 3: A financial analyst MCP server (CrewAI)
Tech stack: CrewAI, Pydantic, Pandas, Matplotlib, Yahoo Finance, and Cursor.
Workflow:
This agent accepts a natural language query and extracts structured output using
Pydantic. This guarantees clean and structured inputs for further processing!
This agent writes Python code to visualize stock data using Pandas, Matplotlib,
and Yahoo Finance libraries.
This agent reviews and executes the generated Python code for stock data
visualization.
It uses the code interpreter tool by CrewAI to execute the code in a secure
sandbox environment.
We set up and kick off our financial analysis crew to get the result shown below!
Now, we encapsulate our financial analyst within an MCP tool and add two more
tools to enhance the user experience.
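A hedged sketch of that wrapper, assuming FastMCP; run_financial_analysis() stands in for the crew kickoff shown earlier:

```python
# A hedged sketch of the wrapper; FastMCP is assumed, and
# run_financial_analysis() stands in for the crew kickoff shown earlier.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("financial-analyst")

def run_financial_analysis(query: str) -> str:
    # Placeholder: build the crew and return crew.kickoff(inputs={"query": query})
    ...

@mcp.tool()
def analyze_stock(query: str) -> str:
    """Run the financial-analysis crew on a natural-language stock query."""
    return run_financial_analysis(query)

if __name__ == "__main__":
    mcp.run(transport="stdio")
```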
Go to: File → Preferences → Cursor Settings → MCP → Add new global MCP server. In the JSON file, add what's shown below:
Done! Our financial analyst MCP server is live and connected to Cursor.
Project 4: A voice agent with web search and database tools
Tech stack: LiveKit, Firecrawl, and Supabase.
Workflow:
We instantiate Firecrawl to enable web searches and start our MCP server to
expose Supabase tools to our Agent.
We fetch live web search results using Firecrawl's search endpoint. This gives our agent up-to-date online information.
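A hedged sketch of that helper, assuming the firecrawl-py client (the parameters are illustrative):

```python
# A hedged sketch of the search helper; the firecrawl-py client is assumed
# and the parameters are illustrative.
from firecrawl import FirecrawlApp

app = FirecrawlApp(api_key="YOUR_FIRECRAWL_API_KEY")

def search_web(query: str) -> str:
    """Fetch live web results so the agent has up-to-date information."""
    results = app.search(query, limit=5)
    return str(results)
```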
We list our Supabase tools via the MCP server and wrap each of them as LiveKit
tools for our Agent.
We set up our Agent with instructions on how to handle user queries. We also
give it access to the Firecrawl web search and Supabase tools defined earlier.
We connect to LiveKit and start our session with a greeting. Then we continuously listen and respond until the user stops.
Done!
Project 5: A federated query engine (MindsDB)
Tech stack: MindsDB, Docker, and Cursor, with Slack, Gmail, GitHub, and Hacker News as data sources.
Workflow:
Install MindsDB locally using the Docker image by running the command in your
terminal.
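A hedged example of the command; mindsdb/mindsdb is the official image, but check MindsDB's docs for the current port mapping (47334 is the default HTTP port, and 47337 has been used for the MCP API):

```
docker run --name mindsdb -p 47334:47334 -p 47337:47337 mindsdb/mindsdb
```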
Through this interface, you can connect to over 200 data sources and run SQL
queries against them.
Let's start building our federated query engine by connecting our data sources to
MindsDB.
We use Slack, Gmail, GitHub and Hacker News as our federated data sources.
After building the federated query engine, let's unify our data sources by
connecting them to MindsDB's MCP server.
Go to: File → Preferences → Cursor Settings → MCP → Add new global MCP server. In the JSON file, add the following:
Apart from Claude and Cursor, the MindsDB MCP server also works with the new OpenAI MCP integration.
Project 6: Shared memory for Claude and Cursor (Graphiti)
Tech stack: Graphiti, Neo4j, Docker Compose, Cursor, and Claude Desktop.
Workflow:
Deploy the Graphiti MCP server using Docker Compose. This setup starts the
MCP server with Server-Sent Events (SSE) transport.
The Docker setup above includes a Neo4j container, which launches the database
as a local instance.
This configuration lets you query and visualize the knowledge graph using the
Neo4j browser preview.
With the tools and our server ready, let's integrate it with our Cursor IDE!
Go to: File → Preferences → Cursor Settings → MCP → Add new global MCP server. In the JSON file, add what's shown below:
Done!
Our Graphiti MCP server is live and connected to Cursor & Claude!
Now you can chat with Claude Desktop, share facts and info, store the responses in memory, and retrieve them from Cursor, and vice versa.
This way, you can pipe Claude’s insights straight into Cursor, all via a single
MCP.
Project 7: A document knowledge-base MCP server
Tech stack: FastMCP and Cursor.
Workflow:
First, we set up a local MCP server using FastMCP and give it a name.
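A minimal sketch, assuming FastMCP from the official MCP Python SDK (the server name is illustrative):

```python
# A minimal sketch, assuming FastMCP; the server name is illustrative.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("knowledge-base")
```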
This tool ingests new documents into the knowledge base. The user just needs to provide a path to the document to be ingested:
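A hedged sketch of that tool, continuing from the server created above; rag_index is a placeholder for whatever indexing pipeline the knowledge base uses:

```python
# A hedged sketch, continuing from the server created above; rag_index is
# a placeholder for whatever indexing pipeline the knowledge base uses.
@mcp.tool()
def ingest_document(path: str) -> str:
    """Add the document at the given path to the knowledge base."""
    rag_index.add(path)  # placeholder indexing call
    return f"Ingested {path}"
```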
Inside your Cursor IDE, go to: Cursor → Settings → Cursor Settings → MCP. Then add and start your server like this:
Project 8: A synthetic data generator (SDV)
Tech stack: the SDV SDK and Cursor.
Workflow:
Our MCP server exposes three tools:
● SDV Generate
● SDV Evaluate
● SDV Visualise
We have kept the actual implementation of these tools, built with the SDV SDK, in a separate file, tools.py, which is imported here.
This tool creates synthetic data from real data using the SDV Synthesizer.
This tool evaluates the quality of synthetic data in comparison to real data.
We will assess statistical similarity to determine which real data patterns are
captured by the synthetic data.
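A hedged sketch of the generate and evaluate steps, assuming SDV's single-table API (the synthesizer choice is illustrative):

```python
# A hedged sketch of the generate and evaluate tools using SDV's
# single-table API; the synthesizer choice is illustrative.
import pandas as pd
from sdv.metadata import SingleTableMetadata
from sdv.single_table import GaussianCopulaSynthesizer
from sdv.evaluation.single_table import evaluate_quality

def sdv_generate(real: pd.DataFrame, num_rows: int) -> pd.DataFrame:
    """Fit a synthesizer on real data and sample synthetic rows."""
    metadata = SingleTableMetadata()
    metadata.detect_from_dataframe(real)
    synthesizer = GaussianCopulaSynthesizer(metadata)
    synthesizer.fit(real)
    return synthesizer.sample(num_rows=num_rows)

def sdv_evaluate(real: pd.DataFrame, synthetic: pd.DataFrame) -> float:
    """Score statistical similarity between real and synthetic data."""
    metadata = SingleTableMetadata()
    metadata.detect_from_dataframe(real)
    report = evaluate_quality(real, synthetic, metadata)
    return report.get_score()
```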
This tool generates a visualization to compare real and synthetic data for a
specific column.
Use this function to visualize a real column alongside its corresponding synthetic
column.
With the tools and server ready, let's integrate it with our Cursor IDE! Go to: File → Preferences → Cursor Settings → MCP → Add new global MCP server. In the JSON file, add what's shown below:
Done! Your synthetic data generator MCP server is live and connected to Cursor.
Project 9: A deep researcher agent (Linkup + CrewAI)
Tech stack: Linkup, CrewAI, and Cursor.
Workflow:
We'll use the Linkup platform's search capabilities, which rival Perplexity and OpenAI, to power our web search agent. This is done by defining a custom tool that our agent can use, sketched below.
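A hedged sketch of that custom tool, assuming the linkup-sdk client and CrewAI's @tool decorator (the parameter values are illustrative):

```python
# A hedged sketch of the custom Linkup tool; the linkup-sdk client and
# crewai's @tool decorator are assumed, and parameters are illustrative.
from crewai.tools import tool
from linkup import LinkupClient

client = LinkupClient(api_key="YOUR_LINKUP_API_KEY")

@tool("Linkup web search")
def linkup_search(query: str) -> str:
    """Search the web via Linkup and return raw results."""
    results = client.search(
        query=query,
        depth="standard",          # or "deep" for more thorough search
        output_type="searchResults",
    )
    return str(results)
```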
The web search agent gathers up-to-date information from the internet based on the user's query. It uses the Linkup tool we defined earlier.
This agent transforms raw web search results into structured insights, with
source URLs. It can also delegate tasks back to the web search agent for
verification and fact-checking.
A third agent takes the analyzed and verified results from the analyst agent and drafts a coherent, cited response for the end user.
Finally, once we have all the agents and tools defined, we set up and kick off our deep researcher crew.
Now, we'll encapsulate our deep research team within an MCP tool. With just a
few lines of code, our MCP server will be ready.
Go to: File → Preferences → Cursor Settings → MCP → Add new global MCP server.
Done! Your deep research MCP server is live and connected to Cursor.
Project 10: RAG over video
Tech stack:
Workflow:
During ingestion, we specify the audio-video mode to load both the audio and video channels.
We retrieve the relevant chunks from the video based on the user query.
Each chunk has a start time, an end time, and a few more details that correspond
to the video segment.
To integrate the MCP server with Cursor, go to Settings → MCP → Add new
global MCP server.
Done!
Project 11: An audio analysis toolkit (AssemblyAI)
Tech stack: AssemblyAI, Claude Desktop, and Streamlit.
Workflow:
This tool accepts an audio input from the user and transcribes it using
AssemblyAI. We also store the full transcript to use in the next tool.
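A hedged sketch of the transcription step, assuming the assemblyai SDK; the insight flags shown are an illustrative subset:

```python
# A hedged sketch of the transcription tool; the assemblyai SDK calls are
# real, but the insight flags shown are an illustrative subset.
import assemblyai as aai

aai.settings.api_key = "YOUR_ASSEMBLYAI_API_KEY"

FULL_TRANSCRIPT = {}  # cache the transcript for the follow-up tool

def transcribe_audio(path: str) -> str:
    """Transcribe an audio file and cache the result for later insights."""
    config = aai.TranscriptionConfig(
        speaker_labels=True,       # who said what
        sentiment_analysis=True,   # per-sentence sentiment
        iab_categories=True,       # topic detection
    )
    transcript = aai.Transcriber().transcribe(path, config)
    FULL_TRANSCRIPT["last"] = transcript
    return transcript.text
```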
Next, we have a tool that returns specific insights from the transcript, like
speaker labels, sentiment, topics, and summary.
Now, we’ll set up an MCP server to use the tools we created above.
Go to File → Settings → Developer → Edit Config and add the following code.
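A hedged example of the claude_desktop_config.json entry; the server name and launch command are illustrative:

```json
{
  "mcpServers": {
    "audio-analysis": {
      "command": "python",
      "args": ["/absolute/path/to/server.py"]
    }
  }
}
```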
Once the server is configured, Claude Desktop will show the two tools we built
above in the tools menu:
● transcribe_audio
● get_audio_data
For accessibility, we have created a Streamlit UI for the audio analysis app.
You can upload the audio, extract insights, and chat with it using AssemblyAI’s
LeMUR. Find the code below.