Before writing a single line of code, it helps to understand what MCP actually is at the protocol level. This chapter covers the three roles in an MCP system, the three primitive types servers can expose, and how messages flow between components.
Every MCP interaction involves three components:
The Host is the application the user interacts with. Claude Desktop is a host. So are Claude Code, Cursor, and Windsurf. The host is responsible for:

- Managing the user interface and the conversation
- Deciding which MCP servers are available
- Presenting server capabilities to the AI model and rendering results for the user

The host owns the overall experience.
The Client lives inside the Host. It is the component that speaks the MCP protocol on the host’s behalf. Each host maintains one client per server connection.
The client:

- Opens and maintains the connection to a single server
- Negotiates protocol versions and capabilities during initialization
- Sends requests (such as `tools/call`) and routes responses back to the host
In practice, when you configure Claude Desktop with an MCP server, Claude Desktop spawns a client connection to that server automatically. You rarely interact with the client directly.
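For reference, Claude Desktop reads its server definitions from a `claude_desktop_config.json` file. The server name, command, and path below are illustrative, not prescribed:

```json
{
  "mcpServers": {
    "weather": {
      "command": "python",
      "args": ["/path/to/server.py"]
    }
  }
}
```

With an entry like this, the host spawns the process and connects a client to it when the application starts.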
The Server is what you build. It is a process that:

- Declares the tools, resources, and prompts it offers
- Handles requests from the client (`tools/list`, `tools/call`, and so on)
- Returns results in the format the protocol defines
Your server can be a local Python script, a remote HTTP service, or anything in between. It is isolated from the AI model — it communicates only with the client, never directly with the LLM.
Here is what happens when an AI assistant calls a tool on your server:
```
User ──► Host (e.g. Claude Desktop)
              │
              ▼
           Client ──► Server (your MCP server)
              │                 │
              │     request     │
              │ ──────────────► │
              │                 │
              │     response    │
              │ ◄────────────── │
              ▼
          AI Model ◄── Host presents result
```
In sequence:

1. The user sends a message to the host.
2. The host passes the conversation to the AI model, which decides to call a tool.
3. The client sends a `tools/call` request to your server.
4. Your server runs the tool and returns a response.
5. The host presents the result to the AI model, which uses it to answer the user.

MCP messages are JSON-RPC 2.0. Every message is a JSON object with:
- `jsonrpc`: always `"2.0"`
- `id`: a unique request identifier (pairs requests with responses)
- `method`: the operation name (e.g. `tools/list`, `tools/call`)
- `params`: method-specific parameters

A tool call looks like this:
```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "get_weather",
    "arguments": { "city": "Paris" }
  }
}
```
And the server responds:
```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "content": [
      { "type": "text", "text": "Paris: 18°C, partly cloudy" }
    ]
  }
}
```
You typically never write these JSON messages by hand — the Python SDK handles serialization for you.
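To make that concrete, here is a toy dispatcher showing roughly what happens when a `tools/call` message arrives. This is a sketch, not the SDK's actual code — the tool table and handler are invented for illustration:

```python
import json

# Illustrative tool registry: name -> callable taking an arguments dict.
TOOLS = {
    "get_weather": lambda args: f"{args['city']}: 18°C, partly cloudy",
}

def handle_message(raw: str) -> str:
    """Dispatch one JSON-RPC request and return the JSON-RPC response."""
    req = json.loads(raw)
    assert req["jsonrpc"] == "2.0"
    if req["method"] == "tools/call":
        name = req["params"]["name"]
        text = TOOLS[name](req["params"]["arguments"])
        result = {"content": [{"type": "text", "text": text}]}
        return json.dumps({"jsonrpc": "2.0", "id": req["id"], "result": result})
    # Unknown method -> standard JSON-RPC "method not found" error
    return json.dumps({"jsonrpc": "2.0", "id": req["id"],
                       "error": {"code": -32601, "message": "Method not found"}})

request = json.dumps({
    "jsonrpc": "2.0", "id": 1, "method": "tools/call",
    "params": {"name": "get_weather", "arguments": {"city": "Paris"}},
})
print(handle_message(request))
```

The SDK layers protocol negotiation, schema validation, and transport framing on top of this basic request/response loop.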
MCP defines three types of capabilities a server can expose:
Tools are functions the AI can call. They take structured input and return structured output. Examples:
- `search_database(query: str) → list[dict]`
- `send_email(to: str, subject: str, body: str) → str`
- `run_python(code: str) → str`

Tools are the most commonly used primitive. They are how the AI takes action in the world.
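A tool's signature is advertised to clients as a JSON Schema via `tools/list`. The sketch below shows the idea of deriving that schema from a Python signature — `tool_schema` and the type mapping are my own simplification, not the SDK's API:

```python
import inspect

# Map Python annotations to JSON Schema type names (simplified).
PY_TO_JSON = {str: "string", int: "integer", float: "number", bool: "boolean"}

def tool_schema(fn):
    """Derive a tools/list-style description from a function signature."""
    sig = inspect.signature(fn)
    props = {name: {"type": PY_TO_JSON[p.annotation]}
             for name, p in sig.parameters.items()}
    return {
        "name": fn.__name__,
        "description": (fn.__doc__ or "").strip(),
        "inputSchema": {"type": "object", "properties": props,
                        "required": list(props)},
    }

def get_weather(city: str) -> str:
    """Return current weather for a city."""
    return f"{city}: 18°C, partly cloudy"

print(tool_schema(get_weather))
```

The real SDK does this for you when you decorate a function as a tool, which is why good type hints and docstrings matter: they are what the AI model sees.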
Resources are data the AI can read. They are identified by URIs and can be static or dynamic. Examples:
- `file:///home/user/notes.txt` — a file's contents
- `db://customers/42` — a database record
- `config://app/settings` — application configuration

Resources are analogous to GET endpoints in a REST API. They return data without side effects.
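Conceptually, a resource handler is a read-only lookup from URI to content. A minimal sketch, with the URI scheme and data invented for illustration:

```python
# Illustrative resource table: URI -> content. Real servers register
# resources through the SDK and may compute content dynamically.
RESOURCES = {
    "config://app/settings": '{"theme": "dark", "units": "metric"}',
}

def read_resource(uri: str) -> str:
    """Return a resource's contents by URI. Read-only: no side effects."""
    if uri not in RESOURCES:
        raise KeyError(f"unknown resource: {uri}")
    return RESOURCES[uri]

print(read_resource("config://app/settings"))
```

The key contract is the REST-like one from above: reading a resource must not change anything.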
Prompts are reusable message templates. They accept arguments and return a sequence of messages the AI can use as context or instructions. Examples (names are illustrative):

- `code_review(diff: str)` — frames a diff with review instructions
- `summarize(text: str, length: str)` — asks for a summary at a given length
Prompts are optional but useful for standardizing common workflows.
When a client connects to a server, they perform a handshake:
1. The client sends `initialize` with its protocol version and capabilities
2. The server replies with its own protocol version and capabilities
3. The client sends an `initialized` notification to confirm

After initialization, the client can call `tools/list`, `resources/list`, and `prompts/list` to discover exactly what the server offers. The AI model uses this information to decide which tools are available in a given conversation.
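The handshake messages themselves look roughly like this. Field values are illustrative — check the MCP specification for the current protocol version string:

```python
# First message: the client introduces itself and proposes a version.
initialize_request = {
    "jsonrpc": "2.0", "id": 0, "method": "initialize",
    "params": {
        "protocolVersion": "2025-03-26",  # example version string
        "capabilities": {},
        "clientInfo": {"name": "example-client", "version": "1.0"},
    },
}

# Final message: a notification, so it carries no "id" and gets no reply.
initialized_notification = {
    "jsonrpc": "2.0",
    "method": "notifications/initialized",
}

print(initialize_request["method"], initialized_notification["method"])
```

Note the asymmetry: `initialize` is a request (it has an `id` and expects a response), while `initialized` is a fire-and-forget notification.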
A typical server lifecycle:
1. The host starts (or connects to) the server process
2. Client and server complete the initialization handshake
3. The client discovers and invokes the server's capabilities
4. The connection closes (the client may send a shutdown notification)

For stdio servers, startup and shutdown are tied to the process lifecycle. For HTTP servers, the connection is managed separately from the process.
| ← Introduction | Table of Contents | Chapter 2: Your First MCP Server → |