


Chapter 6: Transports — stdio and HTTP/SSE


A transport is the mechanism by which a client and server exchange MCP messages. The protocol is transport-agnostic — the same JSON-RPC messages flow over any transport. Your choice of transport affects how the server is deployed and how clients connect to it.

MCP defines two standard transports: stdio and HTTP with Server-Sent Events (SSE).


stdio Transport

How It Works

With stdio transport, the host process spawns your server as a child process. Messages flow over stdin/stdout:

  • Client → Server: JSON-RPC messages written to the server’s stdin
  • Server → Client: JSON-RPC messages written to the server’s stdout
  • Server logs: written to stderr (never mixed with the protocol stream)

The server lives and dies with the connection. When the host terminates the process, the session ends.
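The flow above can be sketched with a toy stand-in for the server: the "server" here is an inline script, not a real MCP server, and the newline-delimited JSON-RPC framing shown is a simplification of the actual handshake.

```python
import json
import subprocess
import sys

# Toy stand-in "server": reads one JSON-RPC request from stdin,
# writes a JSON-RPC response to stdout, and logs to stderr.
SERVER = r"""
import json, sys
req = json.loads(sys.stdin.readline())
print("handling request", file=sys.stderr)   # logs go to stderr
resp = {"jsonrpc": "2.0", "id": req["id"], "result": {"ok": True}}
sys.stdout.write(json.dumps(resp) + "\n")    # protocol goes to stdout
sys.stdout.flush()
"""

# The host spawns the server as a child process with piped stdin/stdout.
proc = subprocess.Popen(
    [sys.executable, "-c", SERVER],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True,
)
request = {"jsonrpc": "2.0", "id": 1, "method": "ping"}
proc.stdin.write(json.dumps(request) + "\n")   # client -> server: stdin
proc.stdin.flush()
response = json.loads(proc.stdout.readline())  # server -> client: stdout
proc.wait()
print(response["result"])                      # → {'ok': True}
```

Note that stderr is left attached to the parent, so the server's log line appears on the host's stderr without ever touching the protocol stream.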

When to Use stdio

Use stdio when:

  • The server runs locally on the same machine as the client
  • You want simple deployment (no network, no ports, no firewall rules)
  • The server is only used by one client at a time

stdio is the default for all local MCP servers. It is the transport you should start with.

Running With stdio

mcp.run() defaults to stdio:

if __name__ == "__main__":
    mcp.run()          # stdio transport, default
    # or equivalently:
    mcp.run(transport="stdio")

Important: Keep stdout Clean

With stdio transport, any output written to stdout (other than valid JSON-RPC) will corrupt the protocol stream and crash the connection. This means:

  • Use logging to stderr, not print statements
  • Never use print() in your tool code (use return instead)
  • Redirect any library output that goes to stdout

Configure logging to stderr:

import logging
import sys

logging.basicConfig(stream=sys.stderr, level=logging.INFO)
logger = logging.getLogger(__name__)
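For the third bullet — libraries that print to stdout — one approach is to wrap the offending call in `contextlib.redirect_stdout`. A minimal sketch (`noisy_setup` is a hypothetical stand-in for any print-happy third-party code):

```python
import contextlib
import sys

def noisy_setup():
    # Stands in for third-party code that prints to stdout.
    print("loading model...")

# Route the stray stdout output to stderr so it cannot corrupt
# the JSON-RPC stream on stdout.
with contextlib.redirect_stdout(sys.stderr):
    noisy_setup()
```

This only catches Python-level writes to `sys.stdout`; libraries that write to file descriptor 1 directly (e.g. via C extensions) need OS-level redirection instead.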

HTTP/SSE Transport

How It Works

With HTTP/SSE transport, your server runs as a persistent HTTP service. The client connects over the network:

  • Client → Server: HTTP POST requests (tool calls, resource reads, etc.)
  • Server → Client: Server-Sent Events stream (notifications, streaming responses)

The SSE stream is a long-lived HTTP connection, while each client request arrives as a separate POST. The server runs independently of any client and can serve multiple clients simultaneously.

When to Use HTTP/SSE

Use HTTP/SSE when:

  • The server needs to be shared across multiple users or machines
  • The server runs on a remote host (a VM, a container, a cloud service)
  • You need to serve multiple clients simultaneously
  • You want the server to keep state between client sessions (e.g., connection pools, caches)

Running With HTTP/SSE

if __name__ == "__main__":
    mcp.run(transport="sse")

By default this starts on http://localhost:8000/sse. Configure the host and port:

mcp.run(transport="sse", host="0.0.0.0", port=9000)

Full example: code/05_http_sse_server.py

Connecting Clients to an HTTP/SSE Server

In client configuration, instead of a command to run, you provide the server URL. See Chapters 7–9 for client-specific configuration details.
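As a rough sketch of the difference, a stdio entry names a command to spawn while an HTTP/SSE entry names a URL. The exact keys vary by client (see Chapters 7–9), and the server names and URL below are placeholders:

```json
{
  "mcpServers": {
    "local-tools": {
      "command": "python",
      "args": ["server.py"]
    },
    "shared-tools": {
      "url": "http://mcp.example.com/sse"
    }
  }
}
```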


Choosing a Transport

Factor              stdio               HTTP/SSE
Deployment          Local process       Network service
Multi-client        No (1:1)            Yes
Setup complexity    Minimal             Requires running server
Security            Process isolation   Network security needed
Latency             Minimal             Network overhead
Best for            Local tools, dev    Shared services, remote

Start with stdio. It is simpler to develop, test, and configure. Move to HTTP/SSE only when you need remote access or multi-client support.


Streamable HTTP (Newer Alternative)

The MCP specification also defines a newer “Streamable HTTP” transport that unifies request and response over a single HTTP endpoint (without requiring SSE). As of early 2026, the Python SDK supports this as transport="streamable-http".

Streamable HTTP is better suited for serverless environments (AWS Lambda, Cloudflare Workers) where long-lived SSE connections are impractical.

mcp.run(transport="streamable-http", host="0.0.0.0", port=8000)

Check the SDK documentation for the latest transport options.


Running Behind a Reverse Proxy

For production HTTP/SSE servers, run behind a reverse proxy like nginx or Caddy:

  • Terminate TLS at the proxy
  • Forward traffic to your server on localhost
  • Set appropriate headers (X-Forwarded-For, etc.)

Example nginx location block:

location /mcp/ {
    proxy_pass http://localhost:8000/;
    proxy_http_version 1.1;
    proxy_set_header Connection "";
    proxy_buffering off;
    proxy_cache off;
}

The proxy_buffering off setting is critical for SSE — buffering breaks the streaming connection.


Key Takeaways

  • stdio: local process, simple, default for most servers
  • HTTP/SSE: network service, supports multiple clients, needed for remote deployment
  • With stdio, never write to stdout except through the MCP protocol (use stderr for logs)
  • Streamable HTTP is a newer alternative suitable for serverless deployments
  • Start with stdio; switch to HTTP/SSE when you need network access or multi-client support


