Prompts are the third MCP primitive. They allow servers to expose reusable prompt templates — pre-built message sequences that users or host applications can invoke to set up specific workflows. This chapter explains what prompts are, how to define them, and when to use them.
Prompts are templates that produce structured message sequences. Unlike tools (which the AI calls autonomously) and resources (which provide data), prompts are invoked by the user or the host application to establish context before or during a conversation.
Think of prompts as “workflows with parameters.” When a user selects a prompt in Claude Desktop, the client calls your server to resolve the template with the provided arguments, then inserts the resulting messages into the conversation.
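Under the hood, resolving a prompt is a plain JSON-RPC exchange. The dicts below sketch the shapes of a `prompts/get` request and response; the values are illustrative, and note that prompt argument values travel as strings on the wire:

```python
# Illustrative prompts/get exchange (shapes per the MCP spec; values are examples).
request = {
    "jsonrpc": "2.0",
    "id": 7,
    "method": "prompts/get",
    "params": {
        "name": "summarize",
        # Argument values are strings on the wire, even for numeric parameters.
        "arguments": {"document_uri": "docs://handbook/hr", "max_words": "100"},
    },
}

# The server replies with resolved messages, ready to insert into the chat.
response = {
    "jsonrpc": "2.0",
    "id": 7,
    "result": {
        "description": "Summarize a document in a given number of words.",
        "messages": [
            {
                "role": "user",
                "content": {
                    "type": "text",
                    "text": "Please summarize the document at docs://handbook/hr "
                            "in no more than 100 words.",
                },
            }
        ],
    },
}
```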
Define a prompt with the `@mcp.prompt()` decorator:
```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("my-server")

@mcp.prompt()
def summarize(document_uri: str, max_words: int = 200) -> str:
    """Summarize a document in a given number of words."""
    return (
        f"Please summarize the document at {document_uri} "
        f"in no more than {max_words} words."
    )
```
When the prompt is invoked with document_uri="docs://handbook/hr" and max_words=100, the server returns a message containing the filled-in string.
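Because the prompt function is ordinary Python, you can sanity-check the rendered text directly. A standalone sketch with the decorator omitted:

```python
# Same template logic as the registered prompt, runnable on its own.
def summarize(document_uri: str, max_words: int = 200) -> str:
    """Summarize a document in a given number of words."""
    return (
        f"Please summarize the document at {document_uri} "
        f"in no more than {max_words} words."
    )

print(summarize("docs://handbook/hr", max_words=100))
# Please summarize the document at docs://handbook/hr in no more than 100 words.
```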
Full example: code/04_prompts_example.py
Prompts can return a full conversation setup — a system message plus one or more user messages:
```python
from mcp.server.fastmcp.prompts import base

@mcp.prompt()
def code_review(diff: str, style: str = "pep8") -> list[base.Message]:
    """Set up a code review session for the given diff."""
    return [
        base.UserMessage(
            f"Please review the following code diff using {style} "
            f"style guidelines:\n\n{diff}"
        )
    ]
```
Returning a list of Message objects gives you full control over the conversation structure handed to the AI.
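A common use of that control is seeding an assistant turn to prime the model's first reply. The hypothetical `debug_session` prompt below sketches the idea using plain dicts in the serialized message shape, so it runs without the SDK installed:

```python
# Hypothetical multi-turn prompt, shown as raw message dicts (the serialized
# shape the SDK produces from Message objects).
def debug_session(error: str) -> list[dict]:
    return [
        {"role": "user",
         "content": {"type": "text", "text": "I'm seeing this error:"}},
        {"role": "user",
         "content": {"type": "text", "text": error}},
        # A pre-seeded assistant turn steers the model's opening response.
        {"role": "assistant",
         "content": {"type": "text",
                     "text": "I'll help debug that. What have you tried so far?"}},
    ]

messages = debug_session("TypeError: 'NoneType' object is not iterable")
```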
Prompts can include embedded resource content. Use EmbeddedResource to inline a resource's contents directly in the returned messages. Each message carries a single content block, so pair a text instruction with a second message that embeds the resource. The `load_config` helper below is a placeholder for however your server reads the URI:

```python
from mcp.server.fastmcp.prompts import base
from mcp.types import EmbeddedResource, TextContent, TextResourceContents

@mcp.prompt()
def analyze_config(config_uri: str) -> list[base.Message]:
    """Analyze a configuration file for potential issues."""
    config_text = load_config(config_uri)  # placeholder: read the resource yourself
    return [
        base.UserMessage(
            TextContent(type="text", text="Please analyze this configuration for issues:")
        ),
        base.UserMessage(
            EmbeddedResource(
                type="resource",
                resource=TextResourceContents(uri=config_uri, text=config_text),
            )
        ),
    ]
```
This pattern is useful when you want to combine a fixed instruction with dynamically fetched content.
Prompts registered with @mcp.prompt() are automatically included in prompts/list responses. Give prompts clear names and descriptions:
```python
@mcp.prompt(
    name="git_commit_message",
    description="Generate a conventional commit message from a diff",
)
def git_commit(diff: str, type: str = "feat") -> str:
    """Generate a commit message for the given diff."""
    return f"Write a {type} commit message for this diff:\n\n{diff}"
```
The name and description appear in the client’s prompt picker UI.
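For reference, here is roughly what a client sees for this prompt in a `prompts/list` result. The shape follows the MCP spec; the values below are illustrative:

```python
# Illustrative prompts/list entry for the prompt above.
listing_entry = {
    "name": "git_commit_message",
    "description": "Generate a conventional commit message from a diff",
    "arguments": [
        {"name": "diff", "required": True},
        {"name": "type", "required": False},  # has a default, so optional
    ],
}
```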
Prompt arguments follow the same rules as tool inputs: use type hints to define the schema, and Pydantic models for complex inputs. Arguments with default values are optional.
```python
from pydantic import BaseModel

class TranslationRequest(BaseModel):
    text: str
    target_language: str
    preserve_formatting: bool = True

@mcp.prompt()
def translate(request: TranslationRequest) -> str:
    """Translate text to the target language."""
    fmt_note = " Preserve the original formatting." if request.preserve_formatting else ""
    return (
        f"Translate the following text to {request.target_language}.{fmt_note}"
        f"\n\n{request.text}"
    )
```
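Because the argument is a Pydantic model, validation and defaults apply when the client's arguments are parsed. A quick standalone check of the default behavior when an optional field is omitted (Pydantic v2 API):

```python
from pydantic import BaseModel

class TranslationRequest(BaseModel):
    text: str
    target_language: str
    preserve_formatting: bool = True

# Omitted fields fall back to their defaults; bad types raise a ValidationError.
req = TranslationRequest.model_validate(
    {"text": "Bonjour le monde", "target_language": "English"}
)
print(req.preserve_formatting)  # True
```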
Prompts are most useful when a workflow recurs with different parameters (code review, summarization, translation) and you want users to launch it in one step. They are not necessary when a single tool call or an ad-hoc user message does the job. For most servers, tools are the primary primitive; prompts are a useful addition when you want to expose structured workflows.
- Define prompts with `@mcp.prompt()`; arguments work the same as tool inputs
- Return `Message` objects for full conversation setup
- Use `EmbeddedResource` to inline resource content into a prompt

| ← Chapter 4: Exposing Resources | Table of Contents | Chapter 6: Transports → |