Model Context Protocol (MCP) is an open protocol introduced by Anthropic in late 2024 that standardizes how LLM applications connect to external tools, data sources, and services. Instead of every AI app building custom integrations for every tool, MCP defines a single client-server interface: AI apps (clients) connect to MCP servers that expose tools, resources, and prompts. MCP has been rapidly adopted across the AI ecosystem as the de facto integration standard.
The problem MCP solves
Before MCP, every AI application had to write custom code to connect to every external service — a search API here, a database connector there, a GitHub integration elsewhere. With N AI apps and M tools, you need N×M custom integrations. MCP collapses this to N+M: each AI app implements MCP once, each tool exposes itself via MCP once, and they all work together automatically.
| Before MCP | After MCP |
|---|---|
| Each AI app builds custom connectors for each service | Each AI app implements one MCP client |
| Tool vendors must integrate with each AI framework separately | Tool vendors publish one MCP server |
| N apps × M tools = N×M integrations to maintain | N + M implementations total |
| Breaking changes in one framework break all integrations | Standard protocol — clients and servers version independently |
| No portability — configs don't transfer between AI apps | MCP configs are portable across any MCP-compatible client |
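The arithmetic behind the table is easy to check. A quick sketch (the app and tool counts below are purely illustrative):

```python
# Illustrative comparison of integration counts.
# Point-to-point wiring needs one connector per (app, tool) pair,
# while MCP needs one client per app plus one server per tool.

def integrations_without_mcp(num_apps: int, num_tools: int) -> int:
    """Every app writes a custom connector for every tool: N x M."""
    return num_apps * num_tools

def integrations_with_mcp(num_apps: int, num_tools: int) -> int:
    """Each app implements one MCP client; each tool publishes one MCP server: N + M."""
    return num_apps + num_tools

if __name__ == "__main__":
    apps, tools = 10, 50
    print(integrations_without_mcp(apps, tools))  # 500 connectors to maintain
    print(integrations_with_mcp(apps, tools))     # 60 implementations total
```

The gap widens as the ecosystem grows, which is why a shared protocol pays off quickly.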
Rapid adoption
Within months of release, MCP was adopted by Cursor, Windsurf, Zed, VS Code Copilot, Continue, Replit, and hundreds of third-party tool servers covering GitHub, Slack, Notion, Postgres, Google Drive, browser automation, and more. It is now the closest thing to a universal AI integration standard.
MCP architecture: clients, servers, and primitives
MCP has a clean client-server architecture. The AI application hosts an MCP client that connects over stdio or HTTP (originally HTTP+SSE, since superseded by the streamable HTTP transport) to one or more MCP servers, each exposing three types of primitives:
| Primitive | Description | Example | Who controls it |
|---|---|---|---|
| Tools | Functions the LLM can invoke — like function calling | read_file(), search_github(), send_email() | Server exposes; model decides when to call |
| Resources | Data the LLM can read — files, DB records, live data | file://project/main.py, postgres://db/users | Server exposes; host/user decides what to expose |
| Prompts | Pre-built prompt templates with arguments | review_code(language="python"), explain_error() | Server exposes; user selects |
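Under the hood, MCP messages are JSON-RPC 2.0. As a rough sketch of how the three primitives map onto the wire (the method names follow the MCP specification, but the ids, tool names, and URIs below are made-up examples):

```python
import json

# Illustrative JSON-RPC 2.0 requests for the three MCP primitives.
# Method names come from the MCP specification; all parameter values
# here are invented for illustration.

call_tool = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",  # invoke a tool the server advertised via tools/list
    "params": {"name": "search_github", "arguments": {"query": "mcp servers"}},
}

read_resource = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "resources/read",  # fetch a resource advertised via resources/list
    "params": {"uri": "file://project/main.py"},
}

get_prompt = {
    "jsonrpc": "2.0",
    "id": 3,
    "method": "prompts/get",  # expand a template advertised via prompts/list
    "params": {"name": "review_code", "arguments": {"language": "python"}},
}

for msg in (call_tool, read_resource, get_prompt):
    print(json.dumps(msg))
```

Clients first call the matching `*/list` method to discover what a server offers, then issue requests like the ones above.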
Minimal MCP server in Python using the official SDK — expose any function as an MCP tool
```python
from mcp.server.fastmcp import FastMCP

# Create an MCP server
mcp = FastMCP("My First MCP Server")

# Expose a tool — any function decorated with @mcp.tool()
# is automatically registered with the correct JSON schema
@mcp.tool()
def calculate_compound_interest(
    principal: float,
    rate: float,
    years: int,
    compound_frequency: int = 12,
) -> dict:
    """
    Calculate compound interest.

    principal: Initial amount in dollars
    rate: Annual interest rate as decimal (0.05 = 5%)
    years: Number of years
    compound_frequency: Times compounded per year (12 = monthly)
    """
    amount = principal * (1 + rate / compound_frequency) ** (compound_frequency * years)
    return {
        "final_amount": round(amount, 2),
        "interest_earned": round(amount - principal, 2),
        "effective_annual_rate": round((1 + rate / compound_frequency) ** compound_frequency - 1, 4),
    }

# Run the server (connects via stdio by default — Claude Desktop, Cursor, etc. can use it)
if __name__ == "__main__":
    mcp.run()
```

Connecting to MCP servers: Claude Desktop config example
claude_desktop_config.json — add MCP servers here to give Claude Desktop access to tools. On macOS this file lives at ~/Library/Application Support/Claude/claude_desktop_config.json; on Windows, at %APPDATA%\Claude\claude_desktop_config.json.
```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/Users/you/projects"],
      "description": "Read and write files in your projects folder"
    },
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "ghp_your_token_here"
      }
    },
    "postgres": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-postgres", "postgresql://localhost/mydb"]
    },
    "my-custom-server": {
      "command": "python",
      "args": ["/path/to/my_mcp_server.py"]
    }
  }
}
```

Finding MCP servers
The official collection of Anthropic-maintained MCP servers lives at github.com/modelcontextprotocol/servers and includes reference servers for filesystem, GitHub, Slack, PostgreSQL, Google Drive, browser automation, and many more. The community registry at mcp.so lists hundreds of third-party servers.
Practice questions
- What problem does MCP solve that wasn't addressed by the standard OpenAI function calling / tool use API? (Answer: With standard tool use, each AI application must implement custom integrations for each external tool, so n apps × m tools means n×m integrations. MCP is a universal protocol in which tool servers expose a standard API (tools/list, tools/call, resources/list, resources/read). Any MCP-compatible AI client connects to any MCP server with zero custom integration: one integration per tool, working across all AI clients. Analogous to HTTP/REST making web APIs interoperable.)
- What is the difference between MCP Tools, MCP Resources, and MCP Prompts? (Answer: MCP Tools are functions the AI can call that perform actions or retrieve dynamic data (search_database, send_email, run_code); they are model-controlled: the AI decides when to call them. MCP Resources are static or semi-static content exposed to the AI as context (file contents, database schemas, documentation); they are read-only, and the host application or user decides what to attach. MCP Prompts are reusable prompt templates users can invoke from the client (like slash commands); the server provides the prompt structure and the AI fills it in. Each serves a different part of the context and action architecture.)
- What are the security considerations specific to MCP deployments? (Answer: (1) Tool scope: each MCP server should have minimum necessary permissions — a search tool should not have filesystem write access. (2) Authentication: MCP servers for external services require OAuth or API key management — credentials must not be in AI context. (3) Injection via resources: an MCP resource server could return content with embedded prompt injection instructions. (4) Server trust: users connecting to third-party MCP servers grant those servers access to their AI conversations. Anthropic recommends only using trusted/audited MCP servers.)
- How does MCP differ from LangChain or LlamaIndex tool integrations? (Answer: LangChain/LlamaIndex tool integrations are implemented as Python code directly in the application and are tightly coupled to the specific framework; updating them requires code changes and redeployment. MCP is a protocol-level standard allowing runtime connection to any compatible server without code changes: an MCP-compatible client (Claude Desktop, Cursor, or any other MCP-compatible app) can discover and use tools from any MCP server dynamically. Exposing existing LangChain tools over MCP requires wrapping them in an MCP server.)
- What is 'sampling' in MCP and how does it enable sophisticated agentic workflows? (Answer: MCP sampling: a server can request the AI client to run an LLM inference (a 'sampling request') as part of executing a tool. This enables recursive AI calls: an orchestrator AI uses a tool that itself calls another AI for subtask processing. Example: a document analysis MCP tool extracts text (tool action), then requests Claude to summarise each section (sampling), then compiles results. Sampling creates composable AI workflows where tools can incorporate LLM reasoning, not just deterministic code execution.)
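To make the sampling flow concrete, here is a rough sketch of a server-initiated sampling request. The method name `sampling/createMessage` comes from the MCP specification; the ids and message content below are invented for illustration:

```python
# Illustrative server -> client sampling request in MCP.
# The server asks the CLIENT to run an LLM completion on its behalf;
# all field values below are made-up examples.

sampling_request = {
    "jsonrpc": "2.0",
    "id": 7,
    "method": "sampling/createMessage",
    "params": {
        "messages": [
            {
                "role": "user",
                "content": {"type": "text", "text": "Summarise this section: ..."},
            }
        ],
        "maxTokens": 500,
    },
}

# The client (typically with user approval) runs the inference and returns
# the model's reply, which the server can then use inside its tool logic.
print(sampling_request["method"])
```

Note the direction reversal: for tools the client calls the server, while for sampling the server calls back into the client, which is what lets tools embed LLM reasoning.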
On LumiChats
LumiChats supports MCP connections, allowing you to connect your own tools and data sources directly to your AI conversations. Configure MCP servers to give LumiChats access to your files, databases, or APIs.