
MCP Servers Actually Changed Things

Six months in, the Model Context Protocol is what every coding agent uses. Here's what's working, what isn't, and how to write your own.

By LLMDex Editorial

The Model Context Protocol (MCP) launched in late 2024 as Anthropic's proposal for a standard way to connect LLMs to external tools and data. By mid-2026 it's the de facto standard across coding agents (Cursor, Cline, Claude Code), Claude Desktop, and a growing fraction of the broader agent ecosystem. This article is a working assessment, six months after that broad adoption took hold.

What MCP solves

Before MCP, every coding agent rolled its own way of giving the model access to filesystem, terminal, search, and external data. Cursor had its file tools, Cline had its own, Aider used yet another shape. If you wanted to build a custom integration ("let the agent query our internal API"), you wrote it once for each agent. Total integration cost scaled linearly with the number of agent-tool combinations.

MCP standardizes the wire protocol. An MCP server exposes tools and resources via a defined JSON-RPC schema. Any MCP-compatible client (which is most major coding agents now) can connect. Write the integration once; every agent gets it.
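
Concretely, the wire format is ordinary JSON-RPC 2.0. A sketch of the `tools/list` round trip is below; the tool name and fields are illustrative, not from any particular server:

```typescript
// Sketch of the JSON-RPC 2.0 messages behind MCP's tools/list method.
// The request carries only the method; the response enumerates tools with
// JSON Schema input definitions the client hands to the model.
const request = {
  jsonrpc: "2.0" as const,
  id: 1,
  method: "tools/list",
};

const response = {
  jsonrpc: "2.0" as const,
  id: 1, // matches the request id
  result: {
    tools: [
      {
        name: "read_file",
        description: "Read a file from the workspace",
        inputSchema: {
          type: "object",
          properties: { path: { type: "string" } },
          required: ["path"],
        },
      },
    ],
  },
};

console.log(response.result.tools.map((t) => t.name));
```

Because every client and server agrees on this shape, a server written once is legible to all of them.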

This is a small idea with a big practical impact. The number of tool integrations available to a typical Cline or Cursor user has grown roughly 10x since MCP became standard.

What's actually working

Three categories of MCP servers that are genuinely useful in 2026:

1. Repository / codebase access

The flagship MCP server use case. mcp-server-filesystem, mcp-server-git, mcp-server-github give the agent structured access to your code. Combined with reasoning models, the result is agents that can find files, read context, propose edits, and commit, all through a standard API surface.

We use this for everything internal. Cline + a custom MCP server for our private docs is the most-used internal tool we've shipped.

2. External data sources

mcp-server-postgres, mcp-server-sqlite, and various API-wrapper MCP servers give agents read/write access to databases and external services. The agent can query, summarize, take actions, all gated by the MCP server's authorization layer.

For internal-tooling agents (data analysts, customer-support helpers, ops dashboards), this is the killer feature. Permissions live in the MCP server, not in the prompt.
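
A minimal sketch of what "permissions live in the MCP server" means in practice. The role map and table names here are hypothetical; a real server would load policy from its own config:

```typescript
// Sketch of server-side authorization for a hypothetical database MCP server.
// The check runs before any query executes, regardless of what the prompt says.
type Role = "analyst" | "support" | "admin";

const allowedTables: Record<Role, string[]> = {
  analyst: ["orders", "events"],
  support: ["tickets", "users_redacted"],
  admin: ["orders", "events", "tickets", "users"],
};

function authorizeQuery(role: Role, table: string): void {
  if (!allowedTables[role].includes(table)) {
    // Deny inside the server; the model cannot talk its way past this.
    throw new Error(`role '${role}' may not query table '${table}'`);
  }
}
```

The point is that the boundary is enforced in code the agent cannot rewrite, unlike an instruction in the system prompt.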

3. Browser and web

mcp-server-puppeteer and similar browser-automation servers give agents the ability to navigate the web. Less mature than the filesystem use cases but improving fast. Use cases: research agents that fetch URLs, testing agents that interact with rendered apps, scraping pipelines.

What isn't working

Three rough edges six months in:

1. Authentication is still messy

MCP itself doesn't define auth. Each server implements its own. For a public MCP server (e.g., a hosted GitHub integration), you end up with OAuth flows wrapped around the protocol. The authorization story is the most-cited friction in production deployments.

2. Discovery is uneven

There's no canonical registry of MCP servers. Cursor, Cline, and Claude Desktop each ship their own list. Finding "the right MCP server for X" still requires Googling. A central registry would help; the ecosystem hasn't built one yet.

3. Long-running operations

MCP's request-response model handles fast operations well. Long operations (multi-minute training runs, large file uploads) require ad-hoc patterns. The protocol could use a standard for streaming progress; v0.4 of the spec is rumored to include this.
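
The common workaround today is a pair of tools: one starts the job and returns an id, another polls for progress. A self-contained sketch with an in-memory job store (the store and progress simulation are hypothetical):

```typescript
// Sketch of the ad-hoc start/poll pattern for long-running operations.
type Job = { id: string; status: "running" | "done"; progress: number };

const jobs = new Map<string, Job>();
let nextId = 0;

// Tool 1: kick off the work and hand the agent an id to hold across turns.
function startJob(): string {
  const id = String(++nextId);
  jobs.set(id, { id, status: "running", progress: 0 });
  return id;
}

// Tool 2: report progress. A real server would check the underlying
// operation; here we simulate 50% progress per poll.
function pollJob(id: string): Job {
  const job = jobs.get(id);
  if (!job) throw new Error(`unknown job ${id}`);
  job.progress = Math.min(100, job.progress + 50);
  if (job.progress >= 100) job.status = "done";
  return job;
}
```

This works, but every server invents its own variant, which is exactly the gap a standard streaming-progress mechanism would close.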

How to write your own

If you want a working MCP server, the path is short. The reference SDK for TypeScript is @modelcontextprotocol/sdk, with a Python equivalent. A minimal server:

import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import {
  ListToolsRequestSchema,
  CallToolRequestSchema,
} from "@modelcontextprotocol/sdk/types.js";

const server = new Server(
  { name: "my-tool", version: "0.1.0" },
  { capabilities: { tools: {} } },
);

// Handlers are registered against request schemas, not raw method strings.
server.setRequestHandler(ListToolsRequestSchema, async () => ({
  tools: [
    {
      name: "echo",
      description: "Echo a message back",
      inputSchema: {
        type: "object",
        properties: { message: { type: "string" } },
        required: ["message"],
      },
    },
  ],
}));

server.setRequestHandler(CallToolRequestSchema, async (req) => ({
  content: [{ type: "text", text: String(req.params.arguments?.message ?? "") }],
}));

await server.connect(new StdioServerTransport());

Configure your client (Cline, Claude Desktop, etc.) to launch this server, and the echo tool becomes available to the agent. The plumbing is small; the value is the standard.
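
The wiring step varies by client. For Claude Desktop, it's an entry in claude_desktop_config.json; the launch path below is illustrative, and other clients use similar shapes:

```json
{
  "mcpServers": {
    "my-tool": {
      "command": "node",
      "args": ["/path/to/my-tool/dist/index.js"]
    }
  }
}
```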

Best practices we've learned

Three patterns that work in production MCP servers:

1. Narrow tool surfaces

A tool with one clear purpose ("get_user_by_id") is more reliable than a tool with many parameters ("query_users with filters"). Models call narrow tools more correctly.

2. Strict input schemas

Use JSON Schema with required fields, enums, and bounds. Modern frontier models follow strict schemas closely; sloppy schemas produce sloppy calls.
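
A sketch of what "strict" means for a hypothetical get_user_by_id tool: a pattern on the id, an enum on selectable fields, a bound on array length, and no extra properties allowed:

```typescript
// Sketch of a strict input schema. Every constraint here removes a way
// for the model to emit a malformed call. Field names are illustrative.
const strictSchema = {
  type: "object",
  properties: {
    id: { type: "string", pattern: "^usr_[a-z0-9]{8}$" },
    fields: {
      type: "array",
      items: { type: "string", enum: ["name", "email", "created_at"] },
      maxItems: 3,
    },
  },
  required: ["id"],
  additionalProperties: false,
};
```

A sloppy version of the same tool (a bare `{ type: "object" }` with documented-but-unenforced fields) invites the model to guess, and it will.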

3. Side-effect logging

Every tool call should log to your observability stack with the inputs, outputs, and the calling agent. MCP doesn't define this; implement it server-side. Production debugging without it is painful.
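
One way to do this is a wrapper around every tool handler. The in-memory `log` array below is a hypothetical stand-in for your observability sink:

```typescript
// Sketch of server-side logging around tool calls: record inputs, outcome,
// and duration for every call, on success and on failure alike.
type ToolHandler = (args: Record<string, unknown>) => Promise<unknown>;
type LogEntry = { tool: string; ok: boolean; ms: number; args: unknown; error?: string };

const log: LogEntry[] = [];

function withLogging(name: string, handler: ToolHandler): ToolHandler {
  return async (args) => {
    const started = Date.now();
    try {
      const result = await handler(args);
      log.push({ tool: name, ok: true, ms: Date.now() - started, args });
      return result;
    } catch (err) {
      log.push({ tool: name, ok: false, ms: Date.now() - started, args, error: String(err) });
      throw err; // surface the failure to the client unchanged
    }
  };
}
```

Register `withLogging("echo", echoHandler)` instead of the bare handler and every call leaves a trace, including the failed ones, which are the ones you'll be debugging.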

Where it goes from here

Three trends to watch:

1. Server marketplaces

A central registry / marketplace for MCP servers is overdue. The first organization that builds it and gets endorsement from major clients (Anthropic, Cursor, OpenAI) wins. There are several attempts in early stages.

2. Hosted MCP servers

Today most MCP servers are local processes you launch. Hosted MCP servers (e.g., a hosted GitHub MCP server you connect to via URL) are starting to appear. The auth story has to mature first.

3. Cross-vendor adoption

OpenAI hasn't adopted MCP wholesale (they have their own tool-use surface), but the gap is narrowing. By end-2026 it's plausible that GPT-5/-5.5 will accept MCP servers natively. If that happens, MCP becomes the universal standard.

What this means for buyers

Three implications:

  1. Pick coding agents that support MCP natively. Cursor, Cline, Claude Code, Aider all do. This is the growing standard; new tooling will assume it.
  2. For custom integrations, write an MCP server. Don't write agent-specific glue. The portability win is real.
  3. Watch the spec. MCP is on a 6-month version cycle. Major versions add capabilities; minor versions fix issues. Track them.

What this means for builders

If you're shipping AI tooling, MCP support is now table stakes. Customers expect their existing MCP servers to work in your product. Building proprietary tool-use surfaces is technical debt; build on the standard.

If you're shipping infrastructure (databases, APIs, internal tools) and you want LLM agents to interact with you, ship an MCP server alongside your normal API. The cost is small; the payoff is that every MCP-capable agent can integrate with you out of the box.

The deeper takeaway

MCP is a small, focused protocol that solved a real coordination problem. It's not the most ambitious agent infrastructure proposal, and it's not trying to be a full agent framework; that narrow scope is its strength. It does one thing (standardize tool exposure) and it does it well.

In 2026, MCP is one of the few examples in the AI tooling space of a standard that actually got adopted. That makes it worth paying attention to even beyond its immediate utility.
