The MCP Abstraction Tax: Why Every Protocol Layer Costs You Fidelity

James Phoenix

Every layer between an agent’s intent and an API loses expressiveness. MCP adds a layer. Understanding what that layer costs you matters more than picking a winner.

Source: Justin Poehnelt | Date: March 2026


Core Thesis

MCP is an abstraction over an abstraction. The REST API is already an imperfect projection of the underlying data model. MCP adds another lossy layer on top of that. For simple APIs, the tax is negligible. For complex enterprise APIs with polymorphic custom fields, deeply nested structures, and opaque relationship identifiers, the fidelity loss compounds in ways that break agent workflows.

The key insight: humans need simplified abstractions to manage cognitive load. LLMs don’t. An agent can navigate a complex CLI via --help and call precise APIs in seconds. When we design MCP servers the same way we design human-facing abstractions, we’re optimizing for the wrong consumer.


The Abstraction Stack

Between an agent’s intent and the actual data sits a layered structure:

Agent Intent: "Update the probability on the ACME corp deal"
     |
     v
+-----------+
| MCP Tool  | <-- abstraction tax
| Definition|
+-----+-----+
      |
+-----v-----+
| REST API  | <-- abstraction tax
|  (CRM)    |
+-----+-----+
      |
+-----v-----+
|   Data    | <-- the actual thing
| (Storage) |
+-----------+

Each layer loses something. The question at each layer is whether what you gain (discoverability, safety, standardization) is worth what you lose.


The Two-Path Problem

Building an MCP server for an enterprise API forces a choice between two bad options:

Path 1: Constrained Tools

Expose a handful of high-level operations: create_account, update_opportunity, add_contact. These are easy for models to call, fit in a context window, and look clean. But they can’t express complex operations.

“You can’t express ‘update the stage on 50 opportunities, recalculate their custom revenue formulas, and reassign the related tasks to the new account owner’ through update_opportunity.”

Path 2: Full Surface

Expose every API method as a tool with its full request schema. This preserves fidelity but explodes the context window. A full-featured enterprise CLI covers dozens of services with hundreds of commands. Loading all those tool definitions at once consumes a meaningful fraction of an agent’s reasoning capacity, and most are irrelevant to any given task.

Path               Fidelity  Context Cost  Practical?
Constrained tools  Low       Low           Yes, but limited
Full surface       High      Extreme       No

Neither path works well. Every team building an MCP server for a large API surface hits this same wall.


Why Enterprise APIs Are Hostile

This isn’t just a protocol problem. Enterprise CRM APIs were designed for human developers who read docs, understand the data model, and carefully construct requests. They have sharp edges:

  • Opaque relationship identifiers
  • Polymorphic custom fields
  • Deeply nested JSON structures
  • Missing capabilities for operations that feel basic
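A hypothetical payload illustrates how these sharp edges interact (field names and IDs are invented for illustration). Note how reading even one custom field forces a branch on its polymorphic type:

```python
# An illustrative CRM record: opaque IDs, polymorphic custom fields,
# and deep nesting. Nothing here is self-describing for an agent.
opportunity = {
    "id": "006Dn00000ArZkQ",        # opaque relationship identifier
    "accountId": "001Dn00000XqP3t",  # another opaque foreign key
    "customFields": [
        # Polymorphic: the shape of "value" depends on "type".
        {"key": "cf_revenue_formula", "type": "formula",
         "value": {"expression": "amount * probability"}},
        {"key": "cf_region", "type": "picklist", "value": "EMEA"},
    ],
    "lineItems": [  # deeply nested structure
        {"product": {"id": "01tDn3xyz",
                     "pricing": {"tiers": [{"minQty": 1, "unitPrice": 100.0}]}}},
    ],
}

def read_custom_field(record: dict, key: str):
    """Every consumer must reimplement this type dispatch."""
    for cf in record["customFields"]:
        if cf["key"] == key:
            if cf["type"] == "formula":
                return cf["value"]["expression"]
            return cf["value"]
    return None
```

A human developer learns this dispatch once from the docs; an agent rediscovers it, or gets it wrong, on every call.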

No amount of MCP abstraction fixes a fundamentally hostile API. The abstraction tax compounds: an unfriendly API wrapped in a lossy protocol is doubly frustrating for agents.

Skills: On-Demand Context Loading

The insight that drives a more sophisticated approach: you don’t need to load everything at once.

A CLI with 700+ commands doesn’t present all 700 in the system prompt. The agent starts with --help, discovers the service, runs crm schema opportunities.bulkUpdate, and gets exactly the schema it needs at runtime, on demand, paid for only when relevant.
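The discovery loop is just subprocess calls. A sketch, assuming a hypothetical `crm` CLI like the one described above:

```python
import subprocess

def discover(cli: str, *args: str) -> str:
    """Run a CLI discovery command and return its output.

    The agent pays the context cost of a schema only at the moment
    it needs it, not upfront in the system prompt.
    """
    result = subprocess.run([cli, *args], capture_output=True, text=True)
    return result.stdout

# Hypothetical usage (the `crm` CLI is assumed, not real):
# overview = discover("crm", "--help")                             # cheap
# schema   = discover("crm", "schema", "opportunities.bulkUpdate") # on demand
```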

Skills extend this further. Each SKILL.md file is a self-contained unit of agent knowledge:

skills/
  crm-opportunities/SKILL.md           # Core opportunity operations
  crm-opportunities-advanced/SKILL.md  # Custom fields, bulk updates

The agent loads crm-opportunities when it needs to manage opportunities. It loads crm-opportunities-advanced only when bulk updates are required. Context cost scales with the task, not with the API surface.
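A minimal loader for this pattern might look as follows. This is a sketch of the lazy-loading idea, not any real Skills runtime; the directory layout matches the tree above:

```python
from pathlib import Path

class SkillLoader:
    """Load SKILL.md files on demand; context cost scales with the task."""

    def __init__(self, root: Path):
        self.root = root
        self.loaded: dict[str, str] = {}

    def available(self) -> list[str]:
        # Cheap: only skill names go in the prompt, not their contents.
        return sorted(p.parent.name for p in self.root.glob("*/SKILL.md"))

    def load(self, name: str) -> str:
        # Paid only when the agent actually needs this skill.
        if name not in self.loaded:
            self.loaded[name] = (self.root / name / "SKILL.md").read_text()
        return self.loaded[name]
```

The agent's system prompt carries only the output of `available()`; the full text of a skill enters context only via `load()`.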

MCP lacks this natively. Every tool definition is loaded upfront. Some clients support enabling and disabling tools, and some are exploring tool search, but these are client-side features, not protocol guarantees. If you’re building an MCP server, you can’t assume the client will be smart about context management.


Dynamic Discovery as a Workaround

A more sophisticated MCP approach: expose a meta-tool like discover_tools or enable_service that lets the agent dynamically expand its available tool set as the conversation evolves.

Agent: "I need to work with CRM opportunities"
  -> calls discover_tools(service: "opportunities")
  -> server registers opportunity tools
  -> agent now has opportunity capabilities
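Stripped to its essentials, the meta-tool pattern is a server whose tool registry grows at runtime. A sketch with invented service and tool names, not the MCP wire protocol:

```python
# Tool definitions grouped by service; none are loaded upfront.
SERVICES = {
    "opportunities": {
        "update_opportunity": {"description": "Update one opportunity"},
        "bulk_update_opportunities": {"description": "Update many at once"},
    },
    "contacts": {
        "add_contact": {"description": "Create a contact"},
    },
}

class DiscoveringServer:
    def __init__(self):
        # Only the meta-tool is visible at startup.
        self.tools = {"discover_tools": {"description": "Expand the tool set"}}

    def discover_tools(self, service: str) -> list[str]:
        """Register a service's tools on demand and report what was added."""
        registered = SERVICES.get(service, {})
        self.tools.update(registered)  # the agent now has these capabilities
        return sorted(registered)
```

The trade is visible in the data: one meta-tool upfront instead of every definition, at the cost of a discovery round-trip and server-side state.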

FastMCP 3.1 shipped a two-stage discovery pattern using Search and GetSchemas meta-tools for this exact problem. It trades one upfront context cost (all tools loaded at startup) for a small per-request cost (the discover call) plus targeted context (only the tools that matter).

This highlights a growing tension. As MCP clients get smarter about native tool search and selective loading, baking stateful discovery logic into the server may eventually conflict with the client. It’s a delicate balance between helping the agent now and fighting the client later.


The Fidelity Spectrum

Different approaches occupy different points on the fidelity-vs-accessibility curve:

Approach               Accessibility  Fidelity        Trade-off
MCP (constrained)      High           Low             Can only express what the tool author anticipated
MCP (full surface)     Low            High in theory  Context cost makes it impractical
CLI + Skills           Moderate       High            Requires a CLI designed for this pattern
Raw API + client libs  Low            Maximum         Lowest guardrails

These aren’t competing approaches. They’re different points on the same curve, optimizing for different constraints.


Practical Implications

The MCP layer is only as good as the API underneath. If the API is hostile to AI agents, no abstraction fixes that. Start by making the API agent-friendly.

MCP interfaces can evolve faster than APIs. They don’t carry the same stability guarantees. This is an advantage. Ship, learn what agents struggle with, iterate. The fidelity loss is tolerable if iteration speed is high.

Test with agents, not humans. Agent failure modes differ from human failure modes. Define a user journey, hand it to an agent with your MCP server, and watch where it breaks. The friction points are often surprising.

Context management is a shared responsibility. MCP clients are getting smarter with tool search, selective loading, and dynamic registration. Building an overly constrained server to solve a context problem the client already handles can leave users frustrated. But assuming the client will be smart about it is also risky.


Connection to Agent-Native Architecture

Poehnelt’s fidelity spectrum maps directly to the granularity principle: tools should be atomic primitives, and features should be outcomes achieved by agents operating in a loop. The constrained MCP path violates this by bundling decision logic into tools. The CLI + Skills approach preserves atomic granularity while managing context costs.

The tools and eyes framing also applies here. CLI + --help gives agents “eyes” to discover capabilities dynamically. Constrained MCP servers give agents “tools” but take away their ability to see what else is possible.


Key Takeaway

MCP and CLIs optimize for different things. MCP optimizes for discoverability and standardization. CLIs with skills optimize for fidelity and flexibility. The abstraction tax won’t disappear. Understanding where you’re paying it, and what you’re getting in return, is the difference between a tool that serves agents well and one that just looks like it does.

“The interesting question isn’t which one is ‘best’. It’s understanding what you lose at each point and whether that loss matters for your use case.” — Justin Poehnelt

Topics: Abstraction Layers, Agent Intent, API Fidelity, LLM Optimization, Protocol Design
