Summary
Static CLAUDE.md files limit context retrieval to pre-written documentation. Build a custom MCP server to provide queryable project knowledge—dependency graphs, pattern examples, test coverage, recent changes, and runtime metrics—that AI agents can request on demand. This enables dynamic, targeted context retrieval instead of loading all documentation upfront.
The Problem
CLAUDE.md files are static documents that must anticipate all possible context needs upfront. When an AI agent needs specific information (e.g., “What functions call this API?” or “Show me similar patterns”), static files can’t provide targeted answers—they contain everything or nothing. This forces you to either include massive context upfront (expensive, slow) or miss critical information (low quality outputs).
The Solution
Implement a custom MCP (Model Context Protocol) server that exposes project knowledge as queryable resources. AI agents can request specific context on-demand: architecture graphs, pattern examples from the codebase, test coverage metrics, git history, and runtime statistics. The MCP server dynamically generates responses based on current codebase state, providing always-up-to-date, targeted information only when needed.
The Problem: Static Documentation Limits
When working with AI coding agents, you typically provide context through static files:
- CLAUDE.md: Architecture patterns, coding standards, conventions
- README.md: Project setup, overview, usage examples
- schemas/: JSON schemas, database schemas
- docs/: API documentation, architectural decision records
But static files have fundamental limitations:
Limitation 1: Can’t Answer Specific Queries
AI Agent needs: “Show me all functions that call the authenticate() method”
Static CLAUDE.md: Contains general patterns but can’t dynamically traverse the dependency graph
Result: The AI agent must read the entire codebase or make incorrect assumptions
Limitation 2: Information Becomes Stale
CLAUDE.md written: January 2025
Current date: November 2025
Codebase changes: 2,500 commits, 150 new files, 3 major refactors
Problem: Static documentation doesn’t reflect current reality
Limitation 3: All or Nothing Context
You must choose:
Option A: Include Everything
Load CLAUDE.md (5K tokens) +
All schemas (3K tokens) +
All examples (4K tokens) +
All patterns (3K tokens) = 15K tokens
Cost: High
Speed: Slow
Relevance: Mixed (90% unused)
Option B: Include Minimal Context
Load only root CLAUDE.md (2K tokens)
Cost: Low
Speed: Fast
Relevance: Often insufficient
Neither is optimal. What you really want: targeted context on-demand.
Limitation 4: Can’t Provide Real-Time Data
Static files can’t show:
- Recent changes: “What was modified this week?”
- Test coverage: “Which modules have low coverage?”
- Performance metrics: “Which API endpoints are slowest?”
- Dependency graph: “What depends on this module?”
- Live examples: “Show me current usage patterns”
The Solution: MCP Server for Dynamic Context
MCP (Model Context Protocol) is an open protocol for connecting AI models to external data sources and tools. Instead of relying on static files, you build a custom MCP server that AI agents can query for specific project knowledge.
How MCP Works
┌─────────────────────────────────────────────────────────┐
│ AI Agent (Claude) │
│ │
│ Needs: "Show dependency graph for auth module" │
└─────────────────────┬───────────────────────────────────┘
│
│ MCP Protocol (query)
│
┌─────────────────────▼───────────────────────────────────┐
│ Custom MCP Server │
│ │
│ Resources: │
│ - architecture-graph://auth │
│ - pattern-examples://factory-functions │
│ - recent-changes://last-week │
│ - test-coverage://modules │
│ - performance-metrics://api-endpoints │
└─────────────────────┬───────────────────────────────────┘
│
│ Queries codebase
│
┌─────────────────────▼───────────────────────────────────┐
│ Project Codebase │
│ │
│ - Source files │
│ - Git history │
│ - Test results │
│ - Runtime metrics │
└─────────────────────────────────────────────────────────┘
Core Concept: Resources as Queries
Instead of loading all documentation, AI agents request specific resources:
// Static approach (old)
const context = await readFile('CLAUDE.md'); // Everything, always
// Dynamic approach (MCP)
const graph = await mcp.read('architecture-graph://auth');
const examples = await mcp.read('pattern-examples://factory-functions');
const changes = await mcp.read('recent-changes://last-week');
// Only what's needed, when needed
Implementation
Step 1: Define MCP Resources
Identify queryable knowledge in your project:
{
"resources": [
{
"uri": "architecture-graph://project",
"name": "Project Architecture Graph",
"description": "Returns dependency graph of all modules",
"mimeType": "application/json"
},
{
"uri": "architecture-graph://{module}",
"name": "Module Architecture Graph",
"description": "Returns dependency graph for specific module",
"mimeType": "application/json"
},
{
"uri": "pattern-examples://{pattern}",
"name": "Pattern Examples",
"description": "Returns real code examples of specified pattern",
"mimeType": "application/json"
},
{
"uri": "recent-changes://last-{period}",
"name": "Recent Changes",
"description": "Returns git history summary for specified period",
"mimeType": "application/json"
},
{
"uri": "test-coverage://{module}",
"name": "Test Coverage",
"description": "Returns test coverage metrics for module",
"mimeType": "application/json"
},
{
"uri": "performance-metrics://{endpoint}",
"name": "Performance Metrics",
"description": "Returns runtime performance statistics",
"mimeType": "application/json"
},
{
"uri": "code-search://{query}",
"name": "Code Search",
"description": "Searches codebase using AST-aware search",
"mimeType": "application/json"
}
]
}
Step 2: Build the MCP Server
Create a TypeScript MCP server:
import { Server } from '@modelcontextprotocol/sdk/server/index.js';
import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js';
import { ListResourcesRequestSchema, ReadResourceRequestSchema } from '@modelcontextprotocol/sdk/types.js';
import { analyzeArchitecture } from './analyzers/architecture.js';
import { findPatternExamples } from './analyzers/patterns.js';
import { getGitHistory } from './analyzers/git.js';
import { getCoverageMetrics } from './analyzers/coverage.js';
import { getPerformanceMetrics } from './analyzers/performance.js';
const server = new Server(
{
name: 'project-context-server',
version: '1.0.0',
},
{
capabilities: {
resources: {},
},
}
);
// List available resources
server.setRequestHandler(ListResourcesRequestSchema, async () => {
return {
resources: [
{
uri: 'architecture-graph://project',
name: 'Project Architecture Graph',
description: 'Complete dependency graph of all modules',
mimeType: 'application/json',
},
{
uri: 'pattern-examples://factory-functions',
name: 'Factory Function Examples',
description: 'Real examples from codebase',
mimeType: 'application/json',
},
{
uri: 'recent-changes://last-week',
name: 'Last Week Changes',
description: 'Git commits from last 7 days',
mimeType: 'application/json',
},
{
uri: 'test-coverage://all',
name: 'Test Coverage Report',
description: 'Coverage metrics for all modules',
mimeType: 'application/json',
},
{
uri: 'performance-metrics://api',
name: 'API Performance Metrics',
description: 'Response times and error rates',
mimeType: 'application/json',
},
],
};
});
// Read specific resource
server.setRequestHandler(ReadResourceRequestSchema, async (request) => {
const uri = request.params.uri;
// Architecture graph
if (uri.startsWith('architecture-graph://')) {
const target = uri.replace('architecture-graph://', '');
const graph = await analyzeArchitecture(target);
return {
contents: [
{
uri,
mimeType: 'application/json',
text: JSON.stringify(graph, null, 2),
},
],
};
}
// Pattern examples
if (uri.startsWith('pattern-examples://')) {
const pattern = uri.replace('pattern-examples://', '');
const examples = await findPatternExamples(pattern);
return {
contents: [
{
uri,
mimeType: 'application/json',
text: JSON.stringify(examples, null, 2),
},
],
};
}
// Recent changes
if (uri.startsWith('recent-changes://')) {
const period = uri.replace('recent-changes://', '');
const changes = await getGitHistory(period);
return {
contents: [
{
uri,
mimeType: 'application/json',
text: JSON.stringify(changes, null, 2),
},
],
};
}
// Test coverage
if (uri.startsWith('test-coverage://')) {
const module = uri.replace('test-coverage://', '');
const coverage = await getCoverageMetrics(module);
return {
contents: [
{
uri,
mimeType: 'application/json',
text: JSON.stringify(coverage, null, 2),
},
],
};
}
// Performance metrics
if (uri.startsWith('performance-metrics://')) {
const endpoint = uri.replace('performance-metrics://', '');
const metrics = await getPerformanceMetrics(endpoint);
return {
contents: [
{
uri,
mimeType: 'application/json',
text: JSON.stringify(metrics, null, 2),
},
],
};
}
throw new Error(`Unknown resource: ${uri}`);
});
// Start server
const transport = new StdioServerTransport();
await server.connect(transport);
Step 3: Implement Resource Analyzers
Architecture Graph Analyzer
// analyzers/architecture.ts
import { Project } from 'ts-morph';
import path from 'path';
interface ArchitectureGraph {
nodes: Node[];
edges: Edge[];
}
interface Node {
id: string;
name: string;
type: 'module' | 'class' | 'function';
file: string;
}
interface Edge {
from: string;
to: string;
type: 'imports' | 'calls' | 'extends';
}
export async function analyzeArchitecture(target: string): Promise<ArchitectureGraph> {
const project = new Project({
tsConfigFilePath: 'tsconfig.json',
});
const nodes: Node[] = [];
const edges: Edge[] = [];
// Analyze imports and dependencies
for (const sourceFile of project.getSourceFiles()) {
if (target !== 'project' && !sourceFile.getFilePath().includes(target)) {
continue;
}
const filePath = sourceFile.getFilePath();
const moduleName = path.basename(filePath, path.extname(filePath));
// Add module node
nodes.push({
id: moduleName,
name: moduleName,
type: 'module',
file: filePath,
});
// Find imports
for (const importDecl of sourceFile.getImportDeclarations()) {
const moduleSpecifier = importDecl.getModuleSpecifierValue();
// Only track internal imports
if (moduleSpecifier.startsWith('./') || moduleSpecifier.startsWith('../')) {
const targetModule = path.basename(moduleSpecifier, path.extname(moduleSpecifier));
edges.push({
from: moduleName,
to: targetModule,
type: 'imports',
});
}
}
// Add function nodes (call edges are derived separately; see the sketch after this analyzer)
sourceFile.getFunctions().forEach(func => {
const funcName = func.getName();
if (funcName) {
nodes.push({
id: `${moduleName}.${funcName}`,
name: funcName,
type: 'function',
file: filePath,
});
}
});
}
return { nodes, edges };
}
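The analyzer above emits module and function nodes plus import edges; it does not yet emit the 'calls' edges that Use Case 1 below filters on. Here is a minimal sketch of how call edges could be derived with ts-morph by walking call expressions inside each function; matching callees by expression text rather than resolving symbols is a simplifying assumption:
// analyzers/calls.ts (sketch, not part of the original analyzer)
import { Project, SyntaxKind } from 'ts-morph';
import path from 'path';

interface CallEdge {
  from: string;
  to: string;
  type: 'calls';
}

export function findCallEdges(project: Project): CallEdge[] {
  const edges: CallEdge[] = [];
  for (const sourceFile of project.getSourceFiles()) {
    const filePath = sourceFile.getFilePath();
    const moduleName = path.basename(filePath, path.extname(filePath));
    for (const func of sourceFile.getFunctions()) {
      const funcName = func.getName();
      if (!funcName) continue;
      // Every call expression inside the function body becomes a 'calls' edge
      for (const call of func.getDescendantsOfKind(SyntaxKind.CallExpression)) {
        edges.push({
          from: `${moduleName}.${funcName}`,
          to: call.getExpression().getText(), // e.g. 'authenticate' or 'auth.verifyToken'
          type: 'calls',
        });
      }
    }
  }
  return edges;
}
Merging these edges into the graph returned by analyzeArchitecture is what makes a query like “what functions call authenticate()?” answerable.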
Pattern Examples Finder
// analyzers/patterns.ts
import { Project, SyntaxKind } from 'ts-morph';
interface PatternExample {
file: string;
name: string;
code: string;
lineNumber: number;
}
export async function findPatternExamples(pattern: string): Promise<PatternExample[]> {
const project = new Project({
tsConfigFilePath: 'tsconfig.json',
});
const examples: PatternExample[] = [];
if (pattern === 'factory-functions') {
// Find all factory function patterns
for (const sourceFile of project.getSourceFiles()) {
const functions = sourceFile.getFunctions();
for (const func of functions) {
const name = func.getName();
// Factory pattern: starts with 'create' and returns object
if (name?.startsWith('create')) {
const returnType = func.getReturnType();
if (returnType.isObject()) {
examples.push({
file: sourceFile.getFilePath(),
name,
code: func.getText(),
lineNumber: func.getStartLineNumber(),
});
}
}
}
}
}
if (pattern === 'result-types') {
// Find all Result<T, E> patterns
for (const sourceFile of project.getSourceFiles()) {
const typeAliases = sourceFile.getTypeAliases();
for (const typeAlias of typeAliases) {
const name = typeAlias.getName();
if (name.endsWith('Result')) {
examples.push({
file: sourceFile.getFilePath(),
name,
code: typeAlias.getText(),
lineNumber: typeAlias.getStartLineNumber(),
});
}
}
}
}
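  // Sketch (not in the original finder): Use Case 2 below queries
  // pattern-examples://error-handling. One possible heuristic, assumed here,
  // is to treat any function containing a try/catch block as an example.
  if (pattern === 'error-handling') {
    for (const sourceFile of project.getSourceFiles()) {
      for (const func of sourceFile.getFunctions()) {
        if (func.getDescendantsOfKind(SyntaxKind.TryStatement).length > 0) {
          examples.push({
            file: sourceFile.getFilePath(),
            name: func.getName() ?? '<anonymous>',
            code: func.getText(),
            lineNumber: func.getStartLineNumber(),
          });
        }
      }
    }
  }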
return examples;
}
Git History Analyzer
// analyzers/git.ts
import { simpleGit } from 'simple-git';
interface GitChange {
commit: string;
author: string;
date: string;
message: string;
filesChanged: string[];
}
export async function getGitHistory(period: string): Promise<GitChange[]> {
const git = simpleGit();
// Parse period (e.g., 'last-week', 'last-month')
const periodMatch = period.match(/last-(\w+)/);
if (!periodMatch) {
throw new Error(`Invalid period format: ${period}`);
}
const unit = periodMatch[1];
let since: string;
switch (unit) {
case 'day':
since = '1 day ago';
break;
case 'week':
since = '1 week ago';
break;
case 'month':
since = '1 month ago';
break;
default:
throw new Error(`Unknown period unit: ${unit}`);
}
const log = await git.log({ '--since': since });
const changes: GitChange[] = [];
for (const commit of log.all) {
const diff = await git.show([commit.hash, '--name-only', '--format=']);
const filesChanged = diff.split('\n').filter(line => line.trim());
changes.push({
commit: commit.hash,
author: commit.author_name,
date: commit.date,
message: commit.message,
filesChanged,
});
}
return changes;
}
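Step 2 also imports getCoverageMetrics and getPerformanceMetrics, which are not shown here. As one possible implementation, the sketch below reads Istanbul's coverage-summary.json (written by Jest or nyc with the json-summary reporter) and returns per-file line coverage in the shape Use Case 4 expects; the report path and format are assumptions about your test tooling, and the performance analyzer would follow the same pattern against whatever metrics store you use.
// analyzers/coverage.ts (sketch)
import { readFile } from 'fs/promises';
import path from 'path';

interface ModuleCoverage {
  name: string;      // file path as reported by the coverage tool
  coverage: number;  // line coverage percentage
}

// Assumes `jest --coverage --coverageReporters=json-summary` (or nyc) has written
// coverage/coverage-summary.json under the project root.
export async function getCoverageMetrics(module: string): Promise<ModuleCoverage[]> {
  const summaryPath = path.join(process.env.PROJECT_ROOT ?? '.', 'coverage', 'coverage-summary.json');
  const summary = JSON.parse(await readFile(summaryPath, 'utf-8'));

  const results: ModuleCoverage[] = [];
  for (const [file, metrics] of Object.entries(summary)) {
    if (file === 'total') continue;                            // skip the aggregate entry
    if (module !== 'all' && !file.includes(module)) continue;  // filter by module name
    results.push({ name: file, coverage: (metrics as any).lines.pct });
  }
  return results;
}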
Step 4: Configure MCP Server in Claude Code
Add to Claude Code configuration:
// .mcp.json (project root)
{
"mcpServers": {
"project-context": {
"command": "node",
"args": ["/path/to/your/project/mcp-server/dist/index.js"],
"env": {
"PROJECT_ROOT": "/path/to/your/project"
}
}
}
}
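The config passes PROJECT_ROOT as an environment variable, so the server should resolve paths against it rather than against whatever working directory Claude Code launches it from. A minimal sketch; the fallback to process.cwd() is an assumption for local testing:
// mcp-server/src/config.ts (sketch)
import path from 'path';

// Project root comes from the env block in the Claude Code config above.
export const PROJECT_ROOT = process.env.PROJECT_ROOT ?? process.cwd();

// Analyzers build absolute paths from it instead of relying on relative paths:
export const TSCONFIG_PATH = path.join(PROJECT_ROOT, 'tsconfig.json');
The ts-morph analyzers above would then pass TSCONFIG_PATH instead of the relative 'tsconfig.json'.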
Step 5: Use in AI Prompts
Now AI agents can query specific context:
# Task: Refactor authentication module
Before implementing:
1. Query architecture graph: architecture-graph://auth
2. Find existing patterns: pattern-examples://factory-functions
3. Check recent changes: recent-changes://last-week
4. Review test coverage: test-coverage://auth
Then implement the refactor following existing patterns.
Use Cases
Use Case 1: Understanding Dependencies
Question: “What functions call authenticate()?”
MCP Query:
const graph = await mcp.read('architecture-graph://auth');
// Filter edges where target is 'authenticate'
const callers = graph.edges
.filter(edge => edge.to === 'authenticate' && edge.type === 'calls')
.map(edge => edge.from);
console.log('Functions calling authenticate():', callers);
// ['loginHandler', 'apiMiddleware', 'websocketAuth']
Static CLAUDE.md: Can’t answer—would need to manually document all call sites and keep updated
Use Case 2: Finding Similar Code
Question: “Show me how other services handle errors”
MCP Query:
const examples = await mcp.read('pattern-examples://error-handling');
examples.forEach(ex => {
console.log(`File: ${ex.file}`);
console.log(`Example:\n${ex.code}`);
});
Static CLAUDE.md: Contains generic examples, not actual code from your project
Use Case 3: Impact Analysis
Question: “What changed in the API layer this week?”
MCP Query:
const changes = await mcp.read('recent-changes://last-week');
const apiChanges = changes.filter(change =>
change.filesChanged.some(file => file.includes('src/api/'))
);
console.log(`${apiChanges.length} commits affected API layer`);
apiChanges.forEach(change => {
console.log(`${change.author}: ${change.message}`);
});
Static CLAUDE.md: Would need manual updates after every commit
Use Case 4: Identifying Coverage Gaps
Question: “Which modules need more tests?”
MCP Query:
const coverage = await mcp.read('test-coverage://all');
const lowCoverage = coverage
.filter(module => module.coverage < 80)
.sort((a, b) => a.coverage - b.coverage);
console.log('Modules needing more tests:');
lowCoverage.forEach(module => {
console.log(`${module.name}: ${module.coverage}%`);
});
Static CLAUDE.md: Can’t provide real-time test metrics
Use Case 5: Performance Optimization
Question: “Which API endpoints are slowest?”
MCP Query:
const metrics = await mcp.read('performance-metrics://api');
const slowest = metrics
.sort((a, b) => b.p95_latency - a.p95_latency)
.slice(0, 10);
console.log('Slowest endpoints (p95 latency):');
slowest.forEach(endpoint => {
console.log(`${endpoint.path}: ${endpoint.p95_latency}ms`);
});
Static CLAUDE.md: Can’t access runtime performance data
Benefits
Benefit 1: Always Up-to-Date
MCP server queries live codebase state:
Static CLAUDE.md:
Written: January 2025
Accuracy: Degrades over time
Maintenance: Manual updates required
MCP Server:
Generated: On-demand
Accuracy: Always reflects current state
Maintenance: Zero—automatic
Benefit 2: Targeted Context Loading
Load only what you need:
Static approach:
Load all docs (15K tokens)
Relevance: 10% used
Cost: High
MCP approach:
Query specific resources (2K tokens)
Relevance: 100% used
Cost: Low (87% reduction)
Benefit 3: Rich, Structured Data
MCP can return complex data structures:
// architecture-graph://auth response
{
"nodes": [
{ "id": "authenticate", "type": "function", "file": "auth.ts" },
{ "id": "verifyToken", "type": "function", "file": "auth.ts" },
{ "id": "loginHandler", "type": "function", "file": "handlers.ts" }
],
"edges": [
{ "from": "loginHandler", "to": "authenticate", "type": "calls" },
{ "from": "authenticate", "to": "verifyToken", "type": "calls" }
]
}
Static CLAUDE.md can describe this but can’t generate it dynamically.
Benefit 4: Real-Time Metrics
Access live data:
- Test coverage percentages
- API response times
- Error rates
- Git activity
- Bundle sizes
- Dependency versions
Benefit 5: Queryable Knowledge Graph
Think of MCP server as a knowledge graph API for your codebase:
Query: "Show all factory functions in auth module"
→ pattern-examples://factory-functions?module=auth
Query: "What changed in last 24 hours?"
→ recent-changes://last-day
Query: "Which files import user-service.ts?"
→ architecture-graph://project?target=user-service
Best Practices
1. Cache Expensive Queries
Some queries are expensive—cache results:
const cache = new Map<string, { data: any; timestamp: number }>();
const CACHE_TTL = 5 * 60 * 1000; // 5 minutes
async function cachedQuery(uri: string, generator: () => Promise<any>) {
const cached = cache.get(uri);
if (cached && Date.now() - cached.timestamp < CACHE_TTL) {
return cached.data;
}
const data = await generator();
cache.set(uri, { data, timestamp: Date.now() });
return data;
}
// Use in resource handler
const graph = await cachedQuery(
uri,
() => analyzeArchitecture(target)
);
2. Provide Multiple Granularities
Offer both high-level and detailed views:
// High-level: just module names
architecture-graph://project?detail=low
// Medium: modules + exports
architecture-graph://project?detail=medium
// Detailed: full dependency graph
architecture-graph://project?detail=high
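A sketch of how the handler might honor the detail level once it has been parsed from the URI (see the parameter-parsing sketch under practice 4 below); exactly which nodes and edges each level keeps is an assumption:
// Sketch: prune the full graph to the requested detail level.
function pruneGraph(graph: ArchitectureGraph, detail: 'low' | 'medium' | 'high'): ArchitectureGraph {
  if (detail === 'high') return graph;                    // full dependency graph
  const nodes = detail === 'low'
    ? graph.nodes.filter(n => n.type === 'module')        // just module names
    : graph.nodes;                                        // modules + functions
  const kept = new Set(nodes.map(n => n.id));
  // Drop edges whose endpoints were pruned away
  const edges = detail === 'low' ? [] : graph.edges.filter(e => kept.has(e.from) && kept.has(e.to));
  return { nodes, edges };
}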
3. Include Documentation in Responses
Help AI agents understand the data:
{
"_meta": {
"description": "Architecture graph showing module dependencies",
"nodes": "Each node represents a module or function",
"edges": "Each edge represents a dependency relationship"
},
"nodes": [...],
"edges": [...]
}
4. Support Filters and Parameters
Make queries flexible:
// Filter by author
recent-changes://last-week?author=alice
// Filter by pattern
pattern-examples://factory-functions?module=auth
// Limit results
performance-metrics://api?limit=10&sort=latency
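Custom-scheme URIs like these parse cleanly with Node's built-in URL class, so the handler can read filters without hand-rolled string splitting. A minimal sketch, assuming the parameter names shown above:
// Sketch: split a resource URI into scheme, target, and query parameters.
// new URL() accepts custom schemes such as performance-metrics://api?limit=10&sort=latency
function parseResourceUri(uri: string) {
  const parsed = new URL(uri);
  return {
    scheme: parsed.protocol.replace(':', ''), // 'performance-metrics'
    target: parsed.host + parsed.pathname,    // 'api'
    params: parsed.searchParams,              // limit=10, sort=latency
  };
}

// Usage in the read handler:
const { params } = parseResourceUri('performance-metrics://api?limit=10&sort=latency');
const limit = Number(params.get('limit') ?? 10);
const sortBy = params.get('sort') ?? 'latency';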
5. Combine with Static Documentation
Use both MCP and CLAUDE.md:
CLAUDE.md: Principles, patterns, standards (rarely change)
MCP Server: Examples, metrics, current state (change frequently)
# CLAUDE.md
## Factory Function Pattern
We use factory functions instead of classes.
For real examples from our codebase, query:
pattern-examples://factory-functions
Integration with Other Patterns
Combine with Hierarchical CLAUDE.md
Hierarchical CLAUDE.md provides principles, MCP provides examples:
# packages/auth/CLAUDE.md
## Authentication Pattern
All auth functions return AuthResult:
type AuthResult = {
success: boolean;
user?: User;
error?: string;
}
For real examples, query:
pattern-examples://auth-result
Combine with Knowledge Graph Retrieval
MCP server IS a knowledge graph implementation:
Knowledge Graph = MCP Server
├─ Resources = Graph Queries
├─ Relationships = Edges in graph
└─ Entities = Nodes in graph
Combine with Semantic Naming
Use semantic resource URIs:
// ✅ Semantic, self-documenting
pattern-examples://factory-functions
architecture-graph://auth-module
recent-changes://last-week
// ❌ Opaque identifiers
resource://1234
query://abc
data://xyz
Common Pitfalls
❌ Pitfall 1: Slow Queries
Problem: Analyzing entire codebase on every query
Solution: Cache results and use incremental analysis
// ❌ Slow: re-analyze everything
async function analyzeArchitecture() {
const project = new Project({ tsConfigFilePath: 'tsconfig.json' });
// Analyzes all 5000 files...
}
// ✅ Fast: incremental + cached
let cachedProject: Project | null = null;
async function analyzeArchitecture() {
if (!cachedProject) {
cachedProject = new Project({ tsConfigFilePath: 'tsconfig.json' });
}
// Reuses the parsed project; refresh changed files explicitly (see the sketch below)
}
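One way to keep the cached project from going stale, sketched below, is to refresh only the source files whose modification time changed since the last query; ts-morph's refreshFromFileSystem re-parses a single file in place. The mtime bookkeeping is an assumption about how staleness is tracked, and newly added or deleted files are not handled here:
// Sketch: refresh only files that changed on disk since the last analysis.
import { statSync } from 'fs';
import { Project } from 'ts-morph';

const lastSeen = new Map<string, number>(); // file path -> mtime in ms

async function refreshChangedFiles(project: Project): Promise<void> {
  for (const sourceFile of project.getSourceFiles()) {
    const filePath = sourceFile.getFilePath();
    const mtime = statSync(filePath).mtimeMs;
    if (lastSeen.get(filePath) !== mtime) {
      await sourceFile.refreshFromFileSystem(); // re-parse just this file
      lastSeen.set(filePath, mtime);
    }
  }
}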
❌ Pitfall 2: Exposing Sensitive Data
Problem: MCP server returns API keys, secrets, or PII
Solution: Filter sensitive information
const SENSITIVE_PATTERNS = [
  /API_KEY\s*[:=]\s*\S+/gi,
  /SECRET\s*[:=]\s*\S+/gi,
  /PASSWORD\s*[:=]\s*\S+/gi,
  /TOKEN\s*[:=]\s*\S+/gi,
];
function sanitize(code: string): string {
  let sanitized = code;
  for (const pattern of SENSITIVE_PATTERNS) {
    // Global, case-insensitive patterns redact the assigned value, not just the first keyword match
    sanitized = sanitized.replace(pattern, '***REDACTED***');
  }
  return sanitized;
}
❌ Pitfall 3: Returning Too Much Data
Problem: Returning 10MB JSON response
Solution: Paginate and limit results
interface QueryOptions {
limit?: number;
offset?: number;
}
async function findPatternExamples(
pattern: string,
options: QueryOptions = {}
): Promise<PatternExample[]> {
const { limit = 10, offset = 0 } = options;
const allExamples = await findAllExamples(pattern);
return allExamples.slice(offset, offset + limit);
}
❌ Pitfall 4: No Error Handling
Problem: MCP server crashes on invalid queries
Solution: Validate inputs and return errors gracefully
import { McpError, ErrorCode } from '@modelcontextprotocol/sdk/types.js';

server.setRequestHandler(ReadResourceRequestSchema, async (request) => {
  const uri = request.params.uri;
  // Validate URI format before doing any work
  if (!isValidUri(uri)) {
    throw new McpError(ErrorCode.InvalidParams, `Invalid URI format: ${uri}`);
  }
  try {
    // Handle query...
  } catch (error) {
    // Surface analyzer failures as protocol errors instead of crashing the server
    throw new McpError(
      ErrorCode.InternalError,
      `Failed to process query: ${error instanceof Error ? error.message : String(error)}`
    );
  }
});
Measuring Success
Metrics to Track
1. Query Frequency
pattern-examples://factory-functions: 45 queries/week
architecture-graph://auth: 23 queries/week
recent-changes://last-week: 67 queries/week
Most valuable resources = Most queried
2. Context Size Reduction
Before MCP: Average 15K tokens per request
After MCP: Average 4K tokens per request
Reduction: 73%
3. AI Agent Success Rate
Before MCP: 72% success rate (tasks completed correctly)
After MCP: 89% success rate
Improvement: 24% relative increase
4. Query Latency
Target: <500ms for most queries
architecture-graph://project: 245ms ✓
pattern-examples://factory-functions: 123ms ✓
recent-changes://last-week: 89ms ✓
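Query frequency and latency can be captured inside the server itself. The sketch below wraps a resource handler with timing and appends one JSON line per query; the log file name and entry shape are assumptions:
// Sketch: record per-query latency and frequency as JSON lines.
import { appendFile } from 'fs/promises';

async function withMetrics<T>(uri: string, handler: () => Promise<T>): Promise<T> {
  const start = Date.now();
  try {
    return await handler();
  } finally {
    const entry = { uri, ms: Date.now() - start, at: new Date().toISOString() };
    // One line per query; aggregate later to find the most-used and slowest resources
    await appendFile('mcp-query-log.jsonl', JSON.stringify(entry) + '\n');
  }
}

// Usage inside the read handler:
// const graph = await withMetrics(uri, () => analyzeArchitecture(target));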
Future Enhancements
1. Semantic Search Resource
code-search://semantic?query="authentication error handling"
// Returns semantically similar code, not just keyword matches
2. Code Generation Templates
template://generate?pattern=factory-function&name=createUserService
// Returns scaffold code based on existing patterns
3. Dependency Impact Analysis
impact-analysis://change?file=auth.ts&function=authenticate
// Returns all code affected by changing this function
4. Historical Metrics
performance-metrics://api/trends?period=last-month
// Returns performance trends over time
Conclusion
MCP servers transform static documentation into queryable, dynamic knowledge.
Static CLAUDE.md:
- ✅ Good for: Principles, patterns, standards
- ❌ Bad for: Current state, examples, metrics
MCP Server:
- ✅ Good for: Current state, real examples, live metrics
- ❌ Bad for: General principles (better in docs)
Best approach: Combine both:
- CLAUDE.md: Timeless principles and patterns
- MCP Server: Current examples and state
- AI Agent: Queries MCP for specifics, reads CLAUDE.md for principles
The result: AI agents with access to both how things should be (CLAUDE.md) and how things actually are (MCP server)—the perfect combination for high-quality code generation.
Related Concepts
- Hierarchical Context Patterns – CLAUDE.md files complement MCP with static principles
- Context Debugging Framework – MCP servers address Layer 1 context issues
- Context Rot Auto-Compacting – MCP provides fresh context avoiding rot
- Progressive Disclosure Context – MCP enables on-demand context loading
- Clean Slate Trajectory Recovery – Fresh MCP queries for new trajectories
- Sliding Window History – Bounded state for MCP server caching
- Semantic Naming Patterns – Semantic resource URIs for discoverability
- Prompt Caching Strategy – Cache MCP responses for cost efficiency
- Information Theory in Coding Agents – Theoretical foundation for context management
- Building the Factory – MCP servers are high-leverage meta-infrastructure
- Sub-Agent Architecture – MCP servers provide dynamic context to specialized agents
References
- Model Context Protocol Specification – Official MCP specification and documentation
- MCP TypeScript SDK – TypeScript SDK for building MCP servers
- ts-morph Documentation – TypeScript compiler API wrapper for code analysis
- simple-git Documentation – Git integration for Node.js

