The art of structuring information for LLM agents to maximize both token efficiency and comprehension.
Articles
Foundations
- 12 Factor Agents – Principles for building production-ready LLM agents
- Writing a Good CLAUDE.md – Crafting effective agent instructions
- Building the Harness – The four-layer harness around Claude Code
- Building the Factory – Automation stages for productivity
- The RALPH Loop – Fresh context iteration for compound development
- Agent Reliability Chasm – Why 95% of agent PoCs fail in production
- Six Waves of AI Coding – Evolution from completions to agent fleets
- Information Theory for Coding Agents – Signal, entropy, and token efficiency
Agent Patterns
- Agent Swarm Patterns – Multiple agents × multiple runs for confidence
- Agent Capabilities: Tools & Eyes – MCPs, CLIs, linters as capability multipliers
- Sub-agents: Accuracy vs Latency – When to delegate for higher accuracy
- Sub-Agent Architecture – Implementation patterns for agent delegation
- Meta-Questions for Recursive Agents – Universal questions for self-verifying agents
- Agent-Native Architecture – Designing software where agents are first-class citizens
- Cursor Agent Workflows – Practical patterns for coding agent effectiveness
- Parallel Agents for Monorepos – Scaling agents across large codebases
- Agentic Tool Detection – Detecting and leveraging available tools
- Actor-Critic Adversarial Coding – Adversarial patterns for code quality
Prompting Techniques
- Chain of Thought Prompting – Step-by-step reasoning for complex tasks
- Constraint-Based Prompting – WHAT vs HOW instruction style and scope management
- Few-Shot Prompting with Project Examples – Real examples from your codebase
- Layered Prompts Architecture – Hierarchical prompt organization
- Multi-Step Prompt Workflows – Chaining prompts for complex tasks
- Upfront Questioning Narrows Search Space – Ask questions before implementing vague specs
Context Management
- Context-Efficient Backpressure – Managing verbose output for coding agents
- Context Rot & Auto-Compacting – Preventing context degradation
- Context Debugging Framework – Diagnosing context-related failures
- Progressive Disclosure of Context – Load only relevant context for current task
- Hierarchical Context Patterns – Organizing context at multiple levels
- Hierarchical Rule Files Collocation – Co-locating rules with code
- Sliding Window History – Managing conversation history efficiently
- FP Increases LLM Signal – Errors as values give LLMs complete signal
- ADRs for Agent Context – Architecture decisions as agent guardrails
- DDD Bounded Contexts for LLMs – Domain-driven design for agent scope
Verification & Testing
- Constraint-First Development – Define what must be true, let the system find how
- The Verification Ladder – Six levels from types to formal proofs
- Verification Sandwich Pattern – Before/after code generation checks
- Stateless Verification Loops – Repeatable verification without state
- Trust But Verify Protocol – Structured verification for LLM output
- Property-Based Testing – Testing invariants for generated code
- Invariants in LLM Code Generation – Constraint spaces for correctness
- Making States Illegal – Prevention-based enforcement
- Test-Driven Prompting – Write tests before code generation
- Test-Based Regression Patching – Using tests to guide fixes
- Test Custom Infrastructure – Testing agent infrastructure
- Integration Testing Patterns – End-to-end testing for agents
- Evaluation-Driven Development – AI vision for qualitative evaluation
- Closed-Loop Telemetry Optimization – OTEL as control input
Quality Gates & Linting
- Quality Gates as Information Filters – Gates that filter signal from noise
- Compounding Effects of Quality Gates – Multiplicative benefits over time
- Claude Code Hooks as Quality Gates – Automated pre/post checks
- LLM Code Review in CI – Automated PR reviews via GitHub Actions
- Early Linting Prevents Ratcheting – Catch issues before they compound
- Custom ESLint Rules for Determinism – Enforcing patterns via linting
- AST-Grep for Precision – Structural code search and transforms
Development Workflows
- Learning Loops – Encode problems into prevention
- Prompts Are the Asset – Preserve conversations, not just code
- Ad-hoc to Scripts – Convert repeated flows to deterministic execution
- Highest Leverage: Plans & Validation – Where engineers have maximum impact
- Incremental Development Pattern – Small validated increments
- Plan Mode Strategic Use – When and how to use planning modes
- LLM Usage Modes: Explore vs Implement – Different modes for different tasks
- Git Worktrees for Parallel Dev – Multiple workspaces for agent tasks
- 24/7 Development Strategy – Continuous development with agents
- YOLO Mode Configuration – When to remove guardrails
- Clean Slate Trajectory Recovery – Recovering from bad agent states
Error Handling & Debugging
- Error Messages as Training – Errors that teach the agent
- Five-Point Error Diagnostic Framework – Structured error analysis
- Flaky Test Diagnosis Script – Automated flaky test detection
- Prevention Protocol – Systematic error prevention
- Negative Examples Documentation – What NOT to do
Model & Provider Strategy
- Model Switching Strategy – When to use which model
- Model Provider Agnostic Approach – Avoiding vendor lock-in
- Prompt Caching Strategy – Optimizing for cache hits
- LLM as Recursive Function – Mental model for LLM behavior
- Entropy in Code Generation – Managing randomness in output
Infrastructure & Tooling
- MCP Server for Project Context – Model Context Protocol integration
- Symlinked Agent Configs – Sharing configs across projects
- Boundary Enforcement in Layered Architecture – Enforcing architectural boundaries between layers
- Institutional Memory via Learning Files – Persistent knowledge across sessions
- AI Cost Protection & Timeouts – Budget controls for agents
- AI Workflow Notifications – Alerting on agent activity
- Playwright Script Loop – Browser automation for agents
Philosophy & Identity
- The Meta-Engineer Identity – Building systems that build systems
- Skill Atrophy – What to keep sharp, what to let go
- Human-First DX Philosophy – Developer experience for agent-assisted coding
- Semantic Naming Patterns – Names that convey intent
- One-Way Pattern Consistency – Single patterns for clarity
- Zero Friction Onboarding – Fast starts for agents and humans
Planning & Refinement
- Meta-Ticket Refinement – Breaking down complex tasks
- Type-Driven Development – Types as specifications
Core Principles
- Own your context window – Structure information deliberately, don’t rely on framework defaults
- Deterministic beats non-deterministic – Use code to control what you can, reserve LLMs for decisions
- Small, focused agents – Scope agents to 3-20 steps; performance degrades with context growth
- Progressive disclosure – Load only relevant context for the current task
- Backpressure on output – Compress verbose output; only show errors in full (see the sketch after this list)
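
As a minimal illustration of the backpressure principle, here is a hypothetical TypeScript/Node helper (not taken from any of the linked articles): it runs a verbose command but hands the agent a one-line summary on success, and only the tail of the output on failure, where compiler and test errors usually live.

```ts
// Hypothetical helper illustrating output backpressure for a coding agent.
import { spawnSync } from "node:child_process";

function runWithBackpressure(cmd: string, args: string[], maxLines = 20): string {
  const result = spawnSync(cmd, args, { encoding: "utf8" });

  if (result.status === 0) {
    // Happy path: a single line instead of thousands of log lines.
    return `OK: ${cmd} ${args.join(" ")} (exit 0)`;
  }

  // Error path: keep the signal, the last lines of combined output, in full.
  const combined = `${result.stdout ?? ""}\n${result.stderr ?? ""}`;
  const tail = combined.trim().split("\n").slice(-maxLines).join("\n");
  return `FAILED: ${cmd} ${args.join(" ")} (exit ${result.status})\n${tail}`;
}

// Example: a test run that would otherwise flood the context window.
console.log(runWithBackpressure("npm", ["test", "--silent"]));
```

The same idea applies to any tool an agent shells out to: compress the happy path, keep the error path complete.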
Key Insight
“Most ‘AI agents’ in production aren’t pure agentic systems. They’re predominantly deterministic code with targeted LLM decision-making.”
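
A minimal sketch of that shape, assuming a TypeScript codebase; `Ticket`, `Route`, `callLLM`, and `routeTicket` are hypothetical names used only for illustration, not an API from any of the linked articles. Plain code handles parsing, validation, and routing; the model is consulted for exactly one narrow decision, and its answer is validated before it is acted on.

```ts
// Hypothetical sketch of a "mostly deterministic" agent workflow:
// code does the parsing, validation, and routing; the LLM makes
// exactly one narrow, validated decision.
type Ticket = { title: string; body: string };
type Route = "bug" | "feature" | "question";

// Placeholder for whichever model client you use.
declare function callLLM(prompt: string): Promise<string>;

async function routeTicket(raw: string): Promise<Route> {
  // Deterministic: parse and validate input in code.
  const ticket: Ticket = JSON.parse(raw);
  if (!ticket.title || !ticket.body) throw new Error("invalid ticket");

  // Deterministic: cheap rules handle the obvious cases without a model call.
  if (/stack trace|exception|crash/i.test(ticket.body)) return "bug";

  // Targeted LLM decision: one constrained question, one-word answer expected.
  const answer = await callLLM(
    `Classify this ticket as exactly one of: bug, feature, question.\n\n` +
      `${ticket.title}\n${ticket.body}`
  );

  // Deterministic: never trust free-form output; validate before acting on it.
  const route = answer.trim().toLowerCase();
  if (route === "bug" || route === "feature" || route === "question") return route;
  return "question"; // safe fallback keeps downstream behaviour predictable
}
```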
Sources
- Foundation articles adapted from HumanLayer
- Cursor Blog – Agent best practices
- Sourcegraph Blog – Steve Yegge on AI coding evolution
- Every.to – Agent-native architecture guide
- Vinci Rufus – Compound engineering and agent reliability
- Agentic Patterns – 150+ production-ready agent patterns by @nibzard
- Additional articles by James Phoenix
Related
- Thought Leaders – People to follow in compound engineering

