Context Engineering

James Phoenix

The art of structuring information for LLM agents to maximize both token efficiency and comprehension.



Core Principles

  1. Own your context window – Structure information deliberately, don’t rely on framework defaults
  2. Deterministic beats non-deterministic – Use code to control what you can, reserve LLMs for decisions
  3. Small, focused agents – Scope agents to 3-20 steps; performance degrades with context growth
  4. Progressive disclosure – Load only relevant context for the current task
  5. Backpressure on output – Compress verbose output; only show errors in full
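Principle 5 can be sketched concretely. The snippet below is a minimal, hypothetical illustration (the `MAX_CHARS` budget and the `compress_tool_output` helper are assumptions, not an API from the article): verbose success output is truncated before it enters the context window, while errors pass through in full because the agent needs them verbatim to act.

```python
# Hypothetical sketch of "backpressure on output": bound the size of
# successful tool results before they reach the model's context, but
# never truncate errors, which carry the signal the agent acts on.

MAX_CHARS = 500  # assumed per-result budget; tune for your token limits

def compress_tool_output(output: str, is_error: bool) -> str:
    """Return errors in full; clip oversized success output."""
    if is_error:
        return output  # show errors in full
    if len(output) <= MAX_CHARS:
        return output
    omitted = len(output) - MAX_CHARS
    # Leave a breadcrumb so the agent knows the result was clipped.
    return f"{output[:MAX_CHARS]}\n... [{omitted} chars truncated]"

# Usage: a 6,000-char build log is clipped to the budget plus a marker;
# a traceback is untouched.
log = "ok\n" * 2000
print(len(compress_tool_output(log, is_error=False)))
print(compress_tool_output("Traceback: KeyError 'id'", is_error=True))
```

The asymmetry is the point: success output is usually redundant confirmation, while error output is the one thing the agent must read closely.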

Key Insight

“Most ‘AI agents’ in production aren’t pure agentic systems. They’re predominantly deterministic code with targeted LLM decision-making.”
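A rough sketch of what that quote describes, under stated assumptions: the workflow is ordinary deterministic code, and only one narrow, genuinely fuzzy decision is delegated to a model. `call_llm` here is a hypothetical stand-in for any provider client, and the ticket-routing scenario is invented for illustration.

```python
# Illustrative only: deterministic code with a single, targeted LLM
# decision point. Everything that can be decided by code is.

def call_llm(prompt: str) -> str:
    # Placeholder for a real model call; returns a fixed label here
    # so the sketch is runnable without a provider.
    return "refund"

def handle_ticket(ticket: dict) -> str:
    # Deterministic guards first: code controls what it can.
    if not ticket.get("body"):
        return "closed:empty"
    if ticket.get("vip"):
        return "escalated:human"
    # Reserve the LLM for the one fuzzy judgment: intent classification.
    intent = call_llm(f"Classify intent (refund/bug/other): {ticket['body']}")
    routes = {"refund": "queue:billing", "bug": "queue:engineering"}
    return routes.get(intent, "queue:triage")  # code owns the routing table

print(handle_ticket({"body": "I want my money back"}))
```

Note that the model's output feeds a fixed lookup table rather than free-form control flow, so an unexpected label degrades safely to triage instead of derailing the system.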


Topics: Agent Reliability, Context Engineering, Information Theory, LLM, Token Efficiency


