Context Engineering

James Phoenix

The art of structuring information for LLM agents to maximize both token efficiency and comprehension.


Core Principles

  1. Own your context window – Structure information deliberately, don’t rely on framework defaults
  2. Deterministic beats non-deterministic – Use code to control what you can, reserve LLMs for decisions
  3. Small, focused agents – Scope agents to 3-20 steps; performance degrades with context growth
  4. Progressive disclosure – Load only relevant context for the current task
  5. Backpressure on output – Compress verbose output; only show errors in full
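The backpressure principle above can be sketched as a small output filter. This is a hypothetical helper, not from the article: it truncates verbose success output while passing errors through untouched so the agent can still diagnose failures.

```python
def compress_output(result: str, max_chars: int = 500, is_error: bool = False) -> str:
    """Apply backpressure: truncate verbose success output, keep errors intact."""
    if is_error:
        return result  # errors are shown in full so the agent can diagnose them
    if len(result) <= max_chars:
        return result
    # Keep the head and tail of the output and elide the middle,
    # which preserves the parts most likely to carry signal.
    head, tail = result[: max_chars // 2], result[-(max_chars // 2):]
    omitted = len(result) - len(head) - len(tail)
    return f"{head}\n… [{omitted} chars omitted] …\n{tail}"
```

A wrapper like this sits between tool execution and the context window, so the model never pays tokens for output the task doesn't need.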

Key Insight

“Most ‘AI agents’ in production aren’t pure agentic systems. They’re predominantly deterministic code with targeted LLM decision-making.”
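One way to read that quote in code: a minimal sketch (all names hypothetical, not from the article) where validation and routing are plain deterministic code, and the LLM is consulted for exactly one narrow decision.

```python
def process_ticket(ticket: dict, llm_classify) -> str:
    """Mostly deterministic pipeline with one targeted LLM decision.

    `llm_classify` is a hypothetical callable wrapping a model call that
    returns one of: "refund", "bug_report", "question".
    """
    # Deterministic: validation and short-circuit rules run as plain code.
    if not ticket.get("body"):
        return "rejected:empty"
    if ticket.get("amount", 0) == 0 and "refund" in ticket["body"].lower():
        return "closed:no_charge"
    # Non-deterministic: a single, narrowly scoped LLM classification.
    category = llm_classify(ticket["body"])
    # Deterministic again: map the decision back onto fixed code paths.
    routes = {
        "refund": "queue:billing",
        "bug_report": "queue:eng",
        "question": "queue:support",
    }
    return routes.get(category, "queue:triage")
```

Everything before and after the model call is testable without an LLM in the loop, which is what makes this shape reliable in production.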

