Agent-Native Architecture

James Phoenix

Designing software where AI agents are first-class citizens, not bolted-on features.


Core Thesis

A well-designed coding agent is actually a general-purpose agent. The same loop architecture that enables autonomous code refactoring can orchestrate any domain-specific task.

“Features are outcomes achieved by an agent operating in a loop—not code you ship.”

Traditional software: developers choreograph all behavior.
Agent-native software: developers describe desired outcomes; agents pursue them with judgment.


The Five Principles

1. Parity

“Whatever the user can do through the UI, the agent should be able to achieve through tools.”

This is non-negotiable. If UI capabilities don’t translate to agent capabilities, the architecture fails.

Parity audit: List every UI action. Verify agents can achieve each outcome through available tools.
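A parity audit can be as simple as a table mapping each UI action to the tools that achieve it, then flagging actions with no agent-reachable path. A minimal sketch, where the action and tool names are illustrative, not from any real system:

```python
# Minimal parity audit: every UI action must map to tools the agent
# actually has. Action and tool names here are hypothetical examples.
UI_ACTIONS = {
    "create_note": ["write_file"],
    "archive_note": ["move_file"],
    "search_notes": [],  # no tool yet -- a parity gap
}
AGENT_TOOLS = {"read_file", "write_file", "move_file", "bash"}

def parity_gaps(ui_actions, agent_tools):
    """Return UI actions an agent cannot achieve with current tools."""
    return sorted(
        action for action, tools in ui_actions.items()
        if not tools or not set(tools) <= agent_tools
    )

print(parity_gaps(UI_ACTIONS, AGENT_TOOLS))  # search_notes has no tool path
```

Running the audit on every release keeps the UI and the agent's capabilities from drifting apart.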


2. Granularity

“Tools should be atomic primitives. Features are outcomes achieved by an agent operating in a loop.”

| Anti-Pattern (Less Granular) | Proper Approach (More Granular) |
| --- | --- |
| `classify_and_organize_files(files)` | `read_file`, `write_file`, `move_file`, `bash` |
| You wrote the decision logic | Agent makes decisions |
| Agent executes your choreography | You edit prompts to change behavior |

Key insight: Bundling decision logic into tools moves judgment back into code, defeating the agent-native approach.
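What atomic primitives look like in practice, as a sketch: each tool does exactly one thing and returns data, and the agent decides what to call next. The function names match the table above; the implementations are minimal illustrations.

```python
import shutil
from pathlib import Path

# Atomic primitives: one operation each, no embedded decision logic.
def read_file(path: str) -> str:
    return Path(path).read_text()

def write_file(path: str, content: str) -> None:
    Path(path).write_text(content)

def move_file(src: str, dest: str) -> None:
    Path(dest).parent.mkdir(parents=True, exist_ok=True)
    shutil.move(src, dest)

# Contrast: a classify_and_organize_files(files) tool would hard-code
# the classification rules, moving judgment out of the agent and back
# into shipped code.
```

Because none of these tools knows *why* it is being called, changing the organization scheme means editing a prompt, not redeploying code.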


3. Composability

With atomic tools and full parity, new features emerge by writing new prompts—not new code.

A “weekly review” feature becomes:

"Review files modified this week. Summarize key changes.
Based on incomplete items and approaching deadlines,
suggest three priorities for next week."

The agent composes list_files, read_file, and its own judgment. Pure outcome-driven behavior.
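One way to make this concrete: a feature is just a prompt handed to a generic agent loop. In this sketch, `run_agent` is a stand-in for whatever loop drives the model; the helper and prompt constant are hypothetical names.

```python
# A "feature" as data: the weekly-review prompt from the text,
# passed to a generic loop. run_agent is a placeholder for the
# real model-driving loop.
WEEKLY_REVIEW_PROMPT = (
    "Review files modified this week. Summarize key changes. "
    "Based on incomplete items and approaching deadlines, "
    "suggest three priorities for next week."
)

def make_feature(prompt):
    """New feature = new prompt; no new code path."""
    def feature(run_agent, tools):
        return run_agent(prompt, tools)
    return feature

weekly_review = make_feature(WEEKLY_REVIEW_PROMPT)
```

Shipping a second feature means defining a second prompt constant, nothing more.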


4. Emergent Capability

Agents accomplish tasks designers never anticipated by creatively combining existing tools.

Example: “Cross-reference my meeting notes with my task list and tell me what I’ve committed to but haven’t scheduled.”

No commitment tracker exists. But agents with access to notes and tasks can accomplish this through composition.


5. Improvement Over Time

Applications improve without shipping code through:

  • Accumulated context: Persistent state across sessions
  • Prompt refinement: At both developer and user levels

Architectural Patterns

Files as Universal Interface

Design for what agents naturally understand:

  • Agents are already fluent in file operations
  • Users can inspect and modify directly
  • Self-documenting structure
  • Cross-device sync via filesystem primitives

Shared Workspace

Agents and users should work in identical data spaces—not separate sandboxes. This enables:

  • Inspection of agent work
  • Direct modification
  • No synchronization complexity

Entity-Scoped Directories

{entity_type}/{entity_id}/
├── primary content
├── metadata
└── related materials
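A small helper can enforce this layout so every entity directory looks the same to both agents and users. The specific file names (`content.md`, `metadata.json`, `related/`) are an assumption standing in for "primary content", "metadata", and "related materials".

```python
from pathlib import Path

def entity_dir(root: str, entity_type: str, entity_id: str) -> Path:
    """Create the {entity_type}/{entity_id}/ layout with standard files.
    File names are illustrative placeholders for the pattern above."""
    base = Path(root) / entity_type / entity_id
    base.mkdir(parents=True, exist_ok=True)
    (base / "content.md").touch()      # primary content
    (base / "metadata.json").touch()   # metadata
    (base / "related").mkdir(exist_ok=True)  # related materials
    return base
```

Because the structure is predictable, an agent can navigate any entity with plain `list_files` and `read_file` calls.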

The context.md Pattern

Maintain portable working memory:

  • Agent role and identity
  • User preferences and interests
  • Available resources
  • Recent activity
  • Current state and guidelines
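A minimal sketch of rendering that working memory to a `context.md` file. The section headings mirror the list above; the exact format is an assumption, since the pattern only requires that the file be plain text the agent reads at session start.

```python
# Serialize working memory into context.md sections. Heading format
# is an illustrative choice, not a prescribed schema.
CONTEXT_SECTIONS = [
    "Agent role and identity",
    "User preferences and interests",
    "Available resources",
    "Recent activity",
    "Current state and guidelines",
]

def render_context(sections: dict) -> str:
    """Render known sections in a fixed order; missing ones stay visible."""
    parts = []
    for title in CONTEXT_SECTIONS:
        parts.append(f"## {title}\n{sections.get(title, '(none)')}\n")
    return "\n".join(parts)
```

Keeping the file human-readable means users can inspect and correct the agent's memory directly, which is the same shared-workspace property described earlier.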

Execution Patterns

Explicit Completion Signals

Don’t detect completion through heuristics. Require explicit signals:

.success("Result")   // continue
.error("Message")    // continue (retry)
.complete("Done")    // stop loop
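A sketch of the loop these signals drive: `success` and `error` both keep the loop running, and only an explicit `complete` stops it. The `Signal` class and `run_loop` names are illustrative, not from a specific framework.

```python
from dataclasses import dataclass

@dataclass
class Signal:
    kind: str      # "success" | "error" | "complete"
    message: str

def run_loop(step, max_turns=20):
    """Drive the agent until it emits an explicit 'complete' signal.
    No heuristic completion detection: success and error both continue,
    and a turn cap guards against an agent that never signals."""
    history = []
    for _ in range(max_turns):
        sig = step(history)
        history.append(sig)
        if sig.kind == "complete":
            return history
    raise RuntimeError("agent never signaled completion")
```

The turn cap is a safety net, not a completion heuristic: hitting it is treated as a failure, never as "probably done".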

Model Tier Selection

Match intelligence level to task complexity:

  • Research agents: balanced tiers
  • Simple classification: fast tiers
  • Don’t default to most powerful model
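Tier routing can be a plain lookup that defaults to the cheap tier. The tier and model names below are placeholders, not real model identifiers.

```python
# Route tasks to model tiers by complexity. Names are hypothetical.
TIERS = {"fast": "small-model", "balanced": "mid-model", "powerful": "large-model"}

def pick_model(task_kind: str) -> str:
    routing = {
        "classification": "fast",    # simple, high-volume
        "research": "balanced",      # multi-step but routine
        "architecture": "powerful",  # rare, judgment-heavy
    }
    # Unknown tasks default to the cheap tier, not the most powerful one.
    return TIERS[routing.get(task_kind, "fast")]
```

Making "fast" the default inverts the usual failure mode: you opt *in* to expensive intelligence rather than paying for it everywhere.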

Partial Completion Tracking

For multi-step tasks, track progress:

  • pending → in_progress → completed
  • Resume from checkpoints when interrupted
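A minimal sketch of checkpointed progress tracking: persist each step's status to a file so an interrupted run skips completed work on resume. The file format and function names are illustrative.

```python
import json
from pathlib import Path

def load_steps(path, step_names):
    """Load saved progress, or initialize every step as pending."""
    p = Path(path)
    if p.exists():
        return json.loads(p.read_text())
    return {name: "pending" for name in step_names}

def run_steps(path, step_names, do_step):
    """Run steps in order, checkpointing status transitions:
    pending -> in_progress -> completed. Completed steps are skipped."""
    steps = load_steps(path, step_names)
    for name in step_names:
        if steps[name] == "completed":
            continue  # resume from checkpoint: skip finished work
        steps[name] = "in_progress"
        Path(path).write_text(json.dumps(steps))
        do_step(name)
        steps[name] = "completed"
        Path(path).write_text(json.dumps(steps))
    return steps
```

Writing the checkpoint *before* executing a step means a crash leaves it `in_progress`, so a resumed run knows to retry it rather than trust partial output.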

Autonomy and Approval Matrix

| Stakes | Reversibility | Pattern | Example |
| --- | --- | --- | --- |
| Low | Easy | Auto-apply | File organization |
| Low | Hard | Quick confirm | Feed publishing |
| High | Easy | Suggest + apply | Code changes |
| High | Hard | Explicit approval | Email sending |
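The matrix reduces to a small lookup an agent runtime can consult before acting. A sketch, with the pattern strings taken from the table:

```python
# Map (stakes, reversibility) to an approval pattern, mirroring
# the autonomy matrix above.
APPROVAL = {
    ("low",  "easy"): "auto-apply",
    ("low",  "hard"): "quick confirm",
    ("high", "easy"): "suggest + apply",
    ("high", "hard"): "explicit approval",
}

def approval_pattern(stakes: str, reversibility: str) -> str:
    """Look up how much autonomy an action gets. Unknown inputs fail
    loudly rather than defaulting to autonomy."""
    return APPROVAL[(stakes.lower(), reversibility.lower())]
```

Raising on unknown inputs is a deliberate choice here: an unclassified action should never silently receive auto-apply.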

Product Development Shift

Traditional: Imagine what users want → Build it → See if you’re right

Agent-native: Build capable foundation → Observe what users ask agents to do → Formalize patterns that emerge

  • Successful agent requests = signal of value
  • Failed requests = gaps in tools or parity

This is emergent product development—user behavior with agents reveals latent demand.


Terminology Reference

| Term | Definition |
| --- | --- |
| Agent-Native | Software where agents are first-class; features are outcomes described in prompts |
| Tool | Atomic primitive capability the agent can invoke |
| Feature | Outcome achieved by an agent operating in a loop with tools |
| Parity | Capability equivalence between UI and agent access |
| Emergent Capability | Unexpected accomplishments from creative tool combination |
| Latent Demand | User request patterns revealing what features should be formalized |

Key Principle

“Simple to start but endlessly powerful. Basic requests work with zero learning curve. Power users push beyond anticipated boundaries.”

Agent-native systems let you discover what features should exist by observing what users ask agents to do, rather than guessing upfront.

