Agent Search Observation Loop: Learning What Context to Provide


Watch how the agent navigates your codebase. What it searches for tells you what to hand it next time.

Author: James Phoenix | Date: March 2026


Summary

Every file read, grep, and tool call your coding agent makes is visible. Watching this navigation process (not just the final output) teaches you what context the agent needs for different task types. Over time, you build an intuition for “pinpointing”: providing the exact files, line ranges, and architectural context upfront so the agent spends its capacity on the actual work instead of orientation. This is a skill-building feedback loop where the agent’s search behavior is the curriculum.


The Observation

When you give an agent a task, it searches before it builds. That search is not wasted time for you. It is a lesson.

You: Add a damage popup system that shows floating numbers
     when enemies take hits.

You watch the agent work:

Agent: [reads PlayerHealth.cs]
Agent: [searches project for "event"]
Agent: [reads EventBus.cs]
Agent: [searches for "UI" and "popup" in the project]
Agent: [reads UIManager.cs]
Agent: [searches for "event" again, looking for subscription patterns]
Agent: [reads EventBus.cs again]

...then it starts writing code.

Two things happened:

  1. The agent spent a significant chunk of its context window on navigation, reading files and running searches to build a mental map of your architecture.
  2. You just learned something: for any work that touches gameplay events and UI, the agent needs EventBus.cs, UIManager.cs, and the relevant gameplay script.

The Loop

┌─────────────────────────────────────────────────┐
│                                                 │
│   Give agent a task                             │
│              │                                  │
│              v                                  │
│   Watch which files it reads and which          │
│   searches it runs before producing output      │
│              │                                  │
│              v                                  │
│   Note the pattern: "For task type X, the       │
│   agent consistently needs files A, B, C"       │
│              │                                  │
│              v                                  │
│   Next time, provide A, B, C upfront            │
│              │                                  │
│              v                                  │
│   Agent skips navigation, starts building       │
│              │                                  │
│              └──── loop back to the next task   │
│                                                 │
└─────────────────────────────────────────────────┘

This is not the same as online learning via constraints, which covers observing failures and adding constraints. This loop observes navigation behavior and adds upfront context. Both make the agent more effective. They target different bottlenecks.

|              | Online Learning via Constraints       | Agent Search Observation              |
|--------------|----------------------------------------|---------------------------------------|
| **Observes** | agent output failures                  | agent search behavior                 |
| **Produces** | constraints (rules, types, tests)      | context bundles (files, line ranges)  |
| **Shrinks**  | action space (what it can do)          | navigation cost (what it must find)   |
| **Prevents** | bad output                             | wasted context window                 |

Pinpointing

The skill this loop builds is called pinpointing: handing the agent exactly the context it needs so all capacity goes to the real work.

Before pinpointing (agent navigates)

You: Add a damage popup system that shows floating numbers
     when enemies take hits.

Agent: [6-8 file reads and searches before writing any code]

After pinpointing (you provide context)

You: Add a damage popup system. Here's the context you'll need:
- Assets/Scripts/Events/EventBus.cs (our event system)
- Assets/Scripts/UI/UIManager.cs (handles all UI instantiation)
- Assets/Scripts/Combat/DamageHandler.cs:45-62 (where damage is applied)

Show floating damage numbers above enemies when they take hits.
Subscribe to the OnDamageDealt event in EventBus.

Agent: [starts writing code immediately]

The agent did not get smarter. You got more specific. Every file read the agent skipped is context window capacity redirected toward the actual task.


What to Watch For

Search patterns that repeat across tasks

If the agent always reads EventBus.cs before touching gameplay systems, that file belongs in every gameplay prompt. If it always searches for your ORM config before writing database code, provide it upfront.

Redundant reads

When the agent reads the same file twice in one session, it is struggling to retain information from earlier in the context window. This signals either that the session is too long or that the file is central enough to include explicitly.

Long search chains before any output

The more steps between your prompt and the first line of generated code, the more navigation cost you can eliminate next time. Count the reads. Three or fewer is fine. Eight means you can probably cut that in half by providing context.

Files the agent reads but does not use

Sometimes the agent reads a file, determines it is irrelevant, and moves on. These are false leads. Noting them helps you give tighter context next time, not just more context.
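These signals can be tallied from a session transcript. The sketch below assumes a simplified event format (`{"tool": ..., "target": ...}`), which is illustrative rather than any real agent-log schema, and uses the "eight reads" heuristic from above:

```python
from collections import Counter


def analyze_session(events: list[dict]) -> dict:
    """Summarize navigation cost from an agent session transcript.

    Each event is a dict like {"tool": "read", "target": "EventBus.cs"};
    a "write" event marks the first generated code.
    """
    navigation_steps = 0
    read_counts = Counter()
    for event in events:
        if event["tool"] == "write":
            break  # navigation phase ends at the first write
        if event["tool"] in ("read", "search"):
            navigation_steps += 1
            if event["tool"] == "read":
                read_counts[event["target"]] += 1
    return {
        "navigation_steps": navigation_steps,
        "redundant_reads": [f for f, n in read_counts.items() if n > 1],
        "long_chain": navigation_steps >= 8,  # heuristic threshold
    }
```

Files that show up in `redundant_reads` across several sessions are strong candidates for the upfront context bundle.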


Building a Context Map

Over time, patterns emerge. You can formalize them as a mental map or a written reference:

| Task Type       | Files the agent consistently needs                    |
|-----------------|-------------------------------------------------------|
| Gameplay events | EventBus.cs, relevant gameplay script                 |
| UI changes      | UIManager.cs, relevant component, theme config        |
| Database work   | ORM config, relevant repository, schema file          |
| API endpoints   | Router config, relevant controller, middleware chain  |
| Auth changes    | AuthService, session config, relevant middleware      |

This map is personal to your codebase. No documentation or tutorial will give it to you. Only observation builds it.


The Wrong-Diagnosis Trap

Pinpointing has one subtle failure mode. You can give the agent the right file, the right line number, and a wrong root cause.

You: Fix the null reference in EnemySpawner.cs:72.
     The waveConfig list isn't being initialized.

Agent: [adds a null check and initializes waveConfig in Awake()]

The null reference goes away. But the real bug was that another script called SpawnWave() before the scene finished loading, so waveConfig had not been populated from the ScriptableObject yet. The initialization “fix” masked a timing issue that will resurface later.

The agent followed your wrong diagnosis faithfully. It will not second-guess explicit instructions.

Rule: Pinpoint the location, but be honest about what you know and don’t know. If you are sure of the cause, say so. If you are not, say “the null reference happens here, but I’m not sure why” and let the agent investigate.

Precision is not the same as accuracy. A precise but wrong diagnosis is worse than a vague but honest one, because the vague prompt at least gives the agent room to search for the real cause.


Key Insight

The agent’s search process is your curriculum. What it looks for tells you what to provide. What it reads twice tells you what matters. What it reads and discards tells you what to exclude. Over time, you stop making it search and start making it build.

