Two Modes of LLM Usage: Exploring vs Implementing

James Phoenix

The Problem

When working with AI coding agents, developers often jump straight to code generation:

Developer: "Implement JWT authentication for the API"
LLM: *generates 200 lines of authentication code*
Developer: *realizes it doesn't match existing patterns*
Developer: "Refactor to match our auth patterns"
LLM: *generates different code, still not quite right*
Developer: *repeats 3-4 more times*

This generate-and-refine loop is inefficient because:

  1. No context understanding: The LLM doesn't know your patterns until after it has made mistakes
  2. Architectural mismatches: Generated code may conflict with existing design
  3. Expensive iterations: Each generation costs tokens and time
  4. Poor mental models: You accept the LLM's output without ever understanding the solution

Why This Happens

Developers treat LLMs as code generators when they should first use them as knowledge assistants.

The rush to get code written skips the crucial step of understanding the problem space.

The Solution: Two Distinct Modes

Use LLMs in two sequential modes:


Mode 1: Exploring (Learning)

Use the LLM to understand before you implement:

  • Ask questions about the codebase
  • Learn existing patterns and conventions
  • Understand architectural decisions
  • Evaluate trade-offs between approaches
  • Build mental models of how systems work

Goal: Gain understanding, not generate code.

Mode 2: Implementing (Building)

Once you understand, use the LLM to generate code:

  • Provide informed prompts based on exploration
  • Reference specific patterns discovered
  • Request code that fits architectural decisions
  • Verify against mental model from exploration

Goal: Generate correct code on first try.

How It Works

Phase 1: Exploration Mode

Before writing any code, use the LLM as an interactive knowledge base.

Example Exploration Questions

Understanding existing patterns:

"How does authentication currently work in this codebase?"
"What design patterns are used in the payment flow?"
"Where are database transactions handled?"
"Show me examples of how we handle errors in API endpoints"

Evaluating approaches:

"What are the trade-offs between JWT and session-based auth for our use case?"
"Should I use React Context or Redux for this feature?"
"Compare approaches A and B for implementing rate limiting"
"What are the security implications of approach X?"

Learning conventions:

"What naming conventions do we use for API endpoints?"
"How should I structure test files for this feature?"
"What's the standard error handling pattern?"
"Where should configuration live for this new service?"

Anticipating issues:

"What are common pitfalls when implementing feature X?"
"What edge cases should I consider for user input validation?"
"How do we handle race conditions in concurrent operations?"
"What performance issues might arise with this approach?"

Benefits of Exploration

  1. Faster implementation: Armed with knowledge, you write precise prompts
  2. Fewer iterations: Understanding patterns means less trial-and-error
  3. Better architecture: Catch design issues before writing code
  4. Deeper learning: You understand why, not just what
  5. Reduced token costs: Fewer generate-refine cycles

Phase 2: Implementation Mode

Now that you understand the landscape, generate code with informed prompts.

Example Implementation Prompts

Bad prompt (no exploration):

"Add JWT authentication to the API"

Good prompt (after exploration):

"Implement JWT authentication for the API following our existing auth pattern:

- Use the AuthService class pattern (see /services/AuthService.ts)
- Follow the Result<T> return type convention for error handling
- Store tokens in httpOnly cookies (security requirement from exploration)
- Include refresh token rotation (discovered this is our standard)
- Add rate limiting using existing RateLimiter middleware
- Write integration tests following /tests/integration/auth.test.ts pattern

Key functions needed:
- authenticateUser(credentials): Result<TokenPair>
- refreshAccessToken(refreshToken): Result<AccessToken>
- verifyToken(token): Result<JWTPayload>

Ensure it integrates with our existing User model and database transaction patterns."

Result: LLM generates correct code on first try because the prompt is informed by exploration.
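
For illustration, here is a rough sketch of the shape such a prompt steers the LLM toward. The AuthService, Result<T>, and TokenPair names come from the prompt above; everything else is a hypothetical stand-in, not the article's actual codebase:

// Hypothetical sketch of the target shape; real code would follow
// /services/AuthService.ts and the project's actual Result<T> type.
type Result<T> = { ok: true; value: T } | { ok: false; error: Error };

interface TokenPair {
  accessToken: string;
  refreshToken: string;
}

interface Credentials {
  email: string;
  password: string;
}

class AuthService {
  async authenticateUser(credentials: Credentials): Promise<Result<TokenPair>> {
    // The real implementation would verify against the User model inside a
    // database transaction, set httpOnly cookies, and rotate the refresh token.
    if (!credentials.email || !credentials.password) {
      return { ok: false, error: new Error("Missing credentials") };
    }
    return { ok: true, value: { accessToken: "access.jwt", refreshToken: "refresh.jwt" } };
  }
}

Because the prompt named the interface, the return convention, and the test pattern up front, there is nothing left for the LLM to guess at.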

Real-World Workflow

Scenario: Adding a Payment Processing Feature

Without Exploration (Inefficient)

Dev: "Implement Stripe payment processing"
LLM: *generates code using Stripe SDK directly in controller*

Dev: "This doesn't match our service layer pattern"
LLM: *moves to service layer, but still couples directly to Stripe*

Dev: "We need to support multiple payment providers"
LLM: *refactors with strategy pattern, but wrong interface*

Dev: "The interface needs to match our existing PaymentProvider interface"
LLM: *refactors again*

Result: 4 iterations, 800+ lines generated, 30 minutes wasted

With Exploration (Efficient)

Step 1: Explore (5 minutes)

Dev: "How do we currently handle payment processing?"
LLM: "You use a PaymentProvider interface with implementations for each
provider. See /services/payments/PaymentProvider.ts..."

Dev: "Show me an example implementation"
LLM: *shows PayPalProvider.ts example*

Dev: "What error handling pattern do we use for payment failures?"
LLM: "You return Result<PaymentResult, PaymentError> and log to
PaymentAuditLog. Errors are categorized as RETRIABLE vs PERMANENT..."

Dev: "What tests should I write?"
LLM: "Follow the pattern in /tests/payments/paypal.test.ts:
unit tests for provider implementation, integration tests with
mocked provider API, end-to-end tests in staging..."

Step 2: Implement (2 minutes)

Dev: "Create StripeProvider implementing our PaymentProvider interface.

Requirements from exploration:
- Implement PaymentProvider interface (/services/payments/PaymentProvider.ts)
- Return Result<PaymentResult, PaymentError> for error handling
- Log all operations to PaymentAuditLog
- Categorize errors as RETRIABLE vs PERMANENT
- Use existing StripeConfig from config service
- Handle webhooks following WebhookHandler pattern
- Write tests following /tests/payments/paypal.test.ts pattern

Implement these methods:
- processPayment(amount, currency, customer)
- refundPayment(transactionId, amount?)
- getTransactionStatus(transactionId)
"

LLM: *generates correct implementation on first try*

Result: 1 iteration, 250 lines, 7 minutes total (including exploration)

Savings: 23 minutes, 550 fewer lines generated, better code quality.
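
For concreteness, a hedged sketch of what that first-try implementation might look like. The PaymentProvider, Result, and RETRIABLE/PERMANENT names come from the dialogue above; the types and method bodies are hypothetical:

// Hypothetical sketch of the pattern described above, not real project code.
type Result<T, E> = { ok: true; value: T } | { ok: false; error: E };

type ErrorCategory = "RETRIABLE" | "PERMANENT";

interface PaymentError {
  category: ErrorCategory;
  message: string;
}

interface PaymentResult {
  transactionId: string;
  status: "succeeded" | "pending";
}

interface PaymentProvider {
  processPayment(
    amount: number,
    currency: string,
    customer: string
  ): Promise<Result<PaymentResult, PaymentError>>;
}

class StripeProvider implements PaymentProvider {
  async processPayment(
    amount: number,
    currency: string,
    customer: string
  ): Promise<Result<PaymentResult, PaymentError>> {
    try {
      // Real code would call the Stripe SDK here and write to PaymentAuditLog;
      // refundPayment and getTransactionStatus would follow the same shape.
      const transactionId = `txn_${Date.now()}`;
      return { ok: true, value: { transactionId, status: "succeeded" } };
    } catch (err) {
      // Network failures are retriable; a declined card would be PERMANENT.
      return { ok: false, error: { category: "RETRIABLE", message: String(err) } };
    }
  }
}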

When to Use Each Mode

Use Exploration Mode When:

  1. Starting a new feature: Understand existing patterns first
  2. Unfamiliar codebase: Learn structure and conventions
  3. Evaluating approaches: Compare options before committing
  4. Debugging complex issues: Understand system behavior
  5. Learning new technology: Build mental models
  6. Planning architecture: Evaluate design decisions

Use Implementation Mode When:

  1. You understand the patterns: Clear mental model exists
  2. Repeating established patterns: Just need code written
  3. Simple, isolated changes: Low coupling to existing code
  4. Following explicit examples: Direct template to follow

Use Both Modes When:

  1. Complex features: Explore architecture, then implement
  2. Refactoring: Understand current design, then generate new code
  3. Integration work: Explore both systems, then bridge them
  4. Performance optimization: Explore bottlenecks, then optimize

Best Practices

1. Always Start with Questions

Before generating any code, ask at least 3 exploration questions:

"How do we currently handle X?"
"What patterns should I follow?"
"What are common pitfalls?"

Then:
"Implement X following discovered patterns"

2. Build Mental Models

Don’t just accept LLM answers—internalize them:

// Passive: Just copying what LLM said
Dev: "What's our error handling pattern?"
LLM: "You use Result<T, E>"
Dev: "Ok, generate code using that"

// Active: Building understanding
Dev: "What's our error handling pattern?"
LLM: "You use Result<T, E>"
Dev: "Show me 3 examples from the codebase"
LLM: *shows examples*
Dev: "Why Result<T, E> instead of throwing exceptions?"
LLM: "Because it makes errors explicit and type-safe..."
Dev: *now understands the reasoning*
Dev: "Implement feature X using Result<T, E> pattern"

3. Document Your Exploration

Capture insights from exploration mode:

# Payment Feature - Exploration Notes

## Existing Patterns
- PaymentProvider interface: /services/payments/PaymentProvider.ts
- Result<T, E> error handling
- PaymentAuditLog for all operations
- Webhook handling via WebhookHandler

## Design Decisions
- Support multiple providers via strategy pattern
- RETRIABLE vs PERMANENT error categorization
- All payment operations must be idempotent
- Use optimistic locking for concurrent payments

## Tests Required
- Unit tests: Provider implementation logic
- Integration tests: Mocked provider API
- E2E tests: Full payment flow in staging


Benefit: Reference this during implementation for better prompts.

4. Verify Understanding Before Implementing

After exploration, summarize your understanding:

Dev: "Let me verify my understanding:

1. I need to implement PaymentProvider interface
2. Use Result<T, E> for error handling
3. Log everything to PaymentAuditLog
4. Categorize errors as RETRIABLE or PERMANENT
5. Make operations idempotent
6. Handle webhooks with WebhookHandler

Is this correct? Anything I'm missing?"

LLM: "Correct, but also remember to:
- Use StripeConfig from config service
- Implement transaction rollback on payment failures
- Rate limit webhook endpoints"

Dev: *updates understanding, then implements*

5. Iterate on Understanding, Not Code

When exploration reveals gaps, ask more questions—don’t generate code yet:

Bad:
Dev: "How do we handle payments?"
LLM: *brief explanation*
Dev: "Implement payment processing"
*generates code with gaps*
*multiple refactoring cycles*

Good:
Dev: "How do we handle payments?"
LLM: *brief explanation*
Dev: "Show me the PaymentProvider interface"
Dev: "What error types are used?"
Dev: "How are webhooks handled?"
Dev: "Show me a complete example implementation"
*now has full picture*
Dev: "Implement Stripe provider following this pattern"
*correct code on first try*

Measuring Success

Metrics to Track

Code generation efficiency:

Iterations to correct code:
- Without exploration: 3-5 iterations average
- With exploration: 1-2 iterations average

Savings: 60% fewer iterations

Development time:

Total time (exploration + implementation):
- Without exploration: 30 min (5 min x 6 iterations)
- With exploration: 12 min (5 min exploration + 7 min implementation)

Savings: 60% faster development

Code quality:

Pattern violations:
- Without exploration: 40% of generated code violates patterns
- With exploration: 5% violations (minor style issues)

Improvement: 8x fewer pattern violations

Learning retention:

Similar feature 2 weeks later:
- Without exploration: Still need same iterations (no learning)
- With exploration: 1 iteration (retained mental model)

Benefit: Compounding knowledge over time

Common Pitfalls

Pitfall 1: Skipping Exploration for “Simple” Tasks

Problem: Assuming you know the patterns

Dev: "This is simple, just add a new endpoint"
*generates code*
*doesn't match naming convention*
*doesn't use middleware pattern*
*missing error handling*
*3 iterations to fix*

Solution: Even “simple” tasks benefit from 2-3 exploration questions.

Pitfall 2: Exploration Without Implementation

Problem: Asking questions but never using the insights

Dev: *asks 20 questions*
Dev: *gets thorough answers*
Dev: *still writes vague prompt*
Dev: "Add authentication"

Solution: Reference exploration insights in implementation prompts.

Pitfall 3: Mixing Modes

Problem: Jumping between exploration and implementation

Dev: "How do we handle auth?" (explore)
Dev: "Add JWT auth" (implement)
Dev: "Wait, what about refresh tokens?" (explore)
Dev: "Update to add refresh tokens" (implement)
Dev: "How do we store tokens?" (explore)
*chaotic, inefficient*

Solution: Complete exploration phase before implementation.

Pitfall 4: Passive Exploration

Problem: Accepting first answer without deeper investigation

Dev: "How do we handle errors?"
LLM: "We use try-catch blocks"
Dev: "Ok" *implements with try-catch*
*actually we use the Result<T, E> pattern in this codebase*

Solution: Ask for examples, verify with codebase, dig deeper.

Integration with Other Patterns

Combine with Hierarchical CLAUDE.md

Exploration mode helps you discover which CLAUDE.md files are relevant:

Dev: "I need to add a payment feature. What context should I read?"
LLM: "Check:
- /services/payments/CLAUDE.md (payment patterns)
- /services/CLAUDE.md (service layer conventions)
- /tests/CLAUDE.md (testing requirements)"

Dev: *reads relevant files*
Dev: *provides focused implementation prompt*

Combine with Human-in-the-Loop Planning

Use exploration to inform your plan:

1. Exploration: Understand existing patterns
2. Plan: Design approach using exploration insights
3. Review: Verify plan matches architecture
4. Implement: Generate code based on plan

Combine with Test-Driven Development

Explore test patterns before implementing:

Dev: "What test patterns do we use for services?"
LLM: *explains integration test pattern*
Dev: "Write tests for payment service following this pattern"
LLM: *generates tests*
Dev: "Now implement payment service to pass these tests"
LLM: *generates correct implementation*
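
As a sketch of that test-first step, assuming a hypothetical PaymentService and a Jest-style runner (the names and API are illustrative, not the article's codebase):

// Hypothetical test-first sketch; PaymentService and its API are stand-ins.
import { describe, expect, it } from "@jest/globals";

type Result<T, E> = { ok: true; value: T } | { ok: false; error: E };

interface PaymentService {
  charge(amountCents: number): Promise<Result<string, string>>;
}

// A stub standing in for the real service the LLM would implement next.
const service: PaymentService = {
  async charge(amountCents) {
    if (amountCents <= 0) return { ok: false, error: "PERMANENT: invalid amount" };
    return { ok: true, value: "txn_123" };
  },
};

describe("PaymentService", () => {
  it("returns a transaction id for a valid charge", async () => {
    const result = await service.charge(500);
    expect(result.ok).toBe(true);
  });

  it("rejects non-positive amounts as PERMANENT errors", async () => {
    const result = await service.charge(0);
    expect(result).toEqual({ ok: false, error: "PERMANENT: invalid amount" });
  });
});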

Real-World Examples

Example 1: Database Migration

Without exploration:

Dev: "Create migration to add email verification"
LLM: *generates migration*
*uses wrong migration tool (Knex instead of Prisma)*
*wrong naming convention*
*missing rollback*

With exploration:

Dev: "What migration tool do we use?"
LLM: "Prisma"

Dev: "What's the naming convention?"
LLM: "YYYYMMDDHHMMSS_descriptive_name.sql"

Dev: "Show me an example migration"
LLM: *shows example with up and down migrations*

Dev: "Create Prisma migration to add email_verified column to users table,
following naming convention YYYYMMDDHHMMSS_add_email_verification.sql.
Include both up and down migrations."
LLM: *generates perfect migration*

Example 2: API Endpoint

Without exploration:

Dev: "Add GET /users/:id endpoint"
LLM: *generates endpoint*
*doesn't use controller pattern*
*missing authentication middleware*
*wrong error response format*

With exploration:

Dev: "How are API endpoints structured?"
LLM: "Controller -> Service -> Repository pattern..."

Dev: "What middleware do endpoints use?"
LLM: "authenticateUser, validateRequest, rateLimiter..."

Dev: "What's the error response format?"
LLM: "{ success: false, error: { code, message, details } }"

Dev: "Create GET /users/:id endpoint:
- UserController.getUser method
- Use UserService.findById
- Apply authenticateUser middleware
- Return error format: { success, error: { code, message, details } }
- Add input validation"

LLM: *generates correct endpoint*
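
A hedged, Express-style sketch of the endpoint that prompt describes. The middleware and error-format names come from the dialogue above; the wiring is hypothetical:

// Hypothetical Express-style sketch of the pattern above, not real project code.
import express, { NextFunction, Request, Response } from "express";

// Stand-in for the authenticateUser middleware the exploration surfaced;
// validateRequest and rateLimiter would be wired in the same way.
function authenticateUser(req: Request, res: Response, next: NextFunction) {
  // Real code would verify the session or JWT here.
  next();
}

const app = express();

app.get("/users/:id", authenticateUser, async (req: Request, res: Response) => {
  const id = req.params.id;
  if (!/^\d+$/.test(id)) {
    // Error format discovered during exploration.
    return res.status(400).json({
      success: false,
      error: { code: "INVALID_ID", message: "User id must be numeric", details: { id } },
    });
  }
  // Real code: UserController.getUser -> UserService.findById -> repository.
  res.json({ success: true, data: { id, name: "Ada Lovelace" } });
});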

Example 3: React Component

Without exploration:

Dev: "Create user profile component"
LLM: *generates class component*
*we use functional components with hooks*
*wrong styling approach*
*missing error boundaries*

With exploration:

Dev: "What component patterns do we use?"
LLM: "Functional components with hooks, TypeScript props..."

Dev: "How is styling handled?"
LLM: "Tailwind CSS with custom theme..."

Dev: "Show me a similar component"
LLM: *shows UserCard.tsx example*

Dev: "Create UserProfile component:
- Functional component with TypeScript props interface
- Use hooks: useState for loading, useUser for data
- Tailwind CSS styling matching theme
- Error boundary for failed data fetches
- Follow UserCard.tsx pattern"

LLM: *generates component matching all conventions*
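
A sketch of the component such a prompt might produce, assuming a hypothetical API response shape and generic Tailwind classes (in the real codebase, data would come from the shared useUser hook and an error boundary would wrap the tree):

// Hypothetical sketch; the User shape and fetch wiring are illustrative.
import { useEffect, useState } from "react";

interface User {
  id: string;
  name: string;
  email: string;
}

interface UserProfileProps {
  userId: string;
}

export function UserProfile({ userId }: UserProfileProps) {
  const [loading, setLoading] = useState(true);
  const [user, setUser] = useState<User | null>(null);

  useEffect(() => {
    // Real code would use the shared useUser hook instead of raw fetch.
    fetch(`/api/users/${userId}`)
      .then((res) => res.json())
      .then((body) => setUser(body.data))
      .finally(() => setLoading(false));
  }, [userId]);

  if (loading) return <p className="text-sm text-gray-500">Loading user…</p>;
  if (!user) return <p className="text-sm text-red-600">Failed to load user.</p>;

  return (
    <div className="rounded-lg border p-4 shadow-sm">
      <h2 className="text-lg font-semibold">{user.name}</h2>
      <p className="text-gray-600">{user.email}</p>
    </div>
  );
}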

Conclusion

LLMs are most effective when used in two distinct modes:

Exploration Mode (Ask questions, build understanding):

  • Understand existing patterns
  • Learn architectural decisions
  • Evaluate trade-offs
  • Anticipate issues
  • Build mental models

Implementation Mode (Generate code with informed prompts):

  • Reference discovered patterns
  • Provide architecture-aware prompts
  • Generate correct code on first try
  • Verify against mental model

Key Benefits:

  1. 60% fewer iterations: Correct code on first try
  2. 8x fewer pattern violations: Informed by exploration
  3. Faster development: Fewer generate-and-refine cycles
  4. Better learning: Understanding, not just code
  5. Lower costs: Fewer wasted tokens on incorrect code

The Rule: Explore first, implement second. Take 5 minutes to understand before generating any code. The upfront investment pays off immediately.
