## Summary

LLMs learn project-specific patterns best from 2-3 concrete examples taken from your actual codebase, not from abstract explanations. When you show real implementations before requesting new code, the output mimics their structure exactly, matching your conventions for dependency injection, error handling, validation, and documentation style.

**The Problem**: LLMs don't inherently know project-specific patterns and conventions, so they produce generic code that doesn't match the existing codebase. Abstract instructions about patterns are often misinterpreted or ignored, requiring multiple revision cycles to achieve consistency.

**The Solution**: Provide 2-3 concrete examples from your actual codebase before asking for new code. The LLM learns by example, mimicking structure, patterns, and conventions directly from working implementations. This approach is more effective than lengthy explanations and ensures new code matches existing patterns.
## The Problem
When you ask an LLM to generate code for your project, it doesn’t know your patterns:
- How do you structure services?
- Where does validation happen?
- What’s your error handling pattern?
- How do you inject dependencies?
- What documentation style do you use?
Without this knowledge, the LLM generates generic code that works in isolation but doesn’t fit your codebase:
```typescript
// What the LLM generates (generic)
export class PostService {
  constructor() {
    this.db = new Database(); // Hard-coded dependency
  }

  async createPost(data) { // No types
    return await this.db.insert('posts', data); // No validation, throws on error
  }
}

// What you actually need (your pattern)
export const createPostService = (deps: PostServiceDeps): PostService => {
  const { repository, validator } = deps;

  return {
    async createPost(data: CreatePostInput): Promise<Result<Post>> {
      // Validate first
      const validation = validator.validate(data);
      if (!validation.success) {
        return { success: false, errors: validation.errors };
      }

      // Repository handles data access
      return await repository.create(validation.data);
    },
  };
};
```
The mismatch requires multiple revision cycles, wasting time and tokens.
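The `Result<T>` type used in the desired pattern above is never defined in this excerpt. A minimal sketch of what such a type might look like, assuming the `success`/`data`/`errors` field names from the examples (the helper constructors and `parseTitle` are illustrative additions, not part of the codebase being described):

```typescript
// Sketch of a Result type matching the shape used throughout this document.
// Assumed definition; the real one would live in something like ../types/result.
export type Result<T> =
  | { success: true; data: T }
  | { success: false; errors: string[] };

// Illustrative helper constructors.
export const ok = <T>(data: T): Result<T> => ({ success: true, data });
export const fail = <T>(...errors: string[]): Result<T> => ({
  success: false,
  errors,
});

// The discriminant enables type narrowing at call sites:
export const parseTitle = (title: string): Result<string> =>
  title.length >= 10 ? ok(title) : fail('Title must be at least 10 characters');
```

Because `success` is a literal discriminant, TypeScript narrows the union after checking `result.success`, so `result.data` or `result.errors` is available without casts.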
## Why Abstract Instructions Fail

**You**: “Use factory functions with dependency injection, validate inputs at the start of each method, return Result types instead of throwing exceptions, and add JSDoc comments.”

**LLM**: Interprets this vaguely and still generates inconsistent code:
```typescript
// LLM's interpretation (still wrong)
export function createPostService(db: Database) { // Partial DI
  return {
    async createPost(data: any) { // Wrong: still uses 'any'
      if (!data) throw new Error('Invalid'); // Wrong: still throws
      return db.insert('posts', data); // Wrong: no validation pattern
    }
  };
}
```
**Problem**: Abstract concepts like “factory functions” and “dependency injection” have many valid implementations. The LLM can’t read your mind about which specific pattern you want.
## The Solution: Few-Shot Prompting

Few-shot prompting means providing 2-3 concrete examples from your actual codebase before asking for new code.

### How It Works

Instead of explaining patterns abstractly:

❌ "Use factory functions with dependency injection"

Show real examples:

✅ "Here are examples of how we implement services in this codebase:

Example 1: User Service
[Paste actual code from packages/domain/src/users/user-service.ts]

Example 2: Campaign Service
[Paste actual code from packages/domain/src/campaigns/campaign-service.ts]

Now create a Post Service following the same pattern."
### Why This Works

LLMs are pattern matchers. When you show concrete examples, the LLM:

1. **Identifies patterns** across examples (factory function signature, validation flow, error handling)
2. **Extracts structure** (what changes vs. what stays consistent)
3. **Applies the pattern** to the new domain (posts instead of users/campaigns)
4. **Mimics it exactly** (same style, same conventions, same structure)
The result: Generated code that looks like it was written by the same developer who wrote the examples.
## Implementation

### Step 1: Identify Pattern Categories

What patterns do you want the LLM to learn? Common pattern categories:

- **Service layer**: Business logic, dependency injection
- **Repository layer**: Data access, query patterns
- **API routes**: Request handling, validation, response formatting
- **React components**: Props, state management, event handling
- **Utilities**: Pure functions, error handling, type guards
- **Tests**: Test structure, mocking patterns, assertions
### Step 2: Select Representative Examples

For each pattern category, choose 2-3 examples that:

- **Demonstrate the pattern clearly** (not edge cases)
- **Show consistency** (examples follow the same structure)
- **Cover typical use cases** (common scenarios, not outliers)
- **Are recent and maintained** (reflect current conventions)

Good example selection:

```typescript
// Example 1: Simple domain (User Service)
// Example 2: Medium complexity domain (Campaign Service)
// Example 3: [Optional] Complex domain (Report Service)
```
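Selection can even be semi-automated. The sketch below ranks candidate files by recency and typical size; the `Candidate` shape and the scoring weights are arbitrary illustrations, not an established heuristic:

```typescript
// Sketch: rank candidate example files for few-shot selection.
// In practice the metadata would come from `git log` and `fs.stat`;
// here it is passed in directly. The weights are arbitrary illustrations.
interface Candidate {
  path: string;
  daysSinceModified: number; // recency proxy for "reflects current conventions"
  lines: number; // size proxy for "typical, not an outlier"
}

const scoreCandidate = (c: Candidate): number => {
  const recency = Math.max(0, 100 - c.daysSinceModified); // newer scores higher
  // Penalize files far from an assumed "typical" 100-line service.
  const typicality = Math.max(0, 100 - Math.abs(c.lines - 100));
  return recency + typicality;
};

export const pickExamples = (candidates: Candidate[], n = 3): string[] =>
  [...candidates]
    .sort((a, b) => scoreCandidate(b) - scoreCandidate(a))
    .slice(0, n)
    .map((c) => c.path);
```

A final manual review still matters: automation can surface candidates, but only a human knows which files reflect the conventions you actually want to propagate.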
### Step 3: Structure Your Prompt

Use this template:

```markdown
# Pattern: [Category Name]

Here are examples of how we implement [category] in this codebase:

## Example 1: [Domain Name]
**File**: `path/to/first-example.ts`

[Full code from first example]

## Example 2: [Domain Name]
**File**: `path/to/second-example.ts`

[Full code from second example]

---

# Your Task

Now create a [new domain] [category] following the same pattern:
- [Specific requirement 1]
- [Specific requirement 2]
- [Specific requirement 3]
```
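If the template is used often, it can be filled in programmatically. A sketch of how that might look (the function name `buildFewShotPrompt` and the `Example` shape are illustrative assumptions, not an existing utility):

```typescript
// Sketch: assemble a few-shot prompt from the template above.
interface Example {
  domain: string; // e.g. "User Service"
  file: string;   // e.g. "packages/domain/src/users/user-service.ts"
  code: string;   // full file contents
}

const codeFence = '```';

export const buildFewShotPrompt = (
  category: string,
  examples: Example[],
  task: string,
  requirements: string[],
): string => {
  const exampleBlocks = examples.map(
    (ex, i) =>
      `## Example ${i + 1}: ${ex.domain}\n` +
      `**File**: \`${ex.file}\`\n\n` +
      `${codeFence}typescript\n${ex.code}\n${codeFence}`,
  );

  return [
    `# Pattern: ${category}`,
    `Here are examples of how we implement ${category.toLowerCase()} in this codebase:`,
    ...exampleBlocks,
    '---',
    '# Your Task',
    task,
    requirements.map((r) => `- ${r}`).join('\n'),
  ].join('\n\n');
};
```

Keeping prompt assembly in code makes the example set a single source of truth: update the files on disk and every generated prompt picks up the change.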
### Step 4: Provide Examples + Request

**Example: Service Layer Pattern**
# Pattern: Service Layer
Here are examples of how we implement services in this codebase:
## Example 1: User Service
**File**: `packages/domain/src/users/user-service.ts`
```typescript
import { Result } from '../types/result';
import { UserRepository } from './user-repository';
import { UserValidator } from './user-validator';

export interface UserServiceDeps {
  repository: UserRepository;
  validator: UserValidator;
}

export interface UserService {
  createUser(input: CreateUserInput): Promise<Result<User>>;
  updateUser(id: string, input: UpdateUserInput): Promise<Result<User>>;
}

/**
 * Creates a user service instance.
 *
 * @param deps - Dependencies for the service
 * @returns UserService interface
 */
export const createUserService = (deps: UserServiceDeps): UserService => {
  const { repository, validator } = deps;

  return {
    async createUser(input: CreateUserInput): Promise<Result<User>> {
      // Validate input
      const validation = validator.validateCreate(input);
      if (!validation.success) {
        return { success: false, errors: validation.errors };
      }

      // Check for duplicates
      const existing = await repository.findByEmail(validation.data.email);
      if (existing) {
        return {
          success: false,
          errors: ['User with this email already exists'],
        };
      }

      // Create user
      return await repository.create(validation.data);
    },

    async updateUser(id: string, input: UpdateUserInput): Promise<Result<User>> {
      // Validate input
      const validation = validator.validateUpdate(input);
      if (!validation.success) {
        return { success: false, errors: validation.errors };
      }

      // Check user exists
      const user = await repository.findById(id);
      if (!user) {
        return { success: false, errors: ['User not found'] };
      }

      // Update user
      return await repository.update(id, validation.data);
    },
  };
};
```
## Example 2: Campaign Service

**File**: `packages/domain/src/campaigns/campaign-service.ts`
```typescript
import { Result } from '../types/result';
import { CampaignRepository } from './campaign-repository';
import { CampaignValidator } from './campaign-validator';

export interface CampaignServiceDeps {
  repository: CampaignRepository;
  validator: CampaignValidator;
}

export interface CampaignService {
  createCampaign(input: CreateCampaignInput): Promise<Result<Campaign>>;
  launchCampaign(id: string): Promise<Result<Campaign>>;
}

/**
 * Creates a campaign service instance.
 *
 * @param deps - Dependencies for the service
 * @returns CampaignService interface
 */
export const createCampaignService = (deps: CampaignServiceDeps): CampaignService => {
  const { repository, validator } = deps;

  return {
    async createCampaign(input: CreateCampaignInput): Promise<Result<Campaign>> {
      // Validate input
      const validation = validator.validateCreate(input);
      if (!validation.success) {
        return { success: false, errors: validation.errors };
      }

      // Validate business rules
      if (validation.data.endDate <= validation.data.startDate) {
        return {
          success: false,
          errors: ['End date must be after start date'],
        };
      }

      // Create campaign
      return await repository.create(validation.data);
    },

    async launchCampaign(id: string): Promise<Result<Campaign>> {
      // Check campaign exists
      const campaign = await repository.findById(id);
      if (!campaign) {
        return { success: false, errors: ['Campaign not found'] };
      }

      // Validate campaign is ready to launch
      if (campaign.status !== 'draft') {
        return {
          success: false,
          errors: ['Campaign must be in draft status to launch'],
        };
      }

      // Launch campaign
      return await repository.update(id, { status: 'active' });
    },
  };
};
```
# Your Task

Now create a Post Service following the same pattern:

- Factory function: `createPostService(deps: PostServiceDeps): PostService`
- Dependencies: `repository: PostRepository`, `validator: PostValidator`
- Methods: `createPost(input: CreatePostInput): Promise<Result<Post>>`, `publishPost(id: string): Promise<Result<Post>>`
- Validation at start of each method
- Return `Result<T>` type (never throw exceptions)
- JSDoc comments on factory function
- Business rule: Posts must have a title (min 10 chars) and content (min 50 chars)
**LLM output** (matches pattern exactly):
```typescript
import { Result } from '../types/result';
import { PostRepository } from './post-repository';
import { PostValidator } from './post-validator';

export interface PostServiceDeps {
  repository: PostRepository;
  validator: PostValidator;
}

export interface PostService {
  createPost(input: CreatePostInput): Promise<Result<Post>>;
  publishPost(id: string): Promise<Result<Post>>;
}

/**
 * Creates a post service instance.
 *
 * @param deps - Dependencies for the service
 * @returns PostService interface
 */
export const createPostService = (deps: PostServiceDeps): PostService => {
  const { repository, validator } = deps;

  return {
    async createPost(input: CreatePostInput): Promise<Result<Post>> {
      // Validate input
      const validation = validator.validateCreate(input);
      if (!validation.success) {
        return { success: false, errors: validation.errors };
      }

      // Validate business rules
      if (validation.data.title.length < 10) {
        return {
          success: false,
          errors: ['Title must be at least 10 characters'],
        };
      }

      if (validation.data.content.length < 50) {
        return {
          success: false,
          errors: ['Content must be at least 50 characters'],
        };
      }

      // Create post
      return await repository.create(validation.data);
    },

    async publishPost(id: string): Promise<Result<Post>> {
      // Check post exists
      const post = await repository.findById(id);
      if (!post) {
        return { success: false, errors: ['Post not found'] };
      }

      // Validate post is ready to publish
      if (post.status !== 'draft') {
        return {
          success: false,
          errors: ['Post must be in draft status to publish'],
        };
      }

      // Publish post
      return await repository.update(id, { status: 'published' });
    },
  };
};
```
**Result**: Perfect pattern match on the first try.
## Optimal Number of Examples

Research and empirical testing suggest the following rough guidelines:

### 0 Examples (Zero-Shot)

- **Accuracy**: ~40-60%
- **Problem**: The LLM guesses the pattern from general knowledge
- **Use when**: The pattern is extremely common (e.g., “REST API endpoint”)

### 1 Example (One-Shot)

- **Accuracy**: ~60-75%
- **Problem**: The LLM might treat the example as a special case and fail to generalize the pattern
- **Use when**: The pattern is simple and self-explanatory

### 2-3 Examples (Few-Shot)

- **Accuracy**: ~85-95%
- **Benefit**: The LLM identifies what varies vs. what stays consistent across examples
- **Use when**: You want reliable pattern matching (most cases)

### 4+ Examples (Many-Shot)

- **Accuracy**: ~90-98% (marginal improvement over 2-3)
- **Problem**: Consumes more tokens, slows generation, diminishing returns
- **Use when**: The pattern is complex with many edge cases
### The Sweet Spot: 2-3 Examples

Why 2-3 is optimal:

- **Sufficient for pattern recognition**: The LLM sees consistency across multiple implementations
- **Efficient token usage**: Minimal context overhead
- **Fast generation**: Less input to process
- **Diminishing returns**: 4+ examples rarely improve accuracy significantly
## Best Practices

### 1. Choose Examples That Show Variation

Examples should vary in domain/data but be consistent in pattern/structure.

✅ Good:

- Example 1: User Service (simple entity)
- Example 2: Campaign Service (complex entity with business rules)

❌ Bad:

- Example 1: User Service
- Example 2: User Service Helper (same domain, confusing)
### 2. Keep Examples Complete

Include full implementations, not snippets:

- ✅ Good: Full service file with imports, types, factory function, and all methods
- ❌ Bad: Just the factory function signature

Reason: The LLM needs to see how everything connects (imports, types, dependencies, implementation).
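Completeness also extends to the dependencies an example calls into. The validator dependency that the service examples invoke via `validator.validateCreate(...)` might look like the following sketch; only the method name and return shape come from the examples, while the internals here are assumptions:

```typescript
// Sketch of the validator shape the service examples depend on.
// It returns the same discriminated shape the services propagate.
type Validation<T> =
  | { success: true; data: T }
  | { success: false; errors: string[] };

interface CreatePostInput {
  title: string;
  content: string;
}

export const postValidator = {
  validateCreate(input: CreatePostInput): Validation<CreatePostInput> {
    const errors: string[] = [];
    if (input.title.trim().length === 0) errors.push('Title is required');
    if (input.content.trim().length === 0) errors.push('Content is required');
    return errors.length === 0
      ? { success: true, data: input }
      : { success: false, errors };
  },
};
```

If the validator file is short, consider including it alongside the service example so the LLM sees the full contract rather than inferring it.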
### 3. Annotate Key Patterns

Add inline comments highlighting the pattern:

```typescript
export const createUserService = (deps: UserServiceDeps): UserService => {
  const { repository, validator } = deps; // Pattern: Destructure deps at top

  return {
    async createUser(input: CreateUserInput): Promise<Result<User>> {
      // Pattern: ALWAYS validate input first
      const validation = validator.validateCreate(input);
      if (!validation.success) {
        // Pattern: Return error Result, NEVER throw
        return { success: false, errors: validation.errors };
      }

      // Pattern: Business rule checks after validation
      // ...

      // Pattern: Repository call last
      return await repository.create(validation.data);
    },
  };
};
```
### 4. Maintain Example Consistency

Ensure examples follow exactly the same pattern:

❌ Bad:

- Example 1: Uses factory function
- Example 2: Uses class constructor

✅ Good:

- Example 1: Uses factory function
- Example 2: Uses factory function (same pattern)

Inconsistent examples confuse the LLM about which pattern to follow.
### 5. Update Examples When Patterns Change

When you refactor patterns, update your few-shot examples:

```markdown
# CLAUDE.md - Few-Shot Examples
Last updated: 2025-11-02

## Service Layer Pattern (v2 - Updated Oct 2025)
[New examples reflecting current pattern]

~~## Service Layer Pattern (v1 - Deprecated)~~
```
### 6. Store Examples in CLAUDE.md Files

Don’t repeat examples in every prompt. Store them in hierarchical CLAUDE.md files.

`packages/domain/CLAUDE.md`:

```markdown
# Domain Layer Patterns

## Service Pattern
[Examples of service implementations]

## Repository Pattern
[Examples of repository implementations]
```

Then reference them: “Follow the service pattern shown in packages/domain/CLAUDE.md”
### 7. Combine with Explicit Requirements

Few-shot examples show *how*, but you still need to specify *what*:

```markdown
# Examples
[Show 2-3 service examples]

# Your Task
Create a Post Service with:
- Methods: createPost, publishPost, deletePost
- Validation: title (min 10 chars), content (min 50 chars)
- Business rules: Only draft posts can be published
```
## Common Pitfalls

### ❌ Pitfall 1: Examples Too Different

**Problem**: Examples vary in structure, confusing the LLM.

```typescript
// Example 1: Functional approach
export const createUserService = (deps) => ({ ... });

// Example 2: Class-based approach
export class CampaignService { ... }
```

**Solution**: Ensure all examples follow the same architectural pattern.

### ❌ Pitfall 2: Examples Too Similar

**Problem**: All examples are trivial, so the LLM doesn’t learn how to handle complexity.

```typescript
// Example 1: User Service (1 method, no validation)
// Example 2: Post Service (1 method, no validation)
```

**Solution**: Include at least one example with realistic complexity (validation, business rules, error handling).

### ❌ Pitfall 3: Outdated Examples

**Problem**: Examples reflect old patterns, so new code doesn’t match the current codebase.

**Solution**: Treat few-shot examples as living documentation and update them during refactoring.

### ❌ Pitfall 4: Too Many Examples

**Problem**: 5+ examples consume tokens without improving accuracy.

**Solution**: Stick to 2-3 examples; favor quality over quantity.

### ❌ Pitfall 5: No Explicit Requirements

**Problem**: Examples alone don’t specify what the new code should do.

**Solution**: Always combine examples with an explicit task description.
## Integration with Other Patterns

### Combine with Hierarchical CLAUDE.md

Store few-shot examples in domain-specific CLAUDE.md files:

```
root/
  CLAUDE.md                  (architecture overview)
  packages/
    domain/
      CLAUDE.md              (service/repository examples)
      src/
        users/
          user-service.ts    (referenced as Example 1)
    api/
      CLAUDE.md              (API route examples)
```
See: Hierarchical Context Patterns
### Combine with One-Way Pattern Consistency

Few-shot examples establish patterns; one-way consistency enforces them:

1. Define the pattern with few-shot examples
2. Generate new code using the examples
3. The new code becomes an example for future generations
4. The pattern propagates consistently
### Combine with Quality Gates

Few-shot examples reduce entropy; quality gates verify correctness:

```
Few-Shot Examples → Reduce entropy (narrow valid outputs)
Quality Gates     → Filter remaining outputs (ensure correctness)
Result            → Predictable, correct code
```
See: Quality Gates as Information Filters
### Combine with Semantic Naming

Use semantic names in examples to teach naming conventions:

```typescript
// Example teaches naming patterns:
// - Services: create{Entity}Service
// - Methods: verb + noun (createUser, updateCampaign)
// - Inputs: {Verb}{Entity}Input
// - Results: Result<{Entity}>
export const createUserService = (deps: UserServiceDeps): UserService => {
  return {
    async createUser(input: CreateUserInput): Promise<Result<User>> { ... },
    async updateUser(id: string, input: UpdateUserInput): Promise<Result<User>> { ... },
  };
};
```
See: Semantic Naming for Retrieval
## Measuring Success

### Metric 1: First-Try Accuracy

Track how often generated code matches the pattern on the first generation:

```
Without few-shot: 40% first-try match
With few-shot:    85% first-try match
Improvement:      2.1x
```
### Metric 2: Revision Cycles

Count the iterations needed to achieve the correct pattern:

```
Without few-shot: 3-5 revision cycles
With few-shot:    1-2 revision cycles
Time saved:       60-70%
```
### Metric 3: Pattern Consistency Score

Manually review generated code for pattern adherence. Aspects to check:

- Uses factory function: ✓/✗
- Dependency injection: ✓/✗
- Validation at start: ✓/✗
- Returns Result type: ✓/✗
- JSDoc comments: ✓/✗

Score: 5/5 = 100% consistency
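Parts of this checklist can be automated with crude textual checks. A sketch follows; the regexes are rough heuristics rather than a real parser, and the checks are specific to the factory-function pattern shown earlier:

```typescript
// Sketch: heuristic pattern-adherence score for generated service code.
// Each check is a rough textual approximation of one checklist item.
const patternChecks: Array<[string, (code: string) => boolean]> = [
  ['factory function', (c) => /export const create\w+Service = \(/.test(c)],
  ['dependency injection', (c) => /deps: \w+ServiceDeps/.test(c)],
  ['returns Result type', (c) => /Promise<Result</.test(c)],
  ['never throws', (c) => !/\bthrow\b/.test(c)],
  ['JSDoc comment', (c) => c.includes('/**')],
];

export const consistencyScore = (code: string): number => {
  const passed = patternChecks.filter(([, check]) => check(code)).length;
  return passed / patternChecks.length; // 1.0 = full adherence
};
```

A check like this can run in CI as a soft quality gate, flagging generated code for manual review rather than failing the build outright.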
### Metric 4: Token Efficiency

Compare total tokens spent with abstract instructions vs. examples:

```
Abstract instructions: 500 tokens/prompt × 3-5 revision cycles ≈ 2,500 total tokens
Few-shot examples:     800 tokens/prompt × 1-2 revision cycles ≈ 1,600 total tokens
Savings:               36% fewer tokens
```
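The arithmetic behind that comparison, taking the worst-case cycle counts from the figures above:

```typescript
// Sketch: total cost = tokens per prompt x number of revision cycles (worst case).
const totalTokens = (tokensPerPrompt: number, cycles: number): number =>
  tokensPerPrompt * cycles;

export const abstractCost = totalTokens(500, 5); // 2,500 tokens over 5 cycles
export const fewShotCost = totalTokens(800, 2);  // 1,600 tokens over 2 cycles
export const savings = 1 - fewShotCost / abstractCost; // ~0.36, i.e. 36% fewer tokens
```

Few-shot prompts cost more per request, but the reduction in revision cycles more than pays for the extra context.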
## Real-World Example: API Route Pattern

### Before (Abstract Instructions)

**Prompt**: “Create a tRPC endpoint for creating posts. Use input validation, error handling, and return proper types.”

**Result**: A generic endpoint that doesn’t match project conventions (3 revision cycles needed).

### After (Few-Shot Examples)

**Prompt**:
Here are examples of how we implement tRPC endpoints:
## Example 1: Create User Endpoint
```typescript
import { z } from 'zod';
import { TRPCError } from '@trpc/server';
import { protectedProcedure } from '../trpc';

export const createUser = protectedProcedure
  .input(
    z.object({
      email: z.string().email(),
      name: z.string().min(2),
    })
  )
  .mutation(async ({ input, ctx }) => {
    const { userService } = ctx.services;

    const result = await userService.createUser(input);

    if (!result.success) {
      throw new TRPCError({
        code: 'BAD_REQUEST',
        message: result.errors.join(', '),
      });
    }

    return result.data;
  });
```
## Example 2: Launch Campaign Endpoint

```typescript
import { z } from 'zod';
import { TRPCError } from '@trpc/server';
import { protectedProcedure } from '../trpc';

export const launchCampaign = protectedProcedure
  .input(
    z.object({
      id: z.string().uuid(),
    })
  )
  .mutation(async ({ input, ctx }) => {
    const { campaignService } = ctx.services;

    const result = await campaignService.launchCampaign(input.id);

    if (!result.success) {
      throw new TRPCError({
        code: 'BAD_REQUEST',
        message: result.errors.join(', '),
      });
    }

    return result.data;
  });
```
Now create a tRPC endpoint for creating posts:

- Input: title (string, min 10 chars), content (string, min 50 chars)
- Use protectedProcedure (requires authentication)
- Call ctx.services.postService.createPost
- Handle errors with TRPCError
**Result**: Perfect pattern match on first try (1 cycle instead of 3)
## Conclusion
Few-shot prompting with project examples is the **most effective way** to teach LLMs your codebase patterns:
**Key Benefits**:
1. **Higher accuracy**: 85-95% pattern match vs. 40-60% with abstract instructions
2. **Fewer revisions**: 1-2 cycles vs. 3-5 cycles
3. **Consistent code**: New code looks like existing code
4. **Faster development**: Less time explaining, more time building
5. **Better learning**: Concrete examples > abstract concepts
**Implementation Checklist**:
- ✅ Identify pattern categories (services, repositories, API routes, etc.)
- ✅ Select 2-3 representative examples per pattern
- ✅ Ensure examples are consistent in structure
- ✅ Store examples in hierarchical CLAUDE.md files
- ✅ Combine examples with explicit requirements
- ✅ Update examples when patterns evolve
- ✅ Measure success (first-try accuracy, revision cycles)
**The result**: LLM-generated code that **matches your codebase exactly**, reducing review time and increasing development velocity.
## Related Concepts
- [Chain-of-Thought Prompting](./chain-of-thought-prompting.md) - Structured reasoning with examples
- [Declarative Constraints Prompting](./constraint-based-prompting.md) - Combine examples with declarative constraints
- [Multi-Step Prompt Workflows](./multi-step-prompt-workflows.md) - Use examples at each workflow step
- [Layered Prompts Architecture](./layered-prompts-architecture.md) - Store examples in Domain or Application layers
- [Progressive Disclosure Context](./progressive-disclosure-context.md) - Load examples progressively as needed
- [Explicit Constraints and Non-Goals](./constraint-based-prompting.md) - Combine examples with clear scope boundaries
- [Hierarchical Context Patterns](./hierarchical-context-patterns.md) - Where to store few-shot examples
- [Quality Gates as Information Filters](./quality-gates-as-information-filters.md) - Verifying pattern adherence
- [Semantic Naming Patterns](./semantic-naming-patterns.md) - Teaching naming conventions through examples
- [Prompt Caching Strategy](./prompt-caching-strategy.md) - Cache few-shot examples for 90% cost reduction
## References
- [Few-Shot Learning in LLMs - OpenAI Research](https://arxiv.org/abs/2005.14165) - Research paper on few-shot learning capabilities of large language models
- [Prompt Engineering Guide - Few-Shot Prompting](https://www.promptingguide.ai/techniques/fewshot) - Comprehensive guide to few-shot prompting techniques

