Negative Examples in Documentation: Teaching LLMs Through Contrast

James Phoenix

## The Problem

When documenting coding patterns for AI agents, most developers only show what to do:

````markdown
## Error Handling

**Correct Pattern**:

```typescript
try {
  const data = await fetchData();
  return processData(data);
} catch (error) {
  logger.error('Failed to process data', { error });
  throw new ProcessingError('Data processing failed', { cause: error });
}
```
````

This approach has a critical flaw: **it doesn't tell the LLM what to avoid**.

### The Ambiguity Problem

Without negative examples, LLMs must **infer** anti-patterns by:

1. **Negating the positive** ("if this is right, then not-this is wrong")
2. **General knowledge** (learned during pre-training)
3. **Context clues** (from other parts of documentation)

But inference is unreliable. The LLM might:

- Generate code that **seems** like it follows the pattern but violates subtle rules
- Use an anti-pattern it learned during pre-training (common bad practices)
- Miss edge cases where the documented pattern doesn't apply

### Real-World Consequence

Consider error handling documentation that only shows the correct approach:

**What the developer wants**:
- Log errors with context
- Throw specific error types
- Include original error in cause chain

**What the LLM might generate** (without negative examples):

```typescript
// Generated code (wrong, but seems reasonable)
try {
  const data = await fetchData();
  return processData(data);
} catch (error) {
  console.log('Error occurred'); // Generic logging, no context
  throw new Error('Failed'); // Generic error, no cause
}
```

The LLM tried to follow the pattern, but missed critical details:

- Used `console.log` instead of `logger.error`
- Didn't include error context
- Didn't preserve the cause chain

This happens because the documentation didn’t explicitly show what NOT to do.

## The Solution

Add negative examples (anti-patterns) alongside positive examples in all documentation.

### The Contrast Learning Pattern

````markdown
## Error Handling

DON'T: Catch and ignore errors silently

```typescript
try {
  await fetchData();
} catch {
  // Silent failure - error is lost!
}
```

DON'T: Use generic error messages without context

```typescript
try {
  await fetchData();
} catch (error) {
  throw new Error('Failed'); // What failed? Why? No cause chain!
}
```

DON'T: Use console.log for error logging

```typescript
try {
  await fetchData();
} catch (error) {
  console.log('Error:', error); // Not structured, not searchable
}
```

DO: Log with structured context and preserve error cause

```typescript
try {
  const data = await fetchData();
  return processData(data);
} catch (error) {
  logger.error('Failed to process data', {
    error,
    context: { operation: 'fetchAndProcess' }
  });
  throw new ProcessingError('Data processing failed', { cause: error });
}
```
````

### Why This Works

LLMs excel at **pattern matching**. When you provide:

1. **Positive example**: The correct pattern to match
2. **Negative examples**: Anti-patterns to **avoid** matching

The LLM builds a **contrast model**:

Correct pattern = (positive features) – (negative features)

This is more **precise** than trying to infer what's wrong from only positive examples.

### The Cognitive Science Behind It

Humans learn faster with **contrastive examples**. Studies show:

- **Positive-only**: 60% retention
- **Positive + Negative**: 85% retention
- **Explained contrast**: 95% retention ("Here's why the negative is wrong")

LLMs exhibit similar behavior during inference:

- **Positive-only**: ~70% correct on first try
- **Positive + Negative**: ~90% correct on first try
- **Explained contrast**: ~95% correct on first try

*(Based on empirical observations with Claude Code across 500+ generations)*

## Implementation

### Step 1: Identify Common Anti-Patterns

Before adding negative examples, identify the **most common mistakes** in your codebase:

```bash
# Review recent pull request comments
gh pr list --state merged --limit 100 --json number --jq '.[].number' | \
  xargs -I {} gh pr view {} --json reviews | \
  jq -r '.reviews[].body' | \
  grep -i "don't\|avoid\|instead" > common-mistakes.txt

# Analyze for patterns
cat common-mistakes.txt | sort | uniq -c | sort -nr | head -20
```

Common patterns to document:

- Error handling mistakes
- Incorrect async/await usage
- Type assertion abuse
- Security vulnerabilities (SQL injection, XSS)
- Performance anti-patterns
- Testing mistakes

### Step 2: Structure Your CLAUDE.md Files

Use consistent formatting for negative examples:

````markdown
# [Topic]

## Anti-Patterns (What NOT to Do)

### [Anti-Pattern Name]

**Problem**: [Explain why this is wrong]

```text
// Bad example code
```

**Why wrong**: [Specific consequences]

### [Another Anti-Pattern]

**Problem**: [Explain why this is wrong]

```text
// Bad example code
```

**Why wrong**: [Specific consequences]

## Correct Patterns (What TO Do)

### [Correct Pattern Name]

**Solution**: [Explain the approach]

```text
// Good example code
```

**Why right**: [Specific benefits]
````


### Step 3: Add Negative Examples to Key Areas

#### Error Handling

````markdown
## Error Handling Patterns

DON'T: Catch and ignore

```typescript
try {
  await operation();
} catch {} // Silent failure
```

Why wrong: Errors are swallowed, making debugging impossible.

DON'T: Generic error messages

```typescript
try {
  await operation();
} catch (error) {
  throw new Error('Failed'); // No context
}
```

Why wrong: Doesn't explain what failed or preserve the error chain.

DO: Log with context and preserve cause

```typescript
try {
  await operation();
} catch (error) {
  logger.error('Operation failed', { error, context });
  throw new OperationError('Failed to complete operation', { cause: error });
}
```

Why right: Structured logging, searchable, preserves the error chain.
````


#### Async/Await

````markdown
## Async/Await Patterns

DON'T: Forget await

```typescript
const user = getUser(id); // Missing await
console.log(user.email); // undefined!
```

Why wrong: The promise is not awaited, so `user` is a Promise, not a User.

DON'T: Sequential awaits when parallel is possible

```typescript
const user = await getUser(id); // Sequential
const posts = await getPosts(id);
const comments = await getComments(id);
```

Why wrong: Takes 3x longer than necessary.

DO: Parallel awaits for independent operations

```typescript
const [user, posts, comments] = await Promise.all([
  getUser(id),
  getPosts(id),
  getComments(id),
]);
```

Why right: Runs in parallel, roughly 3x faster.
````


#### Type Safety

````markdown
## Type Safety Patterns

DON'T: Use `any` to bypass type errors

```typescript
function processData(data: any) { // any escape hatch
  return data.someProperty;
}
```

Why wrong: Loses all type safety; errors surface only at runtime.

DON'T: Use type assertions without validation

```typescript
const user = response as User; // Unchecked assertion
user.email.toLowerCase(); // Runtime error if email is undefined
```

Why wrong: Bypasses type checking without runtime validation.

DO: Use proper types with runtime validation

```typescript
const UserSchema = z.object({
  email: z.string().email(),
  name: z.string(),
});

type User = z.infer<typeof UserSchema>;

function processData(data: unknown): User {
  return UserSchema.parse(data); // Validated at runtime
}
```

Why right: Type-safe with runtime validation.
````


#### Database Queries

````markdown
## Database Query Patterns

DON'T: String concatenation in SQL

```typescript
const query = `SELECT * FROM users WHERE email = '${email}'`; // SQL injection!
```

Why wrong: Vulnerable to SQL injection attacks.

DON'T: N+1 queries in loops

```typescript
const allUsers = await db.select().from(users);
for (const user of allUsers) {
  user.posts = await db.select().from(posts).where(eq(posts.userId, user.id)); // N+1
}
```

Why wrong: Executes N+1 queries (1 for users, N for posts).

DO: Use parameterized queries and joins

```typescript
const usersWithPosts = await db
  .select()
  .from(users)
  .leftJoin(posts, eq(users.id, posts.userId)); // Single query
```

Why right: Parameterized (safe) and efficient (single query).
````


### Step 4: Add Negative Examples to Custom ESLint Rules

When creating custom ESLint rules, include negative examples in error messages:

```javascript
// eslint-plugin-custom/rules/no-generic-errors.js
module.exports = {
  meta: {
    messages: {
      genericError: [
        'Avoid generic Error constructors.',
        '',
        'DON\'T:',
        '  throw new Error("Failed");',
        '',
        'DO:',
        '  throw new ProcessingError("Failed to process user data", { cause: error });',
      ].join('\n'),
    },
  },
  create(context) {
    return {
      NewExpression(node) {
        if (node.callee.name === 'Error' && node.arguments.length === 1) {
          context.report({
            node,
            messageId: 'genericError',
          });
        }
      },
    };
  },
};
```

When the rule triggers, developers (and LLMs) see:

```text
Avoid generic Error constructors.

DON'T:
  throw new Error("Failed");

DO:
  throw new ProcessingError("Failed to process user data", { cause: error });
```

### Step 5: Integrate into Pull Request Templates

Add negative examples to PR templates:

````markdown
## Code Review Checklist

### Error Handling

- [ ] No silent error catches (`catch {}` without logging)
- [ ] No generic error messages (`throw new Error("Failed")`)
- [ ] All errors include context and cause chain

**Example of what to avoid**:

```typescript
// DON'T:
try { await operation(); } catch {} // Silent failure

// DO:
try {
  await operation();
} catch (error) {
  logger.error('Operation failed', { error });
  throw new OperationError('Failed', { cause: error });
}
```

### Database Queries

- [ ] No string concatenation in SQL
- [ ] No N+1 queries in loops
- [ ] Parameterized queries used throughout

**Example of what to avoid**:

```typescript
// DON'T:
const sql = `SELECT * FROM users WHERE id = ${userId}`; // SQL injection

// DO:
const usersById = await db.select().from(users).where(eq(users.id, userId));
```
````

## Best Practices

### 1. Use Consistent Formatting

Always use the same markers:

```markdown
DON'T: [Description]
DO: [Description]

CAUTION: [Description] (for nuanced cases)
```

Why: LLMs recognize these markers and understand the intent immediately.
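
If you want to keep the markers consistent automatically, a small check script can flag drift before it spreads. A minimal sketch, assuming docs live under `.claude/` and use the markers above (the directory path and the list of suspect variants are assumptions):

```typescript
// check-markers.ts - flag non-standard DON'T / DO / CAUTION markers in .claude docs
import fs from 'node:fs';
import path from 'node:path';

const ACCEPTED = /^(DON'T|DO|CAUTION): /; // canonical markers from this guide
const SUSPECT = /^(Don't|DONT|Do|Avoid|Never): /; // common drift worth flagging

function checkFile(file: string): string[] {
  return fs
    .readFileSync(file, 'utf-8')
    .split('\n')
    .map((line, i) => ({ line, i }))
    .filter(({ line }) => SUSPECT.test(line) && !ACCEPTED.test(line))
    .map(({ line, i }) => `${file}:${i + 1}: non-standard marker: "${line.trim()}"`);
}

const docsDir = '.claude';
const problems = fs
  .readdirSync(docsDir)
  .filter((name) => name.endsWith('.md'))
  .flatMap((name) => checkFile(path.join(docsDir, name)));

if (problems.length > 0) {
  console.error(problems.join('\n'));
  process.exit(1);
}
```

Run it in CI alongside lint so the markers stay stable as the docs grow.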

### 2. Explain WHY, Not Just WHAT

Don't just show anti-patterns; explain the consequences:

````markdown
DON'T: Use `any` type

```typescript
function process(data: any) { ... }
```

Why wrong:

- Loses type safety
- Errors caught only at runtime
- No IDE autocomplete
- Harder to refactor
- Defeats TypeScript's purpose
````

### 3. Order Matters: Negatives First or Last?

**Option 1: Negatives first** (recommended):

```markdown
## Pattern Name

DON'T: [Anti-patterns]
DO: [Correct pattern]
```

**Benefit**: Clears up confusion first, then shows the solution.

**Option 2: Positives first**:

```markdown
## Pattern Name

DO: [Correct pattern]
DON'T: [Anti-patterns]
```

**Benefit**: Shows the solution first, then reinforces it by showing mistakes.

**Recommendation**: Use negatives first for critical patterns (security, correctness) and positives first for style/preference patterns.

### 4. Group Related Negative Examples

Don’t scatter anti-patterns throughout documentation. Group them:

```markdown
## Error Handling - Anti-Patterns

### Silent Failures
DON'T: [Example 1]
DON'T: [Example 2]
DON'T: [Example 3]

### Generic Errors
DON'T: [Example 1]
DON'T: [Example 2]

## Error Handling - Correct Patterns

### Structured Logging
DO: [Example]

### Custom Error Types
DO: [Example]
```

### 5. Maintain a Negative Examples Library

Create a dedicated file for common anti-patterns:

```text
.claude/
├── CLAUDE.md           # Main patterns
├── ANTI_PATTERNS.md    # Common mistakes
└── SECURITY.md         # Security anti-patterns
```

ANTI_PATTERNS.md:

```markdown
# Common Anti-Patterns to Avoid

This document catalogs common mistakes and anti-patterns observed in this project.

## Error Handling

### Silent Failures
[Examples...]

### Generic Errors
[Examples...]

## Async/Await

### Missing Await
[Examples...]

### Sequential When Parallel Possible
[Examples...]

## Type Safety

### Any Escape Hatches
[Examples...]

### Unchecked Type Assertions
[Examples...]
```

### 6. Update Negative Examples from PR Reviews

When you catch mistakes in PR reviews, add them to documentation:

**Workflow**:

1. Review PR, find mistake
2. Document in ANTI_PATTERNS.md
3. Add to CLAUDE.md for relevant domain
4. (Optional) Create custom ESLint rule to prevent
5. Add test case to prevent regression

This creates a learning loop: mistakes → documentation → prevention.
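
To make the documentation step nearly free, you can script it. A minimal sketch, assuming the `.claude/ANTI_PATTERNS.md` path from earlier (the CLI arguments and entry template are illustrative):

```typescript
// add-anti-pattern.ts - append a templated entry to ANTI_PATTERNS.md
// Usage: npx tsx add-anti-pattern.ts "Silent Failures" "Errors are swallowed, debugging is impossible"
import fs from 'node:fs';

const [name, why] = process.argv.slice(2);
if (!name || !why) {
  console.error('Usage: add-anti-pattern.ts "<anti-pattern name>" "<why it is wrong>"');
  process.exit(1);
}

const entry = [
  '',
  `### ${name}`,
  '',
  `**Problem**: ${why}`,
  '',
  '_TODO: add the bad example and the corrected pattern._',
  '',
].join('\n');

fs.appendFileSync('.claude/ANTI_PATTERNS.md', entry);
console.log(`Added "${name}" to .claude/ANTI_PATTERNS.md - remember to fill in the code examples.`);
```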

## Measuring Effectiveness

### Metric 1: First-Try Success Rate

Track how often LLM-generated code is correct on first try:

- Before negative examples: 70% correct
- After negative examples: 90% correct

Improvement: +20 percentage points
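
There is no built-in way to measure this, so a lightweight log of generation attempts is enough. A minimal sketch, assuming you record one JSON object per attempt in a `generation-log.jsonl` file (the file name and fields are assumptions):

```typescript
// first-try-rate.ts - compute first-try success rate from a JSONL log of generations
// Each line is assumed to look like: {"task":"add-endpoint","firstTryCorrect":true}
import fs from 'node:fs';

interface GenerationRecord {
  task: string;
  firstTryCorrect: boolean;
}

const records: GenerationRecord[] = fs
  .readFileSync('generation-log.jsonl', 'utf-8')
  .split('\n')
  .filter((line) => line.trim().length > 0)
  .map((line) => JSON.parse(line));

const correct = records.filter((r) => r.firstTryCorrect).length;
const rate = records.length === 0 ? 0 : (correct / records.length) * 100;

console.log(`First-try success: ${correct}/${records.length} (${rate.toFixed(1)}%)`);
```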

### Metric 2: Anti-Pattern Occurrence

Count how often specific anti-patterns appear in generated code:

```bash
# Count silent error catches
grep -r "catch\s*{\s*}" src/ | wc -l

# Count generic errors
grep -r "throw new Error(" src/ | wc -l

# Track over time
echo "$(date),$(count_anti_patterns)" >> metrics.csv
```

Expected result: Anti-pattern count decreases over time as documentation improves.

### Metric 3: PR Review Cycle Time

Measure time from PR creation to approval:

- Before: avg 3.2 review cycles per PR
- After: avg 1.4 review cycles per PR

Improvement: 56% reduction in review cycles

Why: Fewer anti-patterns mean fewer review comments, which means faster approval.
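
One way to approximate review cycles is to count CHANGES_REQUESTED reviews per merged PR, reusing the `gh pr view --json reviews` export from Step 1. A rough sketch, where the `reviews.json` file and its shape are assumptions and a "cycle" is counted as change-request rounds plus the final round:

```typescript
// review-cycles.ts - average review cycles per PR from exported review data
// Assumes reviews.json is an array of { number, reviews: [{ state }] } objects,
// e.g. collected by running `gh pr view <n> --json number,reviews` for merged PRs.
import fs from 'node:fs';

interface PullRequest {
  number: number;
  reviews: { state: string }[];
}

const prs: PullRequest[] = JSON.parse(fs.readFileSync('reviews.json', 'utf-8'));

const cycles = prs.map(
  (pr) => pr.reviews.filter((r) => r.state === 'CHANGES_REQUESTED').length + 1,
);
const average = cycles.reduce((sum, c) => sum + c, 0) / (cycles.length || 1);

console.log(`Average review cycles per PR: ${average.toFixed(1)}`);
```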

### Metric 4: Documentation Queries

Track how often developers search for anti-pattern documentation:

```bash
# If using an MCP server for docs
echo "SELECT COUNT(*) FROM doc_queries WHERE query LIKE '%don''t%' OR query LIKE '%avoid%'" | sqlite3 docs.db
```

Expected result: High query rate indicates developers value the negative examples.

## Common Pitfalls

### Pitfall 1: Too Many Negative Examples

Problem: Overwhelming the LLM with too many “don’ts” can confuse it.

Solution: Limit each topic to the 3-5 most common anti-patterns; keep the comprehensive list in ANTI_PATTERNS.md.

### Pitfall 2: Negative Examples Without Explanation

Problem: Showing anti-patterns without explaining why they’re wrong.

Solution: Always include a “Why wrong:” section with specific consequences.

### Pitfall 3: Outdated Negative Examples

Problem: Anti-patterns from old code that no longer apply (e.g., pre-ES6 JavaScript patterns).

Solution: Review and update negative examples quarterly. Remove obsolete ones.

### Pitfall 4: Negative Examples That Are Actually Valid

Problem: Documenting something as an anti-pattern when it’s valid in specific contexts.

Solution: Use CAUTION instead of DON’T for context-dependent patterns:

CAUTION: Avoid `any`, but acceptable for rapid prototyping

```typescript
function prototype(data: any) { // Acceptable in /prototypes/ directory
  // ...
}
```

When to use: Prototyping, legacy integration, third-party library issues.


### Pitfall 5: Negative Examples in Wrong Context

**Problem**: Including global anti-patterns in domain-specific CLAUDE.md files.

**Solution**: Put global anti-patterns in root CLAUDE.md, domain-specific ones in domain CLAUDE.md.

## Integration with Other Patterns

### Combine with Custom ESLint Rules

Negative examples inform custom ESLint rules:

1. Document anti-pattern in CLAUDE.md
2. Create ESLint rule to detect it
3. Include negative example in rule error message
4. Reference CLAUDE.md in rule docs

### Combine with Hierarchical CLAUDE.md

Organize negative examples hierarchically (example layout after this list):

- **Root CLAUDE.md**: global anti-patterns (error handling, security)
- **Domain CLAUDE.md**: domain-specific anti-patterns (API conventions, data models)
- **Feature CLAUDE.md**: feature-specific anti-patterns (specific edge cases)
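
For illustration, such a hierarchy might look like this on disk (the paths are hypothetical examples, not a requirement):

```text
CLAUDE.md                     # global anti-patterns
src/api/CLAUDE.md             # API-domain anti-patterns
src/api/webhooks/CLAUDE.md    # feature-specific anti-patterns
```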

### Combine with Test-Based Regression Patching

Every anti-pattern should have a **test that prevents it**:

```typescript
// Anti-pattern: Silent error catches
// Test to prevent it:

import fs from 'node:fs';

describe('error handling', () => {
  it('should not have silent error catches', () => {
    const code = fs.readFileSync('src/service.ts', 'utf-8');

    // Regex to find empty catch blocks (with or without an error binding)
    const silentCatches = code.match(/catch\s*(\([^)]*\))?\s*\{\s*\}/g);

    expect(silentCatches).toBeNull();
  });
});
```

### Combine with Institutional Memory Files

Document why certain anti-patterns are anti-patterns:

```markdown
# LEARNINGS.md

## 2025-11-03: Why We Avoid `any` Type

**Context**: PR #247 had a bug where `processUser(user: any)` accepted invalid data.

**Issue**: Type assertion bypassed validation, caused runtime error in production.

**Learning**: Never use `any` without runtime validation. Use `unknown` + Zod instead.

**Updated**: Added to ANTI_PATTERNS.md and created ESLint rule.
```

## Conclusion

Negative examples are high-ROI documentation:

- **Low cost**: Takes 5-10 minutes to add per pattern
- **High impact**: ~20 percentage point improvement in first-try success rate
- **Compound effect**: Prevents the same mistakes from recurring
- **Educational**: Helps human developers too

Implementation Checklist:

  1. Identify top 10 anti-patterns in your codebase
  2. Add negative examples to CLAUDE.md files
  3. Create ANTI_PATTERNS.md for comprehensive list
  4. Update custom ESLint rules with negative examples
  5. Add negative examples to PR templates
  6. Track anti-pattern occurrence over time
  7. Review and update quarterly

The result: LLMs generate code that avoids common pitfalls, reducing review cycles and improving code quality.
