The Problem
Monorepos are powerful: shared code, consistent tooling, atomic commits across services. But they have a hidden cost: applying changes across packages is painfully slow.
The Sequential Bottleneck
Consider this common scenario:
“Update @company/logger from v2 to v3 across all 20 packages in our monorepo. The API changed—replace logger.log() with logger.info().”
With a traditional sequential approach:
# Package 1: api-gateway
Claude: Updating api-gateway...
- Read package.json
- Update dependency version
- Find all logger.log() calls
- Replace with logger.info()
- Run tests
Complete (8 minutes)
# Package 2: user-service
Claude: Updating user-service...
- Read package.json
- Update dependency version
- Find all logger.log() calls
- Replace with logger.info()
- Run tests
Complete (7 minutes)
# Package 3: payment-service
Claude: Updating payment-service...
[... 18 more packages ...]
Total time: 20 packages x 7.5 min avg = 150 minutes (2.5 hours)
Why LLMs Struggle with “Do This Everywhere”
When you ask an LLM to “update all packages,” it faces several challenges:
1. Context Overload
// LLM receives:
- 20 package.json files
- 200+ source files across packages
- Shared dependencies
- Package-specific configurations
- Cross-package imports
// Context window: ~200K tokens
// Required context: ~500K tokens
// Result: Missing packages, incomplete updates
2. Inconsistent Application
LLMs may apply the pattern differently across packages:
// Package A: Correctly updated
import { logger } from '@company/logger';
logger.info('User created');
// Package B: Missed some calls
import { logger } from '@company/logger';
logger.log('Payment processed'); // Should be logger.info()
logger.info('Invoice sent');
// Package C: Wrong pattern
import { logger } from '@company/logger';
console.log('Order created'); // Should use logger.info()
3. Lost Context
As the LLM processes more packages, it forgets earlier decisions:
Package 1-5: Carefully updates each file
Package 6-10: Starts missing edge cases
Package 11-15: Applies inconsistent patterns
Package 16-20: Rushes through, introduces bugs
4. Error Accumulation
Errors in early packages can cascade:
Package 3: Introduces breaking change
Package 7: Depends on Package 3, now broken
Package 12: Depends on Package 7, now broken
Result: 3 packages broken, 2 hours of debugging
Real-World Example
Scenario: Update tRPC from v10.0 to v10.45 across 15 packages
Sequential approach:
- Time: 2 hours 15 minutes
- Errors: 4 packages had incorrect router imports
- Missed: 2 packages still using deprecated .query() syntax
- Developer frustration: High (babysitting LLM for 2+ hours)
Parallel approach:
- Time: 18 minutes
- Errors: 0 (each agent had focused context)
- Coverage: 100% (explicit agent per package)
- Developer experience: Excellent (fire and forget)
The Solution
Spawn parallel agents—one per package—using Claude Code’s agent SDK or similar tooling.
How It Works
// pseudo-code for parallel agent orchestration
const packages = [
  'packages/api-gateway',
  'packages/user-service',
  'packages/payment-service',
  'packages/notification-service',
  // ... 16 more packages
];

const task = `
Update @company/logger from v2 to v3.
Changes required:
1. Update package.json dependency
2. Replace logger.log() with logger.info()
3. Verify logger.error() usage (unchanged in v3)
4. Run tests to verify
Context: You are working on a single package. Focus only on this package.
`;

// Spawn parallel agents
const agents = packages.map(packagePath => {
  return spawnAgent({
    name: `update-logger-${path.basename(packagePath)}`,
    workingDirectory: packagePath,
    task: task,
    context: {
      files: [
        `${packagePath}/package.json`,
        `${packagePath}/src/**/*.ts`,
        `${packagePath}/CLAUDE.md`, // Package-specific context
      ],
    },
  });
});

// Wait for all agents to complete
const results = await Promise.all(agents.map(agent => agent.waitForCompletion()));

// Aggregate results
const summary = {
  total: packages.length,
  succeeded: results.filter(r => r.status === 'success').length,
  failed: results.filter(r => r.status === 'failed').length,
  duration: Math.max(...results.map(r => r.durationMs)), // wall-clock = slowest agent
};
console.log(`Updated ${summary.succeeded}/${summary.total} packages in ${summary.duration}ms`);
Key Benefits
1. Focused Context
Each agent receives only the context it needs:
Agent 1 (api-gateway):
- Context: packages/api-gateway/**
- Size: 5K tokens
- Clarity: 100% (no other packages to confuse)
Agent 2 (user-service):
- Context: packages/user-service/**
- Size: 4.2K tokens
- Clarity: 100%
[... etc for all packages ...]
Versus sequential approach:
Single Agent (all packages):
- Context: packages/** (all 20 packages)
- Size: 80K tokens
- Clarity: 40% (confusion between packages)
2. Parallel Execution
With 20 packages and 8-minute average per package:
Sequential: 20 x 8min = 160 minutes
Parallel: max(8min, 7min, 9min, ...) = 9 minutes (slowest agent)
Speedup: 160 / 9 = 17.8x
Practical speedup: ~10x (accounting for overhead)
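The arithmetic above generalizes to any set of measured durations: sequential time is the sum, parallel time is bounded by the slowest agent. A quick estimator as a sketch (hypothetical helper; the timings passed in are made up):

```typescript
// Rough wall-clock estimate: sequential time is the sum of all package
// durations, parallel time is bounded by the slowest agent.
function estimateSpeedup(perPackageMinutes: number[]) {
  const sequential = perPackageMinutes.reduce((sum, m) => sum + m, 0);
  const parallel = Math.max(...perPackageMinutes);
  return { sequential, parallel, speedup: sequential / parallel };
}

const est = estimateSpeedup([8, 7, 9, 6, 8]); // hypothetical per-package timings
console.log(`${est.sequential}min sequential vs ${est.parallel}min parallel (${est.speedup.toFixed(1)}x)`);
```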
3. Consistent Application
Each agent follows the same instructions with no context drift:
// All agents receive identical task
const task = `
1. Update package.json: "@company/logger": "^3.0.0"
2. Find all instances of logger.log() and replace with logger.info()
3. Verify no logger.debug() calls (deprecated in v3)
4. Run: npm test
5. Report any failures
`;
// Result: Consistent changes across all packages
4. Error Isolation
If one agent fails, others continue:
api-gateway: Updated successfully
user-service: Updated successfully
payment-service: Tests failed (unrelated issue)
notification-service: Updated successfully
analytics-service: Updated successfully
...
Result: 19/20 packages updated, 1 isolated failure to debug
When to Use Parallel Agents
This pattern works best when:
1. Task is Identical Across Packages
Good:
- Update dependency version
- Refactor API calls (e.g., old.method() to new.method())
- Add new linting rule and fix violations
- Migrate from deprecated API to new API
- Add missing type annotations
Not ideal:
- “Improve error handling” (too vague, requires different solutions per package)
- “Optimize performance” (different bottlenecks per package)
- “Refactor architecture” (requires coordination between packages)
2. Packages are Independent
Good:
- Microservices in a monorepo (loosely coupled)
- Independent libraries (e.g., UI component packages)
- Tools and utilities (e.g., linters, formatters)
Not ideal:
- Tightly coupled packages (changes require coordination)
- Shared state between packages
- Packages that import from each other (circular dependencies)
3. Clear Success Criteria
Good:
- “Tests pass after update”
- “Linter reports no errors”
- “TypeScript compiles successfully”
- “All logger.log() replaced with logger.info()”
Not ideal:
- “Code looks better”
- “Performance improves”
- “Architecture is cleaner”
Implementation
Step 1: Identify Packages
Use a simple script to list packages:
// scripts/list-packages.ts
import { readdir } from 'fs/promises';
import { join } from 'path';

const MONOREPO_ROOT = process.cwd();
const PACKAGES_DIR = join(MONOREPO_ROOT, 'packages');

async function listPackages(): Promise<string[]> {
  const entries = await readdir(PACKAGES_DIR, { withFileTypes: true });
  return entries
    .filter(entry => entry.isDirectory())
    .map(entry => join(PACKAGES_DIR, entry.name));
}

// Usage
const packages = await listPackages();
console.log(packages);
// [
//   '/path/to/monorepo/packages/api-gateway',
//   '/path/to/monorepo/packages/user-service',
//   ...
// ]
Or use existing monorepo tools:
# pnpm workspaces
pnpm list -r --depth -1 --json | jq -r '.[].path'
# npm workspaces
npm query .workspace | jq -r '.[].location'
# yarn workspaces
yarn workspaces list --json | jq -r '.location'
# lerna
lerna list --json | jq -r '.[].location'
Step 2: Define the Task
Write a clear, focused task description:
# Task: Update @company/logger v2 -> v3
## Context
You are updating a single package in our monorepo. Focus only on this package.
## Changes Required
1. **Update dependency**
- In `package.json`, change `"@company/logger": "^2.0.0"` to `"@company/logger": "^3.0.0"`
2. **Update import statements**
- Old: `import { Logger } from '@company/logger';`
- New: `import { createLogger } from '@company/logger';`
- Then: `const logger = createLogger({ service: 'package-name' });`
3. **Update method calls**
- Replace `logger.log(message)` with `logger.info(message)`
- Remove any `logger.debug(message)` calls (`logger.debug()` is deprecated in v3)
- Replace `logger.warning(message)` with `logger.warn(message)` (note: warning -> warn)
4. **Verify changes**
- Run `npm test` to ensure all tests pass
- Check that no deprecation warnings appear
## Expected Files to Modify
- `package.json`
- Any `.ts` or `.js` files in `src/` that import `@company/logger`
## Success Criteria
- All tests pass
- No references to old API remain
- Package builds successfully
Step 3: Spawn Agents (Manual Approach)
In Claude Code, spawn agents manually:
# In Claude Code chat:
User: Spawn parallel agents to update @company/logger across all packages.
Packages:
- packages/api-gateway
- packages/user-service
- packages/payment-service
- packages/notification-service
- packages/analytics-service
For each package, create an agent with this task:
[paste task from Step 2]
Spawn all agents in parallel.
Claude Code will create multiple agent tasks simultaneously.
Step 4: Spawn Agents (Automated Approach)
For more control, use the Agent SDK directly:
// scripts/parallel-update.ts
import { spawnAgent, AgentConfig } from '@anthropic/agent-sdk';
import { readdir } from 'fs/promises';
import { join } from 'path';
const TASK_TEMPLATE = `
# Task: Update @company/logger v2 -> v3
[... task description from Step 2 ...]
`;
async function updateAllPackages() {
  // Get all packages
  const packagesDir = join(process.cwd(), 'packages');
  const packages = (await readdir(packagesDir, { withFileTypes: true }))
    .filter(entry => entry.isDirectory())
    .map(entry => join(packagesDir, entry.name));

  console.log(`Found ${packages.length} packages`);
  console.log(`Spawning ${packages.length} parallel agents...`);

  // Spawn agents
  const agents = packages.map(async (packagePath) => {
    const packageName = packagePath.split('/').pop()!;
    const config: AgentConfig = {
      name: `update-logger-${packageName}`,
      workingDirectory: packagePath,
      task: TASK_TEMPLATE,
      timeout: 600000, // 10 minutes
    };
    console.log(`  - Spawning agent for ${packageName}...`);
    return spawnAgent(config);
  });

  // Wait for all agents
  const results = await Promise.allSettled(agents);

  // Aggregate results
  const succeeded = results.filter(r => r.status === 'fulfilled').length;
  const failed = results.filter(r => r.status === 'rejected').length;
  console.log(`\nSuccess: ${succeeded}/${packages.length}`);
  console.log(`Failed: ${failed}/${packages.length}`);

  // Report failures
  results.forEach((result, index) => {
    if (result.status === 'rejected') {
      const packageName = packages[index].split('/').pop();
      console.error(`\n${packageName} failed:`);
      console.error(result.reason);
    }
  });
}

updateAllPackages();
Run it:
npx tsx scripts/parallel-update.ts
Step 5: Verify Results
After agents complete, verify changes:
# Check that all packages updated
git status
# Expected output:
# modified: packages/api-gateway/package.json
# modified: packages/api-gateway/src/logger.ts
# modified: packages/user-service/package.json
# modified: packages/user-service/src/logger.ts
# ...
# Run tests across all packages
npm run test:all
# or
pnpm run -r test
# or
yarn workspaces run test
# Check for any remaining old API usage
rg "logger\.log\(" packages/
# Should return no results
rg "logger\.warning\(" packages/
# Should return no results (should be logger.warn)
rg "logger\.debug\(" packages/
# Should return no results (deprecated in v3)
Step 6: Review and Commit
# Review changes
git diff packages/
# Stage all changes
git add packages/
# Commit with descriptive message
git commit -m "Update @company/logger from v2 to v3 across all packages
- Update dependency to ^3.0.0
- Replace Logger import with createLogger
- Update logger.log() -> logger.info()
- Update logger.warning() -> logger.warn()
- All tests passing
Updated via parallel agents (20 packages in 18 minutes)"
Advanced Patterns
Pattern 1: Staged Rollout
Update packages in batches to reduce risk:
// scripts/staged-update.ts
const packages = await listPackages();

// Batch 1: Low-risk packages (tools, utilities)
const batch1 = packages.filter(pkg =>
  pkg.includes('utils') || pkg.includes('tools')
);
await updatePackages(batch1);
await verifyBatch(batch1);

// Batch 2: Internal services
const batch2 = packages.filter(pkg =>
  pkg.includes('service') && !pkg.includes('api-gateway')
);
await updatePackages(batch2);
await verifyBatch(batch2);

// Batch 3: Critical services (api-gateway)
const batch3 = packages.filter(pkg =>
  pkg.includes('api-gateway')
);
await updatePackages(batch3);
await verifyBatch(batch3);
Pattern 2: Dependency-Aware Ordering
Update packages in dependency order:
// scripts/dependency-order.ts
import { readFile } from 'fs/promises';
import { join } from 'path';
async function readPackageJson(packagePath: string) {
  return JSON.parse(
    await readFile(join(packagePath, 'package.json'), 'utf-8')
  );
}

async function getPackageDependencies(packagePath: string): Promise<string[]> {
  const packageJson = await readPackageJson(packagePath);
  const deps = {
    ...packageJson.dependencies,
    ...packageJson.devDependencies,
  };
  return Object.keys(deps).filter(dep => dep.startsWith('@company/'));
}

async function topologicalSort(packages: string[]): Promise<string[]> {
  // Map package names (e.g. @company/logger) back to paths so that
  // dependency names can be resolved to entries in `packages`
  const nameToPath = new Map<string, string>();
  for (const pkg of packages) {
    nameToPath.set((await readPackageJson(pkg)).name, pkg);
  }

  // Build dependency graph keyed by package path
  const graph = new Map<string, string[]>();
  for (const pkg of packages) {
    const depPaths = (await getPackageDependencies(pkg))
      .map(name => nameToPath.get(name))
      .filter((p): p is string => p !== undefined);
    graph.set(pkg, depPaths);
  }

  // Depth-first visit: dependencies are emitted before their dependents
  const sorted: string[] = [];
  const visited = new Set<string>();
  function visit(pkg: string) {
    if (visited.has(pkg)) return;
    visited.add(pkg);
    for (const dep of graph.get(pkg) ?? []) {
      visit(dep);
    }
    sorted.push(pkg);
  }
  for (const pkg of packages) {
    visit(pkg);
  }
  return sorted;
}
// Update packages in dependency order
const packages = await listPackages();
const sortedPackages = await topologicalSort(packages);
// Now update in batches, respecting dependencies
for (let i = 0; i < sortedPackages.length; i += 5) {
  const batch = sortedPackages.slice(i, i + 5);
  await updatePackages(batch); // 5 at a time
}
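Slicing the sorted list five at a time can still put a package and one of its dependents into the same batch. A level-based grouping avoids that: every package in a level depends only on packages from earlier levels, so each whole level can safely run in parallel. A sketch, assuming the dependency graph is acyclic and keyed by package path (`dependencyLevels` is a hypothetical helper, not part of the script above):

```typescript
// Group packages into "levels": every package in a level depends only on
// packages from earlier levels, so each whole level can run in parallel.
// `graph` maps each package path to the paths of its internal dependencies.
// Assumes the graph is acyclic (circular deps would recurse forever).
function dependencyLevels(graph: Map<string, string[]>): string[][] {
  const levelOf = new Map<string, number>();
  function level(pkg: string): number {
    const cached = levelOf.get(pkg);
    if (cached !== undefined) return cached;
    const deps = graph.get(pkg) ?? [];
    // A package's level is one more than its deepest dependency
    const l = deps.length === 0 ? 0 : 1 + Math.max(...deps.map(level));
    levelOf.set(pkg, l);
    return l;
  }
  const levels: string[][] = [];
  for (const pkg of graph.keys()) {
    const l = level(pkg);
    if (!levels[l]) levels[l] = [];
    levels[l].push(pkg);
  }
  return levels;
}

// Each level is safe to run as one parallel batch:
// for (const batch of dependencyLevels(graph)) await updatePackages(batch);
```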
Pattern 3: Dry Run Mode
Test the update on a few packages first:
// scripts/dry-run.ts
const packages = await listPackages();

// Dry run on 2 packages
const testPackages = packages.slice(0, 2);
console.log('Dry run on test packages:');
console.log(testPackages);

const results = await updatePackages(testPackages);

if (results.every(r => r.status === 'success')) {
  console.log('Dry run successful! Proceeding with all packages.');
  await updatePackages(packages);
} else {
  console.error('Dry run failed. Fix issues before proceeding.');
  process.exit(1);
}
Pattern 4: Rollback on Failure
Automatically rollback if any agent fails:
// scripts/update-with-rollback.ts
import { execSync } from 'child_process';
const packages = await listPackages();
// Save current state (only stash if there are uncommitted changes)
const originalBranch = execSync('git branch --show-current').toString().trim();
const hasLocalChanges = execSync('git status --porcelain').toString().trim().length > 0;
if (hasLocalChanges) execSync('git stash');
execSync('git checkout -b parallel-update-temp');

try {
  // Update packages
  const results = await updatePackages(packages);

  // Check results
  const failed = results.filter(r => r.status === 'failed');
  if (failed.length > 0) {
    throw new Error(`${failed.length} packages failed to update`);
  }

  // Verify all tests pass
  execSync('npm run test:all');

  // Success! Merge changes
  execSync(`git checkout ${originalBranch}`);
  execSync('git merge parallel-update-temp');
  execSync('git branch -d parallel-update-temp');
  if (hasLocalChanges) execSync('git stash pop');
  console.log('Update successful and merged!');
} catch (error) {
  // Rollback
  console.error('Update failed. Rolling back...');
  execSync(`git checkout ${originalBranch}`);
  execSync('git branch -D parallel-update-temp');
  if (hasLocalChanges) execSync('git stash pop');
  throw error;
}
Best Practices
1. Start with a Few Packages
Test your task on 2-3 packages before scaling:
# Test on just api-gateway and user-service first
User: Spawn 2 agents to update @company/logger:
- packages/api-gateway
- packages/user-service
[... wait for results ...]
# If successful, scale to all packages
User: Spawn agents for remaining 18 packages
2. Make Tasks Explicit
Don’t rely on LLMs to “figure it out”:
Bad:
"Update logger across all packages"
Good:
"In package.json, change '@company/logger': '^2.0.0' to '^3.0.0'.
In all .ts files, replace logger.log() with logger.info().
Run npm test to verify."
3. Include Package-Specific Context
Each package may have unique needs:
// Spawn agent with package-specific CLAUDE.md
const agent = spawnAgent({
  name: `update-${packageName}`,
  workingDirectory: packagePath,
  task: TASK_TEMPLATE,
  context: {
    files: [
      `${packagePath}/CLAUDE.md`, // Package-specific conventions
      `${packagePath}/package.json`,
    ],
  },
});
4. Set Appropriate Timeouts
const agent = spawnAgent({
// ...
timeout: 600000, // 10 minutes (enough for tests to run)
});
5. Monitor Progress
Track agent status in real-time:
const agents = packages.map(pkg => spawnAgent({...}));

// Poll status every 30 seconds
const interval = setInterval(() => {
  const statuses = agents.map(a => a.getStatus());
  const completed = statuses.filter(s => s === 'completed').length;
  const failed = statuses.filter(s => s === 'failed').length;
  const running = statuses.filter(s => s === 'running').length;
  console.log(`Progress: ${completed} completed, ${running} running, ${failed} failed`);
  if (running === 0) {
    clearInterval(interval);
  }
}, 30000);
6. Collect Logs
Save agent logs for debugging:
import { writeFile } from 'fs/promises';

const results = await Promise.allSettled(agents);

// Save logs
for (const [index, result] of results.entries()) {
  const packageName = packages[index].split('/').pop();
  const logPath = `logs/${packageName}.log`;
  if (result.status === 'fulfilled') {
    await writeFile(logPath, result.value.log);
  } else {
    await writeFile(logPath, result.reason.toString());
  }
}
Common Pitfalls
Pitfall 1: Too Many Agents at Once
Problem: Spawning 100+ agents overwhelms your system
Solution: Batch agents (10-20 at a time)
const BATCH_SIZE = 10;
for (let i = 0; i < packages.length; i += BATCH_SIZE) {
  const batch = packages.slice(i, i + BATCH_SIZE);
  await updatePackages(batch);
}
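The batch loop above waits for the slowest member of each batch before starting the next. A sliding worker pool keeps exactly N agents busy instead, starting a new update as soon as any slot frees. A minimal sketch (`updatePackage` in the usage comment is a hypothetical per-package worker):

```typescript
// Keep at most `limit` updates in flight; start a new one as soon as a slot
// frees up, instead of waiting for an entire batch to finish.
async function runWithLimit<T>(
  items: string[],
  limit: number,
  worker: (item: string) => Promise<T>,
): Promise<T[]> {
  const results: T[] = new Array(items.length);
  let next = 0;
  async function runner() {
    while (next < items.length) {
      const i = next++; // claim the next index (safe: the event loop is single-threaded)
      results[i] = await worker(items[i]);
    }
  }
  // Spawn `limit` runners that each pull work until the queue is drained
  const runners = Array.from({ length: Math.min(limit, items.length) }, runner);
  await Promise.all(runners);
  return results;
}

// Usage (hypothetical per-package worker):
// await runWithLimit(packages, 10, pkg => updatePackage(pkg));
```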
Pitfall 2: Vague Task Descriptions
Problem: “Improve logging” is too vague
Solution: Be explicit about every change
Pitfall 3: Ignoring Package Dependencies
Problem: Updating Package A breaks Package B (which depends on A)
Solution: Use dependency-aware ordering (Pattern 2)
Pitfall 4: No Verification Step
Problem: Agents complete but changes are broken
Solution: Always run tests after agents finish
const results = await updatePackages(packages);
// Verify
execSync('npm run test:all');
execSync('npm run lint:all');
execSync('npm run typecheck:all');
Pitfall 5: Forgetting to Handle Failures
Problem: One failed agent blocks the entire update
Solution: Use Promise.allSettled() instead of Promise.all()
// Bad: One failure rejects everything
const results = await Promise.all(agents);
// Good: Collect all results (success and failure)
const results = await Promise.allSettled(agents);
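To act on the settled results, a small helper can split them into successes and failures while remembering which package each result belongs to (a sketch; `partitionResults` is a hypothetical name):

```typescript
// Split settled results into successes and failures, keeping track of which
// package each result belongs to so failures can be reported by name.
function partitionResults<T>(
  packages: string[],
  results: PromiseSettledResult<T>[],
) {
  const succeeded: { pkg: string; value: T }[] = [];
  const failed: { pkg: string; reason: unknown }[] = [];
  results.forEach((result, i) => {
    if (result.status === 'fulfilled') {
      succeeded.push({ pkg: packages[i], value: result.value });
    } else {
      failed.push({ pkg: packages[i], reason: result.reason });
    }
  });
  return { succeeded, failed };
}

// const { succeeded, failed } = partitionResults(packages, results);
// failed.forEach(f => console.error(`${f.pkg} failed:`, f.reason));
```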
Measuring Success
Key Metrics
1. Time Savings
Sequential: 20 packages x 8 min = 160 min
Parallel: max(agents) = 15 min
Speedup: 160 / 15 = 10.7x
2. Error Rate
Sequential: 4/20 packages had errors (20%)
Parallel: 0/20 packages had errors (0%)
Improvement: 100% reduction
3. Consistency Score
Check that all packages applied changes identically:
# Count logger.info() calls per package
for pkg in packages/*; do
  count=$(rg "logger\.info\(" "$pkg" | wc -l)
  echo "$pkg: $count"
done
# Should see consistent patterns
4. Developer Satisfaction
- Sequential: “Tedious, error-prone, frustrating”
- Parallel: “Fast, reliable, satisfying”
Conclusion
Parallel agents turn monorepo-wide changes from a dreaded chore into a quick, reliable operation:
Benefits:
- 10x faster: 2-3 hours to 15-20 minutes
- Consistent: Same task, same result across all packages
- Isolated errors: One failure doesn’t block others
- Reduced cognitive load: Each agent has focused context
- Better DX: Fire and forget, not babysitting
When to use:
- Identical task across packages
- Independent packages (microservices, libraries)
- Clear success criteria (tests pass, linter passes)
When not to use:
- Tasks requiring coordination between packages
- Vague, exploratory refactoring
- Tightly coupled packages with shared state
Next steps:
- Identify a simple update task (dependency upgrade)
- Test on 2-3 packages manually
- Automate with parallel agents
- Measure time savings and error reduction
- Scale to more complex tasks
The result: Monorepo-wide changes become routine operations instead of multi-hour ordeals, unlocking faster iteration and more confident refactoring.
Related Concepts
- 24/7 Development Strategy – Run parallel agents autonomously during nights and weekends
- Git Worktrees for Parallel Development – Use worktrees for isolated parallel agent execution
- YOLO Mode Configuration – Enable permission-free parallel agent execution
- Sub-Agent Architecture – Specialized agents for different package types in monorepos
- Model Switching Strategy – Use cheaper models for simple package updates to reduce parallel agent costs
- Claude Code Hooks: Quality Gates – Automated verification across all parallel agents
- Hierarchical Context Patterns – Package-specific CLAUDE.md files for agent context
- Integration Testing Patterns – Verify parallel changes don’t break cross-package integration
- Semantic Naming Patterns – Enable agents to discover packages and patterns efficiently
- Agent Swarm Patterns – Core patterns for parallel agent orchestration
- Agent Capabilities: Tools and Eyes – Equip parallel agents with appropriate tools
- Agent-Native Architecture – Design principles for parallel agent systems

