The Problem: Vague Tickets Lead to Poor AI Execution
When working with AI coding agents like Claude Code, ticket quality directly determines execution quality.
The Vague Ticket Problem
Consider this typical ticket:
# PROJ-123: Add user authentication
Description: We need authentication for the app.
What’s wrong with this ticket?
While a human developer can ask clarifying questions and make reasonable assumptions, an AI agent faces ambiguity:
- Which auth method? JWT, OAuth, session-based, Supabase, Auth0, custom?
- What endpoints? Login, logout, register, forgot password, 2FA?
- What requirements? Rate limiting? Password complexity? Session duration?
- What about errors? How to handle failed logins? Account lockout?
- Tests required? Integration tests? Unit tests? Coverage threshold?
- Edge cases? Concurrent sessions? Token refresh? Expired tokens?
The Cost of Ambiguity
When Claude Code executes a vague ticket:
Scenario 1: AI Makes Wrong Assumptions
1. AI implements custom JWT (you wanted Supabase)
2. 2 hours wasted on implementation
3. Code review catches mistake
4. Discard work, start over
5. Total waste: 2 hours + review time
Scenario 2: AI Asks Too Many Questions
1. AI asks: "Which auth method?"
2. You respond: "Supabase"
3. AI asks: "What about rate limiting?"
4. You respond: "10 attempts per minute"
5. AI asks: "Session duration?"
6. You respond: "24 hours"
7. ... 15 questions later ...
8. Finally starts implementing
9. Total time wasted: 30 minutes of back-and-forth
Scenario 3: AI Implements Incomplete Solution
1. AI implements basic login/logout
2. Missing: rate limiting, error states, tests
3. Code review fails quality gates
4. Must re-run with additional requirements
5. Total waste: Iteration time + review cycles
The Human Dilemma
You face two bad options:
Option A: Write Extremely Detailed Tickets
- Time: 20-30 minutes per ticket
- Effort: High cognitive load
- Result: Exhausting, doesn’t scale
Option B: Write Brief Tickets, Accept Poor AI Execution
- Time: 2 minutes per ticket
- Result: Constant intervention, wasted AI time
- Outcome: Defeats purpose of AI assistance
The gap: You need detailed tickets without spending 30 minutes writing them.
The Solution: Claude Refines, Claude Code Executes
Use two different AI tools in sequence:
- Claude (conversational): Refines vague tickets into detailed specifications
- Claude Code (autonomous agent): Executes refined tickets perfectly
The Workflow
Human: 2 minutes
↓
Write brief ticket ("Add user authentication")
↓
Claude: 1 minute
↓
Refine ticket (add details, criteria, edge cases)
↓
Claude Code: 30 minutes
↓
Execute refined ticket (implement, test, PR)
↓
Human: 5 minutes
↓
Review PR
Total human time: 7 minutes (2 min write + 5 min review)
Total AI time: 31 minutes (fully autonomous)
Result: 10x more effective AI execution
Why This Works
Claude (conversational mode) excels at:
- Asking clarifying questions
- Inferring implicit requirements
- Adding technical details
- Identifying edge cases
- Structuring acceptance criteria
- Estimating complexity
Claude Code (autonomous mode) excels at:
- Following detailed specifications
- Implementing features end-to-end
- Running tests and quality gates
- Creating PRs with clear descriptions
- Working autonomously without interruption
The key insight: Use the right tool for each phase.
Implementation
Step 1: Create Your Brief Ticket
In Linear (or any ticket system), create a minimal ticket:
# PROJ-123: Add user authentication
Description: We need authentication for the app.
Time investment: 2 minutes
Step 2: Use Claude to Refine the Ticket
Open Claude (conversational interface) and use this prompt:
I have a ticket that needs refinement for AI execution:
Title: Add user authentication
Description: We need authentication for the app.
Please refine this ticket by:
1. Adding specific implementation details
2. Defining clear acceptance criteria
3. Identifying edge cases and error states
4. Specifying test requirements
5. Listing technical constraints
6. Breaking down into concrete tasks
Context about our stack:
- Backend: Node.js + tRPC
- Frontend: React + TypeScript
- Database: PostgreSQL via Supabase
- Auth: Prefer Supabase Auth (already integrated)
- Testing: Vitest + React Testing Library
Step 3: Claude’s Refined Output
Claude responds with a comprehensive ticket:
# PROJ-123: Implement JWT-based Authentication with Supabase
## Description
Implement user authentication using Supabase Auth with JWT tokens, including login, logout, registration, and session management.
## Technical Approach
- Use the Supabase Auth client library (@supabase/supabase-js)
- Implement JWT token validation middleware
- Store sessions in Supabase (server-side)
- Use React Context for client-side auth state
## Acceptance Criteria
### Core Functionality
- [ ] User can register with email/password
- [ ] User can log in with email/password
- [ ] User can log out (clears session)
- [ ] Protected routes redirect to login when unauthenticated
- [ ] Auth state persists across page refreshes
- [ ] Session expires after 24 hours of inactivity
### Security Requirements
- [ ] Rate limiting: 10 login attempts per 15 minutes per IP
- [ ] Password requirements: min 8 chars, 1 uppercase, 1 number
- [ ] JWT tokens include user ID and role
- [ ] Tokens validated on every protected API call
- [ ] HTTPS enforced for auth endpoints
### Error Handling
- [ ] Display error for invalid credentials
- [ ] Display error for weak passwords
- [ ] Display error for existing email (registration)
- [ ] Display error for rate limit exceeded
- [ ] Display error for expired session
- [ ] Graceful handling of network errors
### User Experience
- [ ] Loading states during auth operations
- [ ] Success messages for registration/login
- [ ] Redirect to original page after login
- [ ] Remember intended destination during auth flow
## Implementation Tasks
1. **Backend: Auth Middleware**
- File: `src/server/middleware/auth.ts`
- Validate JWT tokens from Authorization header
- Attach user object to request context
- Return 401 for invalid/missing tokens
2. **Backend: Protected tRPC Procedures**
- File: `src/server/trpc/procedures/protected.ts`
- Create `protectedProcedure` wrapper
- Use auth middleware for validation
- Type-safe user context in procedures
3. **Backend: Rate Limiting**
- File: `src/server/middleware/rate-limit.ts`
- Implement in-memory rate limiter (or use Redis)
- Track login attempts by IP address
- Return 429 with Retry-After header when exceeded
4. **Frontend: Auth Context**
- File: `src/contexts/AuthContext.tsx`
- Provide user state and auth methods
- Handle session persistence
- Expose login(), logout(), register() methods
5. **Frontend: Protected Route Component**
- File: `src/components/ProtectedRoute.tsx`
- Check auth state before rendering
- Redirect to /login with return URL
- Show loading state during auth check
6. **Frontend: Login Page**
- File: `src/pages/login.tsx`
- Email/password form with validation
- Error message display
- Loading state during submission
- Redirect to return URL after success
7. **Frontend: Registration Page**
- File: `src/pages/register.tsx`
- Email/password form with validation
- Password strength indicator
- Terms of service checkbox
- Auto-login after successful registration
## Test Requirements
### Integration Tests
- [ ] Login with valid credentials succeeds
- [ ] Login with invalid credentials fails with 401
- [ ] Registration with valid data creates user
- [ ] Registration with existing email fails
- [ ] Protected routes reject unauthenticated requests
- [ ] Rate limiting blocks after 10 failed attempts
- [ ] Session expiration works correctly
### Unit Tests
- [ ] Auth middleware validates tokens correctly
- [ ] Auth middleware rejects invalid tokens
- [ ] Rate limiter tracks attempts per IP
- [ ] Rate limiter resets after time window
- [ ] Password validation enforces requirements
### Coverage Requirements
- Minimum coverage: 80%
- Critical paths (auth middleware, rate limiting): 95%
## Edge Cases
1. **Concurrent Login Sessions**
- Allow multiple devices
- Track active sessions per user
- Provide "logout all devices" option
2. **Token Refresh**
- Refresh token before expiration (21 hours into 24-hour session)
- Handle refresh failures gracefully
- Logout user if refresh fails
3. **Network Interruptions**
- Retry failed auth requests (max 3 attempts)
- Show offline indicator
- Queue auth state changes locally
4. **Browser Tab Closure**
- Persist session in localStorage
- Validate session on tab reopen
- Handle stale sessions
## Non-Goals (Out of Scope)
- OAuth providers (Google, GitHub) - future ticket
- Two-factor authentication (2FA) - future ticket
- Magic link login - future ticket
- "Remember me" functionality - future ticket
- Password reset flow - separate ticket
## Success Criteria
- [ ] All acceptance criteria met
- [ ] All tests passing (coverage >=80%)
- [ ] Quality gates pass (linting, type checking)
- [ ] PR created with clear description
- [ ] Documentation updated (API docs, setup guide)
## Estimated Complexity
**Points**: 5 (moderate complexity)
**Time**: 3-4 hours for implementation + tests
Claude’s refinement time: ~1 minute
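To show how actionable the refined tasks are, here is a rough sketch of task 3 (rate limiting). This is an illustrative, framework-agnostic sketch of the ticket's stated policy (10 attempts per 15 minutes per IP), not the code Claude Code would actually produce; the class and method names are invented for the example:

```typescript
// Hypothetical in-memory rate limiter for login attempts (ticket task 3).
// Tracks attempts per IP within a sliding window; a real deployment would
// likely use Redis so limits survive restarts and scale horizontally.
class LoginRateLimiter {
  private attempts = new Map<string, number[]>();

  constructor(
    private maxAttempts = 10,
    private windowMs = 15 * 60 * 1000, // 15 minutes, per the ticket
  ) {}

  /** Record an attempt and report whether this IP is now over the limit. */
  isBlocked(ip: string, now = Date.now()): boolean {
    const recent = (this.attempts.get(ip) ?? []).filter(
      (t) => now - t < this.windowMs,
    );
    recent.push(now);
    this.attempts.set(ip, recent);
    return recent.length > this.maxAttempts;
  }
}

const limiter = new LoginRateLimiter();
for (let i = 0; i < 10; i++) limiter.isBlocked("1.2.3.4"); // first 10 allowed
console.log(limiter.isBlocked("1.2.3.4")); // 11th attempt within window: true
```

A middleware wrapper would call `isBlocked` before the login handler and respond with 429 plus a Retry-After header when it returns true.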
Step 4: Copy Refined Ticket to Linear
Replace the original brief ticket with Claude’s refined version.
Time investment: 30 seconds (copy-paste)
Step 5: Execute with Claude Code
Now run Claude Code with the refined ticket:
claude-code "Implement PROJ-123 from Linear"
Claude Code:
- Reads the refined ticket
- Understands all requirements clearly
- Implements all 7 tasks
- Writes all required tests
- Passes quality gates
- Creates PR
- Updates ticket status
Claude Code execution time: ~30 minutes (autonomous)
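With every requirement pinned down, even small rules need no clarification round-trips. For example, the password criterion ("min 8 chars, 1 uppercase, 1 number") maps directly to code; a hypothetical sketch:

```typescript
// Hypothetical check implementing the ticket's password rule:
// minimum 8 characters, at least 1 uppercase letter, at least 1 number.
function isValidPassword(password: string): boolean {
  return (
    password.length >= 8 &&
    /[A-Z]/.test(password) &&
    /[0-9]/.test(password)
  );
}

console.log(isValidPassword("Passw0rd")); // true
console.log(isValidPassword("short1A")); // false: only 7 characters
console.log(isValidPassword("alllowercase1")); // false: no uppercase letter
```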
Step 6: Review PR
Review the PR Claude Code created:
# PR #45: Implement JWT-based Authentication with Supabase
## Summary
Implements user authentication using Supabase Auth per PROJ-123.
## Changes
- Added auth middleware with JWT validation
- Created protected tRPC procedures
- Implemented rate limiting (10 attempts/15min)
- Added React auth context and protected routes
- Created login and registration pages
- Added comprehensive tests (87% coverage)
## Test Results
- All 24 tests passing
- Coverage: 87% (exceeds 80% requirement)
- Linting: 0 errors
- Type checking: 0 errors
## Acceptance Criteria Status
- All 6 core functionality criteria met
- All 5 security requirements met
- All 6 error handling cases covered
- All 4 UX requirements implemented
Ready for review.
Review time: 5 minutes (spot-check key files, verify tests)
Real-World Examples
Example 1: API Feature
Before refinement:
Title: Add search endpoint
Description: Users need to search products.
After Claude refinement:
Title: Implement Full-Text Product Search with Filtering
Description:
Add tRPC endpoint for product search with full-text search, category filtering,
price range, and pagination.
Acceptance Criteria:
- [ ] Full-text search on name, description (PostgreSQL FTS)
- [ ] Filter by category (multiple categories OR logic)
- [ ] Filter by price range (min/max)
- [ ] Pagination (limit 20 per page)
- [ ] Sort options: relevance, price_asc, price_desc, newest
- [ ] Response time <200ms for 10K products
- [ ] Debounced search input (300ms delay)
Implementation:
- Backend: tRPC procedure with Prisma query
- Database: Add GIN index on product name/description
- Frontend: Search input with filters sidebar
- Tests: Integration tests for all filter combinations
Edge Cases:
- Empty search query (return all products)
- No results found (friendly message)
- Invalid price range (validate min < max)
- XSS prevention (sanitize search input)
Result: Claude Code implements perfectly in 25 minutes.
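The edge cases in refined tickets like this one translate directly into guard code. A hypothetical sketch of the parameter validation the ticket calls for (the types and names are invented for illustration):

```typescript
// Hypothetical validation for the search ticket's edge cases:
// an empty query is allowed (returns all products), and a price
// range must satisfy min < max.
interface SearchParams {
  query: string;
  minPrice?: number;
  maxPrice?: number;
  page: number;
}

function validateSearchParams(params: SearchParams): string[] {
  const errors: string[] = [];
  if (
    params.minPrice !== undefined &&
    params.maxPrice !== undefined &&
    params.minPrice >= params.maxPrice
  ) {
    errors.push("minPrice must be less than maxPrice");
  }
  if (params.page < 1) errors.push("page must be >= 1");
  return errors;
}

console.log(validateSearchParams({ query: "", page: 1 })); // [] (empty query is valid)
console.log(validateSearchParams({ query: "shoes", minPrice: 50, maxPrice: 10, page: 1 }));
// ["minPrice must be less than maxPrice"]
```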
Example 2: Bug Fix
Before refinement:
Title: Fix email validation
Description: Email validation is broken.
After Claude refinement:
Title: Fix Email Validation to Handle Plus Addressing and International Domains
Description:
Current email validation rejects valid emails that use plus addressing (e.g. user+tag@example.com)
and international domains (.co.uk, .app, etc.).
Root Cause:
Regex pattern too restrictive: /^[a-z0-9]+@[a-z]+\.[a-z]{3}$/
Acceptance Criteria:
- [ ] Accept plus addressing: user+tag@example.com
- [ ] Accept subdomains: user@mail.example.com
- [ ] Accept international TLDs: .co.uk, .app, .io, etc.
- [ ] Accept numbers in domain: user@domain123.com
- [ ] Reject invalid formats: user@, @domain.com, user@domain
- [ ] Reject spaces: "user @domain.com"
Implementation:
- File: src/utils/validation.ts
- Use RFC 5322 compliant regex or validator library
- Add comprehensive unit tests (20+ test cases)
- Update existing tests that assumed old regex
Test Cases:
- Valid: user@example.com, user+tag@example.com, user@mail.example.co.uk
- Invalid: user@, @domain.com, user@domain, user @domain.com
Edge Cases:
- Very long emails (>254 chars) - reject per RFC 5321
- Unicode characters - accept per RFC 6531
- Case sensitivity - normalize to lowercase
Result: Claude Code fixes in 10 minutes with comprehensive tests.
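A fix along the lines the refined ticket describes might look like the sketch below. In practice a maintained validator library is often a better choice than a hand-rolled regex; this pragmatic pattern simply illustrates the acceptance criteria:

```typescript
// Hypothetical replacement for the overly strict regex. This pragmatic
// pattern accepts plus addressing, subdomains, and multi-part TLDs,
// and rejects missing parts, spaces, and over-length addresses (RFC 5321).
function isValidEmail(email: string): boolean {
  if (email.length > 254) return false;
  return /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(email);
}

console.log(isValidEmail("user+tag@example.com")); // true (plus addressing)
console.log(isValidEmail("user@mail.example.co.uk")); // true (subdomain + intl TLD)
console.log(isValidEmail("user@domain")); // false (no TLD)
console.log(isValidEmail("user @domain.com")); // false (contains a space)
```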
Example 3: Refactoring
Before refinement:
Title: Refactor auth code
Description: Auth code is messy.
After Claude refinement:
Title: Extract Auth Logic into Reusable Service Layer
Description:
Auth logic is duplicated across 5 tRPC procedures. Extract into service layer
following Onion Architecture pattern.
Current State:
- Auth logic scattered in: login, register, logout, refresh, verify procedures
- Duplication: token validation repeated 3 times
- Hard to test: auth logic tightly coupled to tRPC context
Target State:
- Single AuthService class with pure functions
- tRPC procedures become thin wrappers
- Testable in isolation
- Consistent error handling
Acceptance Criteria:
- [ ] Create src/services/auth/AuthService.ts
- [ ] Extract methods: validateToken, createSession, destroySession, refreshToken
- [ ] Update 5 tRPC procedures to use AuthService
- [ ] Zero duplication (verify with AST analysis)
- [ ] All existing tests still pass
- [ ] Add unit tests for AuthService (90% coverage)
- [ ] No behavior changes (purely structural)
Implementation Steps:
1. Create AuthService interface
2. Implement AuthService class
3. Add unit tests for AuthService
4. Refactor login procedure (verify tests pass)
5. Refactor register procedure (verify tests pass)
6. Refactor logout procedure (verify tests pass)
7. Refactor refresh procedure (verify tests pass)
8. Refactor verify procedure (verify tests pass)
9. Remove duplicated code
10. Final test run (all tests pass)
Safety:
- Incremental refactoring (one procedure at a time)
- Run tests after each change
- No functional changes
- Preserve existing API contracts
Result: Claude Code refactors safely in 40 minutes, all tests pass.
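The target state from this ticket can be sketched as follows. The names mirror the ticket (AuthService, validateToken, createSession, destroySession), but the bodies are placeholder logic rather than a real Supabase or JWT integration:

```typescript
// Hypothetical service-layer extraction (ticket target state): auth logic
// lives in one class, and procedures become thin wrappers around it.
interface Session {
  userId: string;
  expiresAt: number;
}

class AuthService {
  private sessions = new Map<string, Session>();

  createSession(userId: string, ttlMs = 24 * 60 * 60 * 1000): string {
    const token = `tok_${userId}_${Date.now()}`; // placeholder, not a real JWT
    this.sessions.set(token, { userId, expiresAt: Date.now() + ttlMs });
    return token;
  }

  validateToken(token: string, now = Date.now()): Session | null {
    const session = this.sessions.get(token);
    return session && session.expiresAt > now ? session : null;
  }

  destroySession(token: string): void {
    this.sessions.delete(token);
  }
}

// A "thin wrapper" in the ticket's sense: no auth logic in the procedure itself.
const auth = new AuthService();
const token = auth.createSession("user-1");
console.log(auth.validateToken(token)?.userId); // "user-1"
auth.destroySession(token);
console.log(auth.validateToken(token)); // null
```

Because the class holds no tRPC context, each method can be unit-tested in isolation, which is what makes the ticket's 90% coverage target realistic.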
Refinement Prompt Template
Use this template for consistent refinement:
I have a ticket that needs refinement for AI execution:
---
Title: [TICKET_TITLE]
Description: [TICKET_DESCRIPTION]
---
Please refine this ticket by adding:
1. **Specific Implementation Details**
- Which files to modify/create
- Technical approach
- Libraries or patterns to use
2. **Clear Acceptance Criteria**
- Functional requirements (what it should do)
- Non-functional requirements (performance, security)
- User experience requirements
3. **Edge Cases and Error Handling**
- Boundary conditions
- Error states
- Failure modes
4. **Test Requirements**
- Integration tests needed
- Unit tests needed
- Coverage threshold
- Specific test cases
5. **Technical Constraints**
- Must use X library/pattern
- Must maintain backward compatibility
- Performance targets
- Security requirements
6. **Non-Goals**
- What NOT to implement
- Future features (out of scope)
Context about our stack:
- [YOUR_TECH_STACK]
- [YOUR_PATTERNS]
- [YOUR_CONSTRAINTS]
Format the output as a complete ticket ready for AI execution.
Best Practices
1. Provide Stack Context
Claude refines better with context:
Context about our stack:
- Backend: Node.js + tRPC + Prisma
- Frontend: React + TypeScript + TailwindCSS
- Database: PostgreSQL via Supabase
- Testing: Vitest + React Testing Library + Playwright
- Auth: Supabase Auth
- Patterns: Onion Architecture, Repository Pattern
- Code style: Functional programming, immutability preferred
2. Include Project Conventions
Our conventions:
- File naming: kebab-case.ts
- Test files: *.test.ts (colocated with source)
- Max file length: 200 lines
- Error handling: Result<T, E> pattern (no exceptions)
- API responses: { data: T } | { error: E }
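Stating conventions like these pays off because Claude can follow them mechanically. For example, the Result<T, E> and response-shape conventions above might correspond to something like this sketch (the exact shape is whatever your codebase defines):

```typescript
// Hypothetical Result<T, E> shape matching the stated conventions:
// no thrown exceptions; responses are { data: T } | { error: E }.
type Result<T, E> = { data: T } | { error: E };

function parsePositiveInt(input: string): Result<number, string> {
  const n = Number(input);
  if (!Number.isInteger(n) || n <= 0) {
    return { error: `not a positive integer: ${input}` };
  }
  return { data: n };
}

console.log(parsePositiveInt("42")); // { data: 42 }
console.log(parsePositiveInt("-3")); // { error: "not a positive integer: -3" }
```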
3. Specify Quality Standards
Quality requirements:
- Test coverage: >=80% (critical paths >=95%)
- TypeScript: strict mode, no 'any' types
- Linting: 0 warnings
- Performance: API responses <200ms
- Accessibility: WCAG 2.1 AA compliance
4. Reference Related Work
Related context:
- Similar feature: PROJ-89 (rate limiting for uploads)
- Existing patterns: src/auth/rate-limiter.ts
- Follow same approach as PROJ-89
5. Indicate Complexity Level
Help Claude estimate scope:
Expected complexity: [SIMPLE | MODERATE | COMPLEX]
SIMPLE: <1 hour, 1-2 files, straightforward logic
MODERATE: 1-4 hours, 3-7 files, some complexity
COMPLEX: >4 hours, 8+ files, architectural changes
6. Add Safety Guardrails
For risky changes:
Safety requirements:
- Must maintain backward compatibility
- Must not modify database schema
- Must preserve existing API contracts
- Must be deployable independently
- Must include rollback plan
Integration with Other Patterns
Combine with Linear MCP Integration
Refine tickets before autonomous execution:
1. Write brief ticket in Linear
2. Use Claude to refine ticket
3. Update Linear ticket with refined version
4. Label as "Ready-For-Code"
5. Claude Code (via MCP) picks up ticket autonomously
6. Executes perfectly (no clarification needed)
Combine with Engineering Manager Mindset
Focus on ticket quality, delegate everything else:
Your role:
1. Write 10 brief tickets (20 minutes total)
2. Use Claude to refine them (10 minutes total)
3. Review 10 PRs (50 minutes total)
Total time: 80 minutes
Work completed: 10 features implemented
Leverage: 10x (AI did 800 minutes of implementation)
Combine with Trust But Verify Protocol
Claude refines ticket, Claude Code executes, Claude verifies:
1. Claude refines: "Add auth" -> detailed spec
2. Claude Code implements: follows spec exactly
3. Claude verifies: checks implementation matches spec
4. Human reviews: spot-checks verification results
Common Pitfalls
Pitfall 1: Not Providing Stack Context
Problem: Claude refines ticket without knowing your tech stack
Result: Suggests wrong libraries, incompatible patterns
Example:
Claude suggests: "Use Express.js"
Your stack: tRPC (no Express)
Solution: Always include stack context in refinement prompt
Pitfall 2: Accepting First Refinement Without Review
Problem: Claude’s refinement might miss project-specific nuances
Result: Claude Code implements something slightly wrong
Solution: Quick review of refined ticket (30 seconds), adjust if needed
Pitfall 3: Over-Refinement
Problem: Asking Claude to add too much detail
Result: 5-page ticket specification, overwhelming
Example:
Refined ticket: 847 lines
Claude Code: Confused by excessive detail
Solution: Aim for 50-150 lines (enough detail, not overwhelming)
Pitfall 4: No Non-Goals Section
Problem: Claude Code implements features you didn’t want
Result: Scope creep, wasted time
Example:
Ticket: "Add auth"
Claude Code implements: Auth + OAuth + 2FA + magic links
You wanted: Just basic auth
Solution: Always include “Non-Goals” section
Pitfall 5: Missing Edge Cases
Problem: Refined ticket doesn’t mention edge cases
Result: Claude Code misses them, bugs in production
Solution: Explicitly ask Claude to list edge cases during refinement
Measuring Success
Key Metrics
1. AI Execution Success Rate
Before refinement: 60% success (6/10 tickets executed correctly)
After refinement: 95% success (19/20 tickets executed correctly)
Improvement: 35 percentage points (a 58% relative increase)
2. Clarification Questions Reduced
Before refinement: 12 questions per ticket (avg)
After refinement: 1 question per ticket (avg)
Reduction: 92%
3. Implementation Time
Before refinement: 2.5 hours avg (with interruptions)
After refinement: 0.8 hours avg (autonomous)
Improvement: 68% faster
4. Human Time Investment
Writing detailed ticket manually: 25 minutes
Brief ticket + Claude refinement: 3 minutes
Savings: 88% less human time
5. Re-work Rate
Before refinement: 35% of implementations need rework
After refinement: 8% of implementations need rework
Improvement: 77% reduction
Success Criteria
Target metrics:
- AI execution success rate: >=90%
- Clarification questions: <=2 per ticket
- Human time per ticket: <=5 minutes (draft + review)
- Re-work rate: <=10%
The Meta-Level Insight
Meta-ticket refinement is itself a meta-pattern:
Traditional workflow:
Human writes ticket -> AI executes
(Brittle, requires perfect human ticket-writing)
Meta-workflow:
Human writes brief -> AI refines -> AI executes
(Robust, leverages AI's strengths at each phase)
Why this is “meta”:
- You’re using AI to prepare work for AI
- Claude refines tickets for Claude Code
- Human acts as orchestrator, not implementer
The pattern generalizes:
Human brief -> AI refine -> AI execute -> AI verify -> Human review
      ^                                                      |
      +---------------- Iterate based on review --------------+
The result: 10x more effective AI-assisted development.
Conclusion
Meta-ticket refinement solves the vague ticket problem by introducing a lightweight refinement step between human intent and AI execution.
The transformation:
Before:
Human writes ticket: 25 minutes (exhausting)
|
Claude Code executes: 2.5 hours (with interruptions)
|
Re-work required: 1 hour (35% of tickets)
Total: 4 hours per ticket, high failure rate
After:
Human writes brief: 2 minutes
|
Claude refines: 1 minute (autonomous)
|
Claude Code executes: 48 minutes (autonomous)
|
Human reviews: 5 minutes
Total: 7 minutes human time, 95% success rate
Key enablers:
- Use Claude (conversational) for refinement
- Use Claude Code (autonomous) for execution
- Provide stack context in refinement prompt
- Include acceptance criteria and edge cases
- Specify non-goals to prevent scope creep
- Target 50-150 lines for refined tickets
The result: Brief tickets transform into detailed specifications in seconds, enabling confident autonomous AI execution.