Learn how to generate Kanban-optimized issues for AI-assisted development workflows.
Kanban-optimized issues are work items designed for continuous flow rather than batched delivery. In AI-assisted development, this means:
- Small, focused tasks: <1 hour to complete
- Independently deployable: Each task can ship without waiting for others
- Continuous feedback: Deploy and validate multiple times per day
- Reduced work-in-progress: Finish tasks quickly, don't accumulate partially-done work
AI fundamentally changes development velocity. What used to take hours now takes minutes:
Traditional workflow:
Planning (30 min) → Coding (6 hours) → Testing (1 hour) → Review (30 min) = 8 hours

AI-assisted workflow:
Planning (5 min) → AI prompting (10 min) → Review/refine (20 min) → Testing (15 min) → Deploy (10 min) = 60 min
| Aspect | Manual Coding | AI-Assisted |
|---|---|---|
| Code generation | Hours of typing | Minutes of prompting |
| Iteration cycle | Hours per change | 5-15 minutes per iteration |
| Testing | Manual, sequential | Automated, parallel |
| Deployment | 1-2x per day | Multiple times per hour |
| Feedback loop | End of day | Immediate |
Typical task durations with AI assistance:
- JWT token generation: 30-45 minutes (not 8 hours)
- API endpoint: 30-40 minutes (not 6 hours)
- Database migration: 15-25 minutes (not 3 hours)
- Email template: 15-20 minutes (not 4 hours)
- Password reset form: 20-30 minutes (not 4 hours)
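As a concreteness check on the estimates above, the 30-45 minute JWT task really is small: a minimal HS256 token generator needs only the standard library. This is an illustrative sketch, not a production design — the claim names and secret handling are invented for the example.

```python
import base64
import hashlib
import hmac
import json
import time

def _b64url(data: bytes) -> str:
    # JWTs use URL-safe base64 with the padding stripped
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def generate_jwt(payload: dict, secret: str, ttl_seconds: int = 3600) -> str:
    """Build a signed HS256 JWT: header.payload.signature."""
    header = {"alg": "HS256", "typ": "JWT"}
    claims = dict(payload, exp=int(time.time()) + ttl_seconds)
    signing_input = (
        f"{_b64url(json.dumps(header).encode())}."
        f"{_b64url(json.dumps(claims).encode())}"
    )
    signature = hmac.new(secret.encode(), signing_input.encode(), hashlib.sha256).digest()
    return f"{signing_input}.{_b64url(signature)}"

token = generate_jwt({"sub": "user-123"}, secret="dev-only-secret")
print(token.count("."))  # → 2 (a JWT has three dot-separated segments)
```

A task this size leaves most of the hour for tests, review, and deployment rather than typing.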
Don't spend time estimating 4, 8, or 13 story points. Instead:
- Split work until each task is <1 hour
- Measure actual cycle time from start to production
- Optimize for throughput, not utilization
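"Measure actual cycle time" can be as simple as diffing two timestamps per task; the sketch below assumes hypothetical start/deploy times and is only meant to show the arithmetic.

```python
from datetime import datetime, timedelta

def cycle_time(started_at: datetime, deployed_at: datetime) -> timedelta:
    """Cycle time = start of work until live in production."""
    return deployed_at - started_at

# Throughput view: finished tasks per day, not percent utilization
tasks = [
    (datetime(2024, 5, 1, 9, 0), datetime(2024, 5, 1, 9, 40)),   # 40 min
    (datetime(2024, 5, 1, 10, 0), datetime(2024, 5, 1, 10, 55)), # 55 min
    (datetime(2024, 5, 1, 11, 15), datetime(2024, 5, 1, 12, 5)), # 50 min
]
times = [cycle_time(start, deploy) for start, deploy in tasks]
avg_minutes = sum(t.total_seconds() for t in times) / len(times) / 60
print(round(avg_minutes))  # → 48
```

If the average creeps past an hour, that is the signal to split tasks further, not to estimate harder.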
❌ "This epic is 40 story points"
✅ "This epic has 12 tasks, each <1 hour"

Every task should be deployable without waiting for other tasks:
✅ "Add JWT token generation function"
→ Can deploy with feature flag
→ Doesn't break existing code
→ Tests pass independently
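"Can deploy with feature flag" means the new code path ships dark until you flip a switch. A minimal gate might look like the following — the flag store (environment variables) and all names are hypothetical:

```python
import os

def is_enabled(flag: str) -> bool:
    # Hypothetical flag store: env vars, so every deploy ships dark by default
    return os.environ.get(f"FLAG_{flag.upper()}", "off") == "on"

def issue_token(user_id: str) -> str:
    if is_enabled("jwt_tokens"):
        return f"jwt-for-{user_id}"        # new code path, deployed but dormant
    return f"legacy-session-{user_id}"     # existing behavior unchanged

print(issue_token("u1"))  # → legacy-session-u1 (flag off by default)
os.environ["FLAG_JWT_TOKENS"] = "on"
print(issue_token("u1"))  # → jwt-for-u1
```

Because the default is "off", merging the task cannot break existing users, which is what makes it independently deployable.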
❌ "Implement complete authentication system"
→ Can't deploy until entire system is done
→ High risk, long feedback cycle

Break even small tasks into three phases:
```markdown
# Task: "Add JWT token validation middleware" (<1 hour total)

## RED (15 minutes)
- Write failing test for middleware
- Define expected behavior
- Deploy test suite (CI runs, test fails as expected)

## GREEN (25 minutes)
- AI generates minimum implementation
- Make test pass
- Deploy behind feature flag

## REFACTOR (20 minutes)
- Clean up generated code
- Extract key management
- Add documentation
- Deploy refactored version
```

Each task should trigger and pass through your CI/CD pipeline:
```
Task: "Add password reset endpoint"
    ↓
Commit → CI Pipeline:
  - Lint check ✓
  - Unit tests ✓
  - Integration tests ✓
  - Security scan ✓
  - Deploy to dev ✓
    ↓
Production (with feature flag)
```

❌ "Implement user authentication system"
- Too many components
- Can't deploy incrementally
- High risk

Split it instead:

✅ "Add JWT token generation function"
✅ "Add token validation middleware"
✅ "Create /auth/login endpoint"
✅ "Add password hashing utility"
✅ "Write integration tests for auth flow"
✅ "Add rate limiting to login"
✅ "Create session management middleware"

Ask your AI assistant to do the splitting for you:

```
ai "Split this feature into <1 hour tasks for AI-assisted development:

Feature: User password reset via email

Requirements:
- Each task <1 hour with AI assistance
- Independently deployable
- Include RED/GREEN/REFACTOR phases
- Specify CI/CD requirements"
```

See 02-choosing-tools.md to decide between:
- Beads: Git-native, CLI-first, offline-capable (ideal for AI agents)
- GitHub/JIRA/Linear: Web UI, team collaboration, enterprise features
See 03-ai-prompts.md for templates to:
- Break down epics into <1 hour tasks
- Generate RED/GREEN/REFACTOR acceptance criteria
- Create dependency chains
- Track progress automatically
See 04-workflow-examples.md for:
- Epic breakdown examples
- Relationship management patterns
- Progress tracking strategies
- Dependency validation
See 05-ci-integration.md for:
- Traceability patterns
- File change validation
- Coverage enforcement
- Automated reporting
```
# 1. Create an epic
ai "Create epic: User Dashboard with Analytics
Break into <1 hour tasks, each independently deployable"

# 2. AI generates 8 tasks:
- Create analytics_events table (25 min)
- Add /api/analytics/events endpoint (35 min)
- Create dashboard data aggregation service (45 min)
- Add GET /api/dashboard/stats endpoint (30 min)
- Create DashboardCard React component (40 min)
- Add chart visualization component (50 min)
- Write integration tests for analytics flow (45 min)
- Add real-time updates with WebSocket (55 min)

# 3. Work on first task
git checkout -b analytics-events-table
ai "Implement analytics_events table migration following RED/GREEN/REFACTOR"
git add . && git commit -m "feat: add analytics events table"
git push && create-pr

# 4. CI runs automatically
- Tests pass ✓
- Deploy to dev ✓
- Merge to main ✓

# 5. Repeat for next task (all done in <8 hours total)
```

❌ "This will take 4-8 hours"
→ AI can do it in 30-45 minutes

❌ "Implement authentication" (4-6 hours even with AI)
→ Split into 8 tasks of <1 hour each

❌ "Let's finish all 8 tasks then deploy"
→ Deploy each task as soon as it's done

❌ ai "Write complete auth system with tests"
→ Break into RED (tests) → GREEN (impl) → REFACTOR (clean)

The payoff:

- Faster feedback: Find problems in minutes, not days
- Reduced risk: Smaller changes = easier debugging
- Better flow: No tasks stuck "in progress" for hours
- Higher quality: More frequent testing and validation
- Team visibility: Clear progress throughout the day
- AI advantage: Leverages AI speed, not constrained by typing
Next steps:

- Choose your tools: 02-choosing-tools.md
- Learn AI prompts: 03-ai-prompts.md
- See examples: 04-workflow-examples.md
- Integrate CI/CD: 05-ci-integration.md
Quick reference:

```
# Task sizing rule
Target: <1 hour
Maximum: 2 hours (if longer, split it)
Optimal: 15-45 minutes

# Task checklist
□ <1 hour with AI assistance?
□ Independently deployable?
□ Includes tests?
□ Passes through CI/CD?
□ RED/GREEN/REFACTOR phases defined?

# Daily goal
8-10 tasks per developer per day
(not 1-2 large tasks)
```

Remember: AI changes everything. Don't plan with manual coding timeframes. Embrace <1 hour tasks for maximum velocity and feedback.