---
title: Sprint Description Format (Sprint Planning PRD Style)
type: sprint_plan
version: '1.0'
created_by: AI Development Assistant
last_updated: <DATE>
---
# Sprint Description Format Template

## Sprint Metadata
```yaml
sprint_id: "<MVP_NUMBER>_S<SPRINT_NUMBER>"
sprint_name: "<DESCRIPTIVE_SPRINT_NAME>"
mvp_parent: "<MVP_ID_AND_NAME>"
sprint_number: <SPRINT_NUMBER>
status: "planned|in_progress|completed|blocked"
priority: "critical|high|medium|low"

timeline:
  planned_start: "<YYYY-MM-DD>"
  planned_end: "<YYYY-MM-DD>"
  duration_estimate: "<X_hours|X_days>"
  actual_start: "<YYYY-MM-DD>"
  actual_end: "<YYYY-MM-DD>"

stakeholders:
  sprint_lead: "<NAME>"
  developers: ["<NAME1>", "<NAME2>"]
  reviewers: ["<NAME1>", "<NAME2>"]
  ai_assistant: "claude|cursor|other"

context:
  development_approach: "ai_assisted_coding"
  coding_standards: "ruff_black_mypy_pytest"
  commit_convention: "conventional_commits"
```
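The `commit_convention: "conventional_commits"` setting can be enforced mechanically. A minimal sketch of a subject-line check (the helper name and the type list are illustrative, not part of the template):

```python
import re

# Conventional Commits subject shape: type(scope)!: description
# The type list below is the common core set; extend per team convention.
_COMMIT_RE = re.compile(
    r"^(feat|fix|docs|style|refactor|perf|test|chore)"  # commit type
    r"(\([a-z0-9_-]+\))?"                               # optional scope
    r"!?"                                               # optional breaking-change marker
    r": .+"                                             # description after ": "
)

def is_conventional(subject: str) -> bool:
    """Return True if a commit subject line follows Conventional Commits."""
    return bool(_COMMIT_RE.match(subject))
```

For example, `is_conventional("feat(api): add sprint endpoint")` is true, while `is_conventional("update stuff")` is false.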
## Sprint Goal & Value Proposition

### Sprint Objective

**Primary Goal:**

### Problem Context

- **What We're Building On:**
- **What We're Solving:**
- **Why Now:**

### Success Criteria

- **Demo-able Outcome:**
- **Technical Milestone:**
- **Quality Standard:**
## Technical Architecture & Implementation

### Architecture Focus

- **Core Components:**
- **Integration Points:**
- **Data Flow Changes:**
### Technical Approach

```yaml
implementation_strategy: "<APPROACH_DESCRIPTION>"
design_patterns: ["<PATTERN1>", "<PATTERN2>"]
technology_choices:
  primary_languages: ["Python", "JavaScript"]
  frameworks: ["FastAPI", "Gradio", "etc"]
  testing_frameworks: ["pytest", "etc"]
  development_tools: ["ruff", "mypy", "black"]
```
### Key Technical Decisions

1. **Decision:**
   - Rationale:
   - Alternatives:
   - Impact:
2. **Decision:**
   - Rationale:
   - Alternatives:
   - Impact:
## Task Breakdown & Implementation Plan

### Task Organization

```yaml
total_tasks: <NUMBER>
task_methodology: "tdd_with_ai_assistance"
parallel_workstreams: <NUMBER>
task_size_target: "<SMALL|MEDIUM|LARGE> (X_hours each)"
```
### Task List

| Task ID | Task Name | Priority | Estimate | Dependencies | Owner | Type |
|---|---|---|---|---|---|---|
| <TASK_ID> | <TASK_NAME> | HIGH/MED/LOW | <HOURS> | [<DEP1>, <DEP2>] | <OWNER> | <TYPE> |
| <TASK_ID> | <TASK_NAME> | HIGH/MED/LOW | <HOURS> | [<DEP1>, <DEP2>] | <OWNER> | <TYPE> |
| <TASK_ID> | <TASK_NAME> | HIGH/MED/LOW | <HOURS> | [<DEP1>, <DEP2>] | <OWNER> | <TYPE> |
### Task Details

#### Task <TASK_ID>: <TASK_NAME>

```yaml
status: "todo|in_progress|done|blocked"
priority: "critical|high|medium|low"
estimated_hours: <NUMBER>
dependencies: ["<TASK_ID1>", "<TASK_ID2>"]
type: "foundation|feature|integration|testing|documentation"
```

**Objective:**

**Implementation Approach:**

- Files to Modify: [`<FILE1>`, `<FILE2>`, `<FILE3>`]
- Key Classes/Functions:
- Testing Strategy:

**Acceptance Criteria:**

- Code quality gates passed (`ruff check`, `mypy`, `pytest`)

**AI Assistant Guidance:**

For Claude/Cursor IDE:

- Focus areas: <SPECIFIC_CODING_FOCUS>
- Key patterns to follow: <DESIGN_PATTERNS_OR_CONVENTIONS>
- Testing approach: <TDD_STRATEGY_FOR_THIS_TASK>
- Integration points: <HOW_TO_CONNECT_WITH_EXISTING_CODE>
#### Task <TASK_ID>: <TASK_NAME>

*(Repeat the structure above for each task in the sprint.)*
## Testing & Quality Strategy

### Testing Approach for This Sprint

```yaml
unit_testing:
  framework: "pytest"
  coverage_target: ">=80%"
  mock_strategy: "<MOCKING_APPROACH_FOR_SPRINT>"
  tdd_approach: "write_tests_first_with_ai_assistance"

integration_testing:
  scope: "<INTEGRATION_AREAS_TO_TEST>"
  critical_flows: ["<FLOW1>", "<FLOW2>"]
  automation_level: "manual|automated|hybrid"

manual_testing:
  ui_testing: "<UI_TESTING_APPROACH>"
  edge_cases: ["<EDGE_CASE1>", "<EDGE_CASE2>"]
  browser_compatibility: "<BROWSER_TESTING_SCOPE>"
```
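As a sketch of the `tdd_approach` above: the `test_*` functions are written first (and fail), then the minimal implementation is added to make them pass; `pytest` discovers the test functions automatically. The `parse_estimate` helper is hypothetical, not part of the template:

```python
# test_estimates.py -- run with `pytest test_estimates.py`

def parse_estimate(raw: str) -> float:
    """Convert "<N>h" / "<N>d" estimate strings to hours (8-hour day assumed)."""
    value, unit = float(raw[:-1]), raw[-1]
    if unit == "h":
        return value
    if unit == "d":
        return value * 8
    raise ValueError(f"unknown estimate unit: {unit!r}")

# TDD step 1: these tests existed (and failed) before the function body above.
def test_hours_pass_through():
    assert parse_estimate("4h") == 4.0

def test_days_convert_to_hours():
    assert parse_estimate("1.5d") == 12.0
```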
### Quality Gates for Sprint

- **Code Quality:** All tasks pass `ruff` linting and `mypy` type checking
- **Test Coverage:** Sprint additions maintain >=80% overall coverage
- **Functionality:** All acceptance criteria verified through testing
- **Integration:** New components integrate seamlessly with existing system
- **Performance:** No regression in performance metrics
- **Documentation:** Code changes are properly documented
### Definition of Done (Sprint Level)
- All sprint tasks completed and merged
- Sprint objective demonstrated and validated
- Quality gates passed for all new code
- Documentation updated (README, API docs, code comments)
- CI/CD pipeline green with all checks passing
- Sprint retrospective completed and lessons captured
## Dependencies & Assumptions

### Prerequisites (What We Need Before Starting)
- Previous Sprint Deliverables:
- External Dependencies:
- Environment Setup:
- Access/Permissions:
### Dependencies Within Sprint

```yaml
critical_path_tasks: [<TASK_ID1>, <TASK_ID2>, <TASK_ID3>]
parallel_workstreams:
  stream_1: [<TASK_ID1>, <TASK_ID2>]
  stream_2: [<TASK_ID3>, <TASK_ID4>]
blocking_dependencies:
  - task: <TASK_ID>
    blocks: [<TASK_ID1>, <TASK_ID2>]
    reason: "<WHY_THIS_BLOCKS_OTHERS>"
```
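A consistent execution order for the blocking structure above can be derived with a topological sort; a sketch using the standard library (task IDs illustrative):

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

def execution_order(blocking: dict[str, list[str]]) -> list[str]:
    """Order task IDs so every task runs after any task that blocks it.

    `blocking` maps a task to the tasks it blocks, matching the
    blocking_dependencies entries above.
    """
    # TopologicalSorter expects node -> predecessors, so invert the map.
    preds: dict[str, set[str]] = {}
    for task, blocked in blocking.items():
        preds.setdefault(task, set())
        for b in blocked:
            preds.setdefault(b, set()).add(task)
    return list(TopologicalSorter(preds).static_order())
```

For instance, `execution_order({"T1": ["T2", "T3"], "T2": ["T4"]})` yields an order in which `T1` precedes `T2` and `T3`, and `T2` precedes `T4`.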
### Assumptions
- Technical Assumptions:
- Resource Assumptions:
- External Service Assumptions:
- Timeline Assumptions:
## Risk Assessment & Mitigation

### Risk Matrix

| Risk | Probability | Impact | Mitigation Strategy | Contingency Plan |
|---|---|---|---|---|
| <RISK_DESCRIPTION> | LOW/MED/HIGH | LOW/MED/HIGH | <MITIGATION_STRATEGY> | <CONTINGENCY_PLAN> |
| <RISK_DESCRIPTION> | LOW/MED/HIGH | LOW/MED/HIGH | <MITIGATION_STRATEGY> | <CONTINGENCY_PLAN> |
### Technical Risks

- **Integration Complexity:** <RISK> → <MITIGATION>
- **API Dependencies:** <RISK> → <MITIGATION>
- **Performance Issues:** <RISK> → <MITIGATION>

### Timeline Risks

- **Task Estimation:** <RISK> → <MITIGATION>
- **Blocking Dependencies:** <RISK> → <MITIGATION>
- **External Delays:** <RISK> → <MITIGATION>
## Progress Tracking & Metrics

### Daily Progress Indicators

```yaml
completion_metrics:
  - tasks_completed: "<NUMBER>/<TOTAL>"
  - hours_spent: "<ACTUAL>/<ESTIMATED>"
  - code_lines_added: "<NUMBER>"
  - tests_written: "<NUMBER>"

quality_metrics:
  - test_coverage: "<PERCENTAGE>"
  - linting_errors: "<NUMBER>"
  - type_coverage: "<PERCENTAGE>"

velocity_metrics:
  - tasks_per_day: "<AVERAGE>"
  - blockers_encountered: "<NUMBER>"
  - ai_assist_efficiency: "<QUALITATIVE_ASSESSMENT>"
```
### Sprint Burndown Tracking
- Day 1 Target:
- Day 2 Target:
- Day 3 Target:
- Final Target:
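Daily actuals can be compared against these targets with a small helper (numbers illustrative):

```python
def burndown(total_tasks: int, completed_per_day: list[int]) -> list[int]:
    """Tasks remaining at the end of each day, given per-day completions."""
    remaining, done = [], 0
    for completed in completed_per_day:
        done += completed
        remaining.append(total_tasks - done)
    return remaining
```

For a 10-task sprint with 3, 2, and 4 tasks finished on days 1-3, `burndown(10, [3, 2, 4])` returns `[7, 5, 1]`.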
### Success Metrics
- Primary KPI:
- Quality KPI:
- Velocity KPI:
## Deliverables & Handoffs

### Sprint Deliverables

```yaml
code_deliverables:
  - component: "<COMPONENT_NAME>"
    files: ["<FILE1>", "<FILE2>"]
    status: "created|modified|refactored"

documentation_deliverables:
  - document: "<DOC_NAME>"
    type: "api|user|technical"
    status: "created|updated"

testing_deliverables:
  - test_suite: "<TEST_SUITE_NAME>"
    coverage: "<PERCENTAGE>"
    type: "unit|integration|e2e"
```
### Integration Points
- With Previous Sprints:
- With Parallel Work:
- For Next Sprint:
### Handoff Documentation

- **Technical Handoff:** `docs/sprints/mvp<N>_s<N>_technical_summary.md`
- **User Guide Updates:** `docs/user/mvp<N>_s<N>_features.md`
- **API Changes:** `docs/api/mvp<N>_s<N>_api_updates.md`
- **Known Issues:** `docs/sprints/mvp<N>_s<N>_known_issues.md`
## Resources & References

### Development Resources

- **Code Standards:** `.cursor/rules/` directory
- **Testing Patterns:** `tests/` directory examples
- **Architecture Docs:** `docs/architecture/`
- **API Documentation:** `docs/api/`

### AI Assistant Resources

- **Cursor Rules:** `.cursor/rules/python_development.mdc`
- **Code Patterns:** `docs/patterns/` directory
- **Testing Helpers:** `tests/helpers/` directory
### External References
- Framework Documentation:
- Best Practices:
- Technical Specifications:
## Template Usage Guidelines

### When to Use This Format
- Sprint planning sessions
- Task breakdown and estimation
- Risk assessment and mitigation planning
- Progress tracking and daily standups
- Sprint retrospectives and lessons learned
### Customization Instructions

- Replace all `<PLACEHOLDER>` values with sprint-specific information
- Adjust task count and structure based on sprint complexity
- Add sprint-specific sections (e.g., UI design, data migration)
- Update risk assessment based on sprint-specific challenges
- Ensure dependencies and assumptions are accurately captured
### Integration with Development Workflow
- Use this document for sprint planning meetings
- Reference task details during daily development
- Track progress against defined metrics
- Use for sprint retrospectives and continuous improvement
- Link to individual task files in `docs/tasks/mvp<N>/`
### AI Assistant Integration
- Include AI assistant guidance in each task
- Use for prompt engineering and context setting
- Reference for code quality and testing standards
- Integrate with Cursor IDE workflows

---

**Document Status:** Template  
**Next Review:** Before sprint planning  
**Approval Required:** Sprint Lead + MVP Owner