# Shared documentation for all AI coding assistants
This file is referenced by multiple AI tool configurations. Changes here automatically apply to all tools that support file references.
## Mandatory Session Workflow

Every AI session MUST start and end with these commands:

```bash
# START (required first step)
uv run ai-start-task "Your task description"

# DURING (log progress)
uv run ai-log "Your progress message"
uv run ai-update-plan "Task item you completed"

# END (required last step)
uv run ai-finish-task --summary="What you accomplished"
```

This is NOT optional! Every AI agent must follow this workflow.
## Documentation-First Approach

BEFORE implementing any task, you MUST research existing solutions!

```bash
# STEP 1: Check for MCP tools
# Look for mcp__docs__, mcp__context7__, etc.

# STEP 2: Fetch official documentation
WebFetch: url="https://docs.framework.com/api" prompt="What tools exist for [task]?"

# STEP 3: Search for recent tutorials
WebSearch: query="framework [specific-feature] tutorial 2025"

# STEP 4: Verify no built-in solution exists
# Only write custom code after confirming no existing solution
```

Critical Rules:
- ❌ NEVER start coding without researching documentation first
- ❌ NEVER reinvent functionality that already exists in frameworks
- ❌ NEVER reverse-engineer when official docs are available
- ✅ ALWAYS use MCP tools to fetch latest documentation
- ✅ ALWAYS check for built-in framework tools before custom code
- ✅ ALWAYS leverage official integrations and SDKs
See @AI_DOCS/documentation-first-approach.md for complete guidelines and examples.
## Command Reference

| Tool | Purpose | When to Use |
|---|---|---|
| `ai-start-task` | Start new task session | Always first - before any work |
| `ai-log` | Log execution progress | During work - track milestones |
| `ai-update-plan` | Update task plan checkboxes | After completing each step |
| `ai-context-summary` | Show current context | When unsure what's happening |
| `ai-check-conflicts` | Check for task conflicts | Before starting parallel work |
| `ai-add-decision` | Document architectural decision | After making design choice |
| `ai-add-convention` | Add code convention | When establishing new pattern |
| `ai-finish-task` | End session and update context | Always last - when work complete |
## ai-start-task

Start a new AI task session with full context loading and intelligent auto-population.

```bash
uv run ai-start-task "Add user authentication feature"
```

What it does:

- Creates unique session ID (timestamp-based)
- Generates three session files:
  - `PLAN-*.md` - Task plan with checkboxes
  - `SUMMARY-*.md` - Session summary template
  - `EXECUTION-*.md` - Execution log
- Auto-populates PLAN sections:
  - Extracts and expands Objective from task description
  - Generates intelligent Context summary with dependencies
  - Analyzes task type and adds relevant guidance
  - Identifies potential risks and considerations
- Displays last session summary
- Shows active tasks (checks for conflicts)
- Lists recent architectural decisions
- Shows key conventions to follow
- Adds task to `ACTIVE_TASKS.md`
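Since session IDs are timestamp-based, they and the session file names can be pictured with a short sketch (a hypothetical illustration with invented helper names, not the tool's actual code):

```python
from datetime import datetime

def make_session_id() -> str:
    """A sortable, timestamp-based ID like 20250102123045."""
    return datetime.now().strftime("%Y%m%d%H%M%S")

def session_filename(session_id: str, kind: str, task: str) -> str:
    """Compose names like 20250102123045-PLAN-Add_user_authentication_feature.md."""
    return f"{session_id}-{kind}-{task.replace(' ', '_')}.md"
```

Because the ID sorts chronologically as a plain string, listing the sessions directory shows them in creation order.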
### New: Intelligent Auto-Population

The `ai-start-task` command now uses NLP-based summarization to intelligently populate your PLAN file:
- Objective Section: Automatically extracts intent from task description and creates a clear, detailed objective statement
- Context Section: Generates context based on:
  - Existing decisions in RECENT_DECISIONS.md
  - Coding conventions from CONVENTIONS.md
  - Task type (feature, bugfix, refactor, docs)
  - Inferred dependencies from task description
- Risk Analysis: Identifies potential risks like database changes, API breaking changes, security implications
Before (manual placeholder):

```markdown
## Objective
[Describe what needs to be accomplished]
```

After (auto-populated):

```markdown
## Objective
Implement user authentication feature with JWT tokens.
This will ensure security best practices are followed.
```

Options:
```bash
# Specify task type
uv run ai-start-task "Fix bug in validator" --type=bugfix
uv run ai-start-task "Improve performance" --type=enhancement
uv run ai-start-task "Update docs" --type=documentation

# Default type is "feature"
```

Output example:
```text
╔══════════════════════════════════════════════════════════╗
║ 🚀 AI Task Session Started                               ║
╚══════════════════════════════════════════════════════════╝

📋 Task: Add user authentication feature
🆔 Session ID: 20250102123045

📁 Session files created:
   - .ai-context/sessions/20250102123045-PLAN-Add_user_authentication_feature.md
   - .ai-context/sessions/20250102123045-SUMMARY-Add_user_authentication_feature.md
   - .ai-context/sessions/20250102123045-EXECUTION-Add_user_authentication_feature.md

📊 Last Session Summary
────────────────────────
- Previous session: Implemented email validation
- Status: ✅ Complete
- Coverage: 100%

⚠️ Active Tasks
────────────────────────
No conflicting tasks in progress

📌 Recent Decisions (Last 5)
────────────────────────
1. Use Pydantic for data validation
2. Follow Google docstring format
3. Minimum 80% test coverage

🎯 Key Conventions
────────────────────────
- Always write tests first (TDD)
- Use type hints on all functions
- Run `make check` before committing
```
## ai-log

Log progress messages to the session execution log.
```bash
uv run ai-log "Created test_auth.py with 8 test cases"
uv run ai-log "All tests passing" --level=success
uv run ai-log "Found mypy error in line 45" --level=warning
uv run ai-log "Build failed" --level=error
```

Options:

- `--level=info` (default) - Regular progress update
- `--level=success` - Success milestone
- `--level=warning` - Warning or concern
- `--level=error` - Error encountered
- `--session-id=<id>` - Log to specific session (defaults to latest)
What it does:

- Appends timestamped entry to `EXECUTION-*.md`
- Adds emoji based on level (ℹ️ 📝 ✅ ⚠️ ❌)
- Creates audit trail of work done
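The level-to-emoji formatting can be pictured with a minimal sketch (hypothetical; `LEVEL_EMOJI` and `format_log_entry` are names invented here, and the real implementation may differ):

```python
from datetime import datetime

# Assumed mapping of --level values to the emoji listed above.
LEVEL_EMOJI = {"info": "ℹ️", "success": "✅", "warning": "⚠️", "error": "❌"}

def format_log_entry(message: str, level: str = "info") -> str:
    """Render one timestamped EXECUTION-*.md line."""
    emoji = LEVEL_EMOJI.get(level, "📝")  # fallback emoji for unknown levels
    stamp = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
    return f"- {stamp} {emoji} {message}"
```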
Usage during work:

```bash
# After writing tests
uv run ai-log "Created test_validators.py with 6 tests"

# After implementation
uv run ai-log "Implemented validate_email() function"

# After quality checks
uv run ai-log "All quality checks pass" --level=success

# If error found
uv run ai-log "Coverage at 78%, need 2 more tests" --level=warning
```

## ai-update-plan

Manage task plans with two modes: checkbox toggling and full plan editing.
Checkbox Mode (default) - Toggle items as complete:

```bash
uv run ai-update-plan "Write test file(s)"
```

Edit Mode - Modify plan structure:

```bash
uv run ai-update-plan --add "New task item"
uv run ai-update-plan --remove "Task to delete"
uv run ai-update-plan --rename "Old text" --to "New text"
```

Mark items complete:
```bash
# Toggle checkbox for matching item
uv run ai-update-plan "Write test file(s)"

# Show current progress
uv run ai-update-plan --show
# Output: ✅ Progress: 3/7 completed

# Specific session
uv run ai-update-plan "Implement functionality" --session-id=20250102123045
```

Fuzzy matching with helpful suggestions:
```bash
# Typo in item name
uv run ai-update-plan "Write test fils"
# Output:
# ❌ Error: Item not found: "Write test fils"
#
# 📝 Did you mean one of these?
#    1. Write test file(s) (90% match)
#    2. Run tests (should fail initially) (65% match)
```

What it does:

- Toggles checkbox between `[ ]` and `[x]`
- Shows progress percentage
- Uses fuzzy matching if exact match not found
- Suggests similar items when item not found
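Checkbox toggling and fuzzy suggestions could be implemented along these lines (a sketch using stdlib `re` and `difflib`; the actual tool may work differently, and the helper names are invented):

```python
import difflib
import re

def toggle_checkbox(plan: str, item: str) -> str:
    """Flip a matching '- [ ] item' line to '- [x] item' (and back)."""
    pattern = re.compile(r"^- \[( |x)\] " + re.escape(item) + r"$", re.MULTILINE)

    def flip(match: re.Match) -> str:
        mark = "x" if match.group(1) == " " else " "
        return f"- [{mark}] {item}"

    return pattern.sub(flip, plan, count=1)

def suggest_items(query: str, items: list[str]) -> list[str]:
    """Return the closest plan items when no exact match exists."""
    return difflib.get_close_matches(query, items, n=2, cutoff=0.6)
```

`difflib.get_close_matches` returns candidates ordered by similarity, which is roughly the behavior shown in the "Did you mean" output above.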
Add new items:

```bash
# Add to last phase
uv run ai-update-plan --add "Run performance benchmarks"

# Add to specific phase
uv run ai-update-plan --add "Validate API responses" --phase "Phase 2"

# Add new phase
uv run ai-update-plan --add-phase "Phase 5: Performance Testing"
```

Remove items:
```bash
# Remove by exact match
uv run ai-update-plan --remove "Update documentation if needed"

# Remove with fuzzy matching
uv run ai-update-plan --remove "Update docs"
# Suggests: Did you mean "Update documentation if needed"?
```

Rename items:
```bash
# Rename for clarity
uv run ai-update-plan --rename "Run tests" --to "Run tests with pytest"

# Rename with fuzzy matching
uv run ai-update-plan --rename "Run test" --to "Run all tests with coverage"
# Finds "Run tests (should fail initially)" even with typo
```

What edit mode does:
- Validates items are not empty or whitespace-only
- Validates target phases exist (suggests available phases if not)
- Prevents duplicate items (case-insensitive)
- Uses fuzzy matching for `--remove` and `--rename`
- Maintains plan structure and formatting
Empty item validation:

```bash
uv run ai-update-plan --add " "
# Output:
# ❌ Error: Item text cannot be empty or whitespace-only.
#    Please provide meaningful item text.
```

Phase existence validation:
```bash
uv run ai-update-plan --add "New task" --phase "Phase 10"
# Output:
# ❌ Error: Phase 'Phase 10' does not exist in plan.
#
# Available phases:
#   • ### Phase 1: Research & Design
#   • ### Phase 2: Write Tests (TDD)
#   • ### Phase 3: Implementation
#   • ### Phase 4: Quality Checks
#
# Use --add-phase to create it, or choose an existing phase.
```

Duplicate detection:
```bash
uv run ai-update-plan --add "Write test file(s)"
# Output:
# ❌ Error: Item "Write test file(s)" already exists in plan.
```

Typical checkbox workflow:
```bash
# After writing tests
uv run ai-update-plan "Write test file(s)"
# ✅ Progress: 1/7 completed

# After running tests
uv run ai-update-plan "Run tests (should fail initially)"
# ✅ Progress: 2/7 completed

# Check what's left
uv run ai-update-plan --show
```

Customizing your plan:
```bash
# Add task-specific steps
uv run ai-update-plan --add "Set up database migrations" --phase "Phase 1"
uv run ai-update-plan --add "Test error handling" --phase "Phase 2"

# Remove irrelevant generic items
uv run ai-update-plan --remove "Review code changes"

# Rename generic items to be specific
uv run ai-update-plan --rename "Implement functionality" --to "Implement user authentication with JWT"

# Add a new phase if needed
uv run ai-update-plan --add-phase "Phase 5: Integration Testing"
uv run ai-update-plan --add "Test with external API" --phase "Phase 5"
```

Handling typos with fuzzy matching:
```bash
# Typo in checkbox mode
uv run ai-update-plan "Implmeent functinality"
# Suggests: "Implement functionality" (85% match)

# Typo in remove mode
uv run ai-update-plan --remove "Implment functinality"
# Asks: Did you mean "Implement functionality"?
```

Every task plan includes these checkboxes:
### Phase 1: Research & Design
- [ ] Review related code and patterns
- [ ] Identify affected components
- [ ] Design approach and architecture

### Phase 2: Write Tests (TDD)
- [ ] Identify test scenarios and edge cases
- [ ] Write test file(s)
- [ ] Run tests to confirm they fail

### Phase 3: Implementation
- [ ] Implement core functionality
- [ ] Run tests to confirm they pass
- [ ] Verify 80%+ coverage
- [ ] Handle edge cases and error conditions

### Phase 4: Quality Checks
- [ ] Run `make format`
- [ ] Run `make lint`
- [ ] Run `make test`
- [ ] Fix any issues found
- [ ] Run `make check` - all pass

### Phase 5: Documentation
- [ ] Update docstrings
- [ ] Add type hints to all functions
- [ ] Update README if user-facing change
- [ ] Add inline comments for complex logic

Customize these using `--add`, `--remove`, and `--rename` for your specific task!
## ai-context-summary

Display current AI context and important information.

```bash
# Quick summary
uv run ai-context-summary

# Detailed summary
uv run ai-context-summary --detailed
```

What it shows:
Quick mode:
- Last session summary (brief)
- Active tasks count
- Recent decisions count
- Conventions count
Detailed mode:
- Full last session summary
- All active tasks with timestamps
- All recent decisions (up to 10)
- All code conventions
- Session files location
When to use:
- Unsure what was done previously
- Need to check active tasks
- Want to see recent decisions
- Starting work after break
## ai-check-conflicts

Check whether a proposed task conflicts with active tasks.

```bash
uv run ai-check-conflicts "Add email validation"
```

What it does:

- Reads `ACTIVE_TASKS.md`
- Uses fuzzy matching (70% similarity threshold)
- Warns if similar tasks exist
- Prevents duplicate work
Output example:

```text
🔍 Checking for conflicts with: "Add email validation"

⚠️ Potential Conflicts Found:
────────────────────────
- "Add user validation" (Started: 2025-01-02 10:30)
  Similarity: 75%

Recommendation: Check if these tasks overlap before proceeding.
```
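A 70%-similarity check like this can be sketched with stdlib `difflib` (the real tool may use a different similarity measure, so the exact percentages it reports may differ; `check_conflicts` is an invented name):

```python
import difflib

def check_conflicts(proposed: str, active_tasks: list[str],
                    threshold: float = 0.70) -> list[tuple[str, int]]:
    """Return (task, similarity%) pairs at or above the threshold."""
    conflicts = []
    for task in active_tasks:
        # Case-insensitive character-level similarity in [0, 1].
        ratio = difflib.SequenceMatcher(None, proposed.lower(), task.lower()).ratio()
        if ratio >= threshold:
            conflicts.append((task, round(ratio * 100)))
    return conflicts
```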
## ai-add-decision

Document an architectural or design decision.

```bash
uv run ai-add-decision
```

Interactive prompts:

- Decision title
- Context/problem
- Decision made
- Rationale
- Consequences

What it does:

- Adds entry to `RECENT_DECISIONS.md`
- Timestamps decision
- Shows decision to future AI agents
Example:

```markdown
## 2025-01-02: Use Pydantic for Data Validation

**Context**: Need robust validation for API inputs
**Decision**: Use Pydantic v2 for all data validation
**Rationale**:
- Type-safe validation
- Great error messages
- Widely adopted
**Consequences**:
- Add pydantic dependency
- All data models extend BaseModel
- Validation errors are consistent
```

When to use:
- Chose between architectural approaches
- Made library/framework decision
- Established pattern for future use
- Changed existing approach significantly
## ai-add-convention

Add or update code conventions.

```bash
uv run ai-add-convention
```

Interactive prompts:

- Convention category (existing or new)
- Convention description

What it does:

- Adds to `CONVENTIONS.md`
- Organizes by category
- Shows conventions to future AI agents
Example:

```markdown
## Error Handling
- Always use custom exception classes
- Never use bare `except:` clauses
- Log errors before re-raising

## Naming Conventions
- Use snake_case for functions and variables
- Use PascalCase for classes
- Prefix private methods with underscore
```

When to use:
- Established new coding pattern
- Created reusable approach
- Want consistency across codebase
- Made style decision
## ai-finish-task

Finalize the session and update all context files.

```bash
uv run ai-finish-task --summary="Implemented email validation with 100% test coverage"
```

What it does:

- Updates `LAST_SESSION_SUMMARY.md` with:
  - Task description
  - What was accomplished
  - Files changed
  - Coverage/quality metrics
  - Status
- Removes task from `ACTIVE_TASKS.md`
- Archives old session files (keeps last 10)
- Prepares context for next AI agent
Options:

```bash
# Required: summary of work done
uv run ai-finish-task --summary="Fixed validation bug, added 3 tests"

# Specific session
uv run ai-finish-task --summary="Done" --session-id=20250102123045
```

Output example:
```text
╔══════════════════════════════════════════════════════════╗
║ ✅ Task Session Complete                                 ║
╚══════════════════════════════════════════════════════════╝

📋 Task: Add user authentication feature
🆔 Session ID: 20250102123045
✅ Status: Complete

📝 Summary Updated:
   .ai-context/LAST_SESSION_SUMMARY.md

🗂️ Old Sessions Archived:
   - Kept last 10 sessions
   - Removed 3 older sessions

🎯 Active Tasks: 0
```
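The "keeps last 10" archiving step works because timestamp session IDs sort chronologically as strings. A minimal sketch (hypothetical helper name, not the tool's actual code):

```python
def sessions_to_archive(session_ids: list[str], keep: int = 10) -> list[str]:
    """Return the session IDs that should be archived.

    IDs like '20250102123045' sort chronologically when sorted as plain
    strings, so everything past the newest `keep` entries is old.
    """
    newest_first = sorted(session_ids, reverse=True)
    return newest_first[keep:]
```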
## Complete Workflow Example

```bash
# ═══════════════════════════════════════════════════════
# STEP 1: START SESSION (Required)
# ═══════════════════════════════════════════════════════
uv run ai-start-task "Add phone number validation"
# Output shows:
# - Last session summary
# - No conflicts
# - Recent decisions: "Use Pydantic for validation"
# - Convention: "Write tests first"

# ═══════════════════════════════════════════════════════
# STEP 1.5: CUSTOMIZE PLAN (Optional but Recommended)
# ═══════════════════════════════════════════════════════
# Add task-specific items
uv run ai-update-plan --add "Research phone number formats (US, international)" --phase "Phase 1"
uv run ai-update-plan --add "Test invalid formats (too short, too long, letters)" --phase "Phase 2"

# Remove irrelevant generic items
uv run ai-update-plan --remove "Review code changes"

# Rename generic items to be specific
uv run ai-update-plan --rename "Implement functionality" --to "Implement validate_phone() with regex"

# Check customized plan
uv run ai-update-plan --show
# Output: ✅ Progress: 0/9 completed (added 2, removed 1)

# ═══════════════════════════════════════════════════════
# STEP 2: WORK (Following TDD)
# ═══════════════════════════════════════════════════════
# Research phase
uv run ai-log "Researching phone number validation patterns"
uv run ai-update-plan "Research phone number formats (US, international)"

# Write tests first
uv run ai-log "Starting TDD workflow"
# ... create tests/test_validators.py ...
uv run ai-log "Created test_validators.py with 6 parametrized tests"
uv run ai-update-plan "Write test file(s)"
uv run ai-update-plan "Test invalid formats (too short, too long, letters)"

# Run tests (should fail)
make test
uv run ai-log "Tests fail as expected - function doesn't exist"
uv run ai-update-plan "Run tests to confirm they fail"

# Implement function
# ... create src/python_modern_template/validators.py ...
uv run ai-log "Implemented validate_phone() function"
uv run ai-update-plan "Implement validate_phone() with regex"

# Run tests (should pass)
make test
uv run ai-log "All 6 tests now pass" --level=success
uv run ai-update-plan "Run tests to confirm they pass"

# Run quality checks
make check
uv run ai-log "make check passes - 100% coverage" --level=success
uv run ai-update-plan "Run make check - all pass"

# Check progress
uv run ai-update-plan --show
# Output: ✅ Progress: 8/9 completed

# Update docs (if needed)
uv run ai-update-plan "Update README if user-facing change"

# ═══════════════════════════════════════════════════════
# STEP 3: FINISH SESSION (Required)
# ═══════════════════════════════════════════════════════
uv run ai-finish-task --summary="Added phone validation with 6 tests, 100% coverage, all quality checks pass"
# Updates:
# - LAST_SESSION_SUMMARY.md (for next AI)
# - ACTIVE_TASKS.md (removes task)
# - Archives old sessions
```

Key rules for every session:

- ALWAYS run `ai-start-task` before ANY work
- ALWAYS research documentation before implementing (see @AI_DOCS/documentation-first-approach.md)
- ALWAYS check for MCP tools and use WebFetch/WebSearch before coding
- ALWAYS run `ai-finish-task` when complete
- NEVER skip `ai-log` for important milestones
- ALWAYS check `ai-context-summary` if unsure what to do
- NEVER start a task without checking for conflicts first
- NEVER reinvent functionality that already exists in frameworks
## Context Files

All context files are in `.ai-context/`:

- `REQUIRED_READING.md` - Master checklist for all AI agents
- `LAST_SESSION_SUMMARY.md` - Most recent session summary
- `ACTIVE_TASKS.md` - Tasks currently in progress
- `RECENT_DECISIONS.md` - Architectural decisions made
- `CONVENTIONS.md` - Code patterns and standards
- `sessions/YYYYMMDDHHMMSS-PLAN-*.md` - Task plan
- `sessions/YYYYMMDDHHMMSS-SUMMARY-*.md` - Task summary
- `sessions/YYYYMMDDHHMMSS-EXECUTION-*.md` - Execution log
- Context Continuity: Next AI agent (even different tool) knows what you did
- No Duplicate Work: Prevents starting tasks already in progress
- Decision Tracking: Important architectural choices are documented
- Quality Assurance: Ensures all workflow steps are followed
- Team Coordination: Multiple developers can see what AI has done
## Multi-Agent Handoff

When an AI agent finishes and another starts:

Agent 1 (finishing):

```bash
uv run ai-finish-task --summary="Implemented feature X, added 10 tests, 95% coverage"
```

Agent 2 (starting):

```bash
uv run ai-start-task "Add feature Y"
```

What Agent 2 sees:
```text
📊 Last Session Summary
────────────────────────
Agent: Claude Code
Task: Implemented feature X
What was done:
- Added 10 tests
- 95% coverage
- All quality checks pass
Files changed:
- src/module.py
- tests/test_module.py
Status: ✅ Complete
```
This ensures a clean handoff between different AI tools!
## Troubleshooting

Command logged to the wrong session:

```bash
# List available sessions
ls .ai-context/sessions/

# Specify session ID manually
uv run ai-log "Message" --session-id=20250102123045
```

No active session:

```bash
# Start a new session first
uv run ai-start-task "Your task"
```

Plan item not found:

```bash
# Show plan to see exact wording
uv run ai-update-plan --show

# Use exact text from plan
uv run ai-update-plan "Exact item text from plan"

# Or rely on fuzzy matching (handles typos)
uv run ai-update-plan "approximate text"
# Tool will suggest: "Did you mean: Exact item text from plan?"
```

Empty item rejected:

```bash
# Error when trying to add empty or whitespace-only items
uv run ai-update-plan --add " "

# Fix: Provide meaningful text
uv run ai-update-plan --add "Implement error handling"
```

Phase does not exist:

```bash
# Error when targeting non-existent phase
uv run ai-update-plan --add "New task" --phase "Phase 10"

# Fix: Use --show to see available phases
uv run ai-update-plan --show

# Then use existing phase or create new one
uv run ai-update-plan --add "New task" --phase "Phase 2"
# OR
uv run ai-update-plan --add-phase "Phase 6: New Phase Name"
uv run ai-update-plan --add "New task" --phase "Phase 6"
```

Duplicate item rejected:

```bash
# Error when adding duplicate items
uv run ai-update-plan --add "Write test file(s)"
# Error: Item "Write test file(s)" already exists in plan.

# Fix: Check plan for existing items
uv run ai-update-plan --show

# Or add a more specific item
uv run ai-update-plan --add "Write integration test files"
```

Missing --to with --rename:

```bash
# Error when using --rename without --to
uv run ai-update-plan --rename "Old item"

# Fix: Always provide --to with --rename
uv run ai-update-plan --rename "Old item" --to "New item text"
```

Remember: Session management is mandatory. Start with `ai-start-task`, finish with `ai-finish-task`, and log progress throughout.