diff --git a/.kilocode/workflows/speckit.fix.md b/.kilocode/workflows/speckit.fix.md
new file mode 100644
index 0000000..1348772
--- /dev/null
+++ b/.kilocode/workflows/speckit.fix.md
@@ -0,0 +1,199 @@
+---
+
+description: Fix failing tests and implementation issues based on test reports
+
+---
+
+## User Input
+
+```text
+$ARGUMENTS
+```
+
+You **MUST** consider the user input before proceeding (if not empty).
+
+## Goal
+
+Analyze test failure reports, identify root causes, and fix implementation issues while preserving semantic protocol compliance.
+
+## Operating Constraints
+
+1. **USE CODER MODE**: Always switch to `coder` mode for code fixes.
+2. **SEMANTIC PROTOCOL**: Never remove semantic annotations ([DEF], @TAGS). Only update code logic.
+3. **TEST DATA**: If tests use @TEST_DATA fixtures, preserve them when fixing.
+4. **NO DELETION**: Never delete existing tests or semantic annotations.
+5. **REPORT FIRST**: Always write a fix report before making changes.
+
+## Execution Steps
+
+### 1. Load Test Report
+
+**Required**: Test report file path (e.g., `specs//tests/reports/2026-02-19-report.md`)
+
+**Parse the report for**:
+- Failed test cases
+- Error messages
+- Stack traces
+- Expected vs actual behavior
+- Affected modules/files
+
+### 2. Analyze Root Causes
+
+For each failed test:
+
+1. **Read the test file** to understand what it's testing
+2. **Read the implementation file** to find the bug
+3. **Check semantic protocol compliance**:
+   - Does the implementation have correct [DEF] anchors?
+   - Are @TAGS (@PRE, @POST, @UX_STATE, etc.) present?
+   - Does the code match the TIER requirements?
+4. **Identify the fix**:
+   - Logic error in implementation
+   - Missing error handling
+   - Incorrect API usage
+   - State management issue
+
+### 3. Write Fix Report
+
+Create a structured fix report:
+
+````markdown
+# Fix Report: [FEATURE]
+
+**Date**: [YYYY-MM-DD]
+**Report**: [Test Report Path]
+**Fixer**: Coder Agent
+
+## Summary
+
+- Total Failed Tests: [X]
+- Total Fixed: [X]
+- Total Skipped: [X]
+
+## Failed Tests Analysis
+
+### Test: [Test Name]
+
+**File**: `path/to/test.py`
+**Error**: [Error message]
+
+**Root Cause**: [Explanation of why test failed]
+
+**Fix Required**: [Description of fix]
+
+**Status**: [Pending/In Progress/Completed]
+
+## Fixes Applied
+
+### Fix 1: [Description]
+
+**Affected File**: `path/to/file.py`
+**Test Affected**: `[Test Name]`
+
+**Changes**:
+```diff
+<<<<<<< SEARCH
+[Original Code]
+=======
+[Fixed Code]
+>>>>>>> REPLACE
+```
+
+**Verification**: [How to verify fix works]
+
+**Semantic Integrity**: [Confirmed annotations preserved]
+
+## Next Steps
+
+- [ ] Run tests to verify fix: `cd backend && .venv/bin/python3 -m pytest`
+- [ ] Check for related failing tests
+- [ ] Update test documentation if needed
+````
+
+### 4. Apply Fixes (in Coder Mode)
+
+Switch to `coder` mode and apply fixes:
+
+1. **Read the implementation file** to get exact content
+2. **Apply the fix** using apply_diff
+3. **Preserve all semantic annotations**:
+   - Keep [DEF:...] and [/DEF:...] anchors
+   - Keep all @TAGS (@PURPOSE, @LAYER, @TIER, @RELATION, @PRE, @POST, @UX_STATE, @UX_FEEDBACK, @UX_RECOVERY)
+4. **Only update code logic** to fix the bug
+5. **Run tests** to verify the fix
+
+### 5. Verification
+
+After applying fixes:
+
+1. **Run tests**:
+   ```bash
+   cd backend && .venv/bin/python3 -m pytest -v
+   ```
+   or
+   ```bash
+   cd frontend && npm run test
+   ```
+
+2. **Check test results**:
+   - Failed tests should now pass
+   - No new tests should fail
+   - Coverage should not decrease
+
+3. **Update fix report** with results:
+   - Mark fixes as completed
+   - Add verification steps
+   - Note any remaining issues
+
+## Output
+
+Generate final fix report:
+
+````markdown
+# Fix Report: [FEATURE] - COMPLETED
+
+**Date**: [YYYY-MM-DD]
+**Report**: [Test Report Path]
+**Fixer**: Coder Agent
+
+## Summary
+
+- Total Failed Tests: [X]
+- Total Fixed: [X] ✅
+- Total Skipped: [X]
+
+## Fixes Applied
+
+### Fix 1: [Description] ✅
+
+**Affected File**: `path/to/file.py`
+**Test Affected**: `[Test Name]`
+
+**Changes**: [Summary of changes]
+
+**Verification**: All tests pass ✅
+
+**Semantic Integrity**: Preserved ✅
+
+## Test Results
+
+```
+[Full test output showing all passing tests]
+```
+
+## Recommendations
+
+- [ ] Monitor for similar issues
+- [ ] Update documentation if needed
+- [ ] Consider adding more tests for edge cases
+
+## Related Files
+
+- Test Report: [path]
+- Implementation: [path]
+- Test File: [path]
+````
+
+## Context for Fixing
+
+$ARGUMENTS
\ No newline at end of file
diff --git a/.kilocodemodes b/.kilocodemodes
index d6ce9b9..5feb743 100644
--- a/.kilocodemodes
+++ b/.kilocodemodes
@@ -26,6 +26,35 @@ customModes:
       6. DOCUMENTATION: Create test reports in `specs//tests/reports/YYYY-MM-DD-report.md`.
       7. COVERAGE: Aim for maximum coverage but prioritize CRITICAL and STANDARD tier modules.
       8. RUN TESTS: Execute tests using `cd backend && .venv/bin/python3 -m pytest` or `cd frontend && npm run test`.
+  - slug: coder
+    name: Coder
+    description: Implementation Specialist - Semantic Protocol Compliant
+    roleDefinition: |-
+      You are Kilo Code, acting as an Implementation Specialist. Your primary goal is to write code that strictly follows the Semantic Protocol defined in `semantic_protocol.md`.
+      Your responsibilities include:
+      - SEMANTIC ANNOTATIONS: Add mandatory [DEF]...[/DEF] anchors and @TAGS to all code entities.
+      - CONTRACT COMPLIANCE: Implement @PRE, @POST, @UX_STATE, @UX_FEEDBACK, @UX_RECOVERY tags correctly.
+      - TIER ADHERENCE: Follow tier requirements (CRITICAL: full contract, STANDARD: basic contract, TRIVIAL: minimal).
+      - CODE QUALITY: Follow best practices, keep each module within 300 lines, and use proper error handling.
+      - INTEGRATION: Work with Tester Agent reports to fix failing tests while preserving semantic integrity.
+    whenToUse: Use this mode when you need to implement features, write code, or fix issues based on test reports.
+    groups:
+      - read
+      - edit
+      - command
+      - mcp
+    customInstructions: |
+      1. SEMANTIC PROTOCOL: ALWAYS use semantic_protocol.md as your single source of truth.
+      2. ANCHOR FORMAT: Use #[DEF:filename:Type] at the start and #[/DEF:filename] at the end.
+      3. TAGS: Add @PURPOSE, @LAYER, @TIER, @RELATION, @PRE, @POST, @UX_STATE, @UX_FEEDBACK, @UX_RECOVERY.
+      4. TIER COMPLIANCE:
+         - CRITICAL: Full contract + all UX tags + strict logging
+         - STANDARD: Basic contract + UX tags where applicable
+         - TRIVIAL: Only anchors + @PURPOSE
+      5. CODE SIZE: Keep modules under 300 lines. Refactor if exceeding.
+      6. ERROR HANDLING: Use if/raise or guards, never assert.
+      7. TEST FIXES: When fixing failing tests, preserve semantic annotations. Only update code logic.
+      8. RUN TESTS: After fixes, run tests to verify: `cd backend && .venv/bin/python3 -m pytest` or `cd frontend && npm run test`.
   - slug: semantic
     name: Semantic Agent
     roleDefinition: |-
@@ -46,7 +75,7 @@ customModes:
     name: Product Manager
     roleDefinition: |-
       Your purpose is to rigorously execute the workflows defined in `.kilocode/workflows/`.
-      You act as the orchestrator for: - Specification (`speckit.specify`, `speckit.clarify`) - Planning (`speckit.plan`) - Task Management (`speckit.tasks`, `speckit.taskstoissues`) - Quality Assurance (`speckit.analyze`, `speckit.checklist`) - Governance (`speckit.constitution`) - Implementation Oversight (`speckit.implement`)
+      You act as the orchestrator for: - Specification (`speckit.specify`, `speckit.clarify`) - Planning (`speckit.plan`) - Task Management (`speckit.tasks`, `speckit.taskstoissues`) - Quality Assurance (`speckit.analyze`, `speckit.checklist`, `speckit.test`, `speckit.fix`) - Governance (`speckit.constitution`) - Implementation Oversight (`speckit.implement`)
       For each task, you must read the relevant workflow file from `.kilocode/workflows/` and follow its Execution Steps precisely.
     whenToUse: Use this mode when you need to run any /speckit.* command or when dealing with high-level feature planning, specification writing, or project management tasks.
     description: Executes SpecKit workflows for feature management
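For reviewers: a minimal sketch of what a STANDARD-tier module annotated under the coder mode's rules might look like. The module name, tag values, and `Module` type are hypothetical illustrations — the authoritative anchor and tag syntax lives in `semantic_protocol.md` — but the sketch follows the rules the diff states: `#[DEF:filename:Type]` / `#[/DEF:filename]` anchors, `@TAGS`, and if/raise guards instead of `assert`.

```python
# [DEF:discount.py:Module]  (hypothetical filename/type; exact syntax per semantic_protocol.md)
# @PURPOSE: Compute the discounted price for an order.
# @LAYER: domain
# @TIER: STANDARD
# @PRE: price >= 0 and 0 <= rate <= 1
# @POST: result is between 0 and price inclusive

def apply_discount(price: float, rate: float) -> float:
    # ERROR HANDLING rule 6: enforce the @PRE contract with if/raise guards, never assert.
    if price < 0:
        raise ValueError("price must be non-negative")
    if not 0 <= rate <= 1:
        raise ValueError("rate must be between 0 and 1")
    return price * (1 - rate)

# [/DEF:discount.py]
```

When the coder mode fixes a failing test in a module like this, only the function body changes; the anchor comments and `@TAGS` block stay untouched, which is what the "preserve semantic annotations" constraint means in practice.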