---
description: Fix failing tests and implementation issues based on test reports
---

## User Input

```text
$ARGUMENTS
```

You **MUST** consider the user input before proceeding (if not empty).

## Goal

Analyze test failure reports, identify root causes, and fix implementation issues while preserving semantic protocol compliance.

## Operating Constraints

1. **USE CODER MODE**: Always switch to `coder` mode for code fixes
2. **SEMANTIC PROTOCOL**: Never remove semantic annotations ([DEF], @TAGS). Only update code logic.
3. **TEST DATA**: If tests use @TEST_ fixtures, preserve them when fixing
4. **NO DELETION**: Never delete existing tests or semantic annotations
5. **REPORT FIRST**: Always write a fix report before making changes

## Execution Steps

### 1. Load Test Report

**Required**: Test report file path (e.g., `specs//tests/reports/2026-02-19-report.md`)

**Parse the report for**:

- Failed test cases
- Error messages
- Stack traces
- Expected vs actual behavior
- Affected modules/files

### 2. Analyze Root Causes

For each failed test:

1. **Read the test file** to understand what it's testing
2. **Read the implementation file** to find the bug
3. **Check semantic protocol compliance**:
   - Does the implementation have correct [DEF] anchors?
   - Are @TAGS (@PRE, @POST, @UX_STATE, etc.) present?
   - Does the code match the TIER requirements?
4. **Identify the fix**:
   - Logic error in implementation
   - Missing error handling
   - Incorrect API usage
   - State management issue
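As a concrete sketch of the constraint above — annotations untouched, only logic changed — consider this hypothetical annotated function (the function name, tag values, and bug are all illustrative, not from any real codebase; the anchor and tag syntax follows the protocol described above):

```python
# Hypothetical annotated implementation. The [DEF] anchor and @TAGS below
# are illustrative; a fix edits only the logic between them.

# [DEF:calculate_discount]
# @PURPOSE: Apply a percentage discount to an order total
# @PRE: total >= 0 and 0 <= percent <= 100
# @POST: returns a non-negative discounted total
def calculate_discount(total: float, percent: float) -> float:
    if not (0 <= percent <= 100):
        raise ValueError("percent must be between 0 and 100")
    # Fix: the buggy version returned `total * (1 - percent)`, treating a
    # whole-number percent as a fraction. Only this line changes; every
    # annotation above and the closing anchor below are preserved.
    return total * (1 - percent / 100)
# [/DEF:calculate_discount]
```

The fix satisfies both the failing test and the @POST contract without touching the semantic markers, which is exactly what the SEMANTIC PROTOCOL constraint requires.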
### 3. Write Fix Report

Create a structured fix report:

````markdown
# Fix Report: [FEATURE]

**Date**: [YYYY-MM-DD]
**Report**: [Test Report Path]
**Fixer**: Coder Agent

## Summary

- Total Failed Tests: [X]
- Total Fixed: [X]
- Total Skipped: [X]

## Failed Tests Analysis

### Test: [Test Name]

**File**: `path/to/test.py`
**Error**: [Error message]
**Root Cause**: [Explanation of why test failed]
**Fix Required**: [Description of fix]
**Status**: [Pending/In Progress/Completed]

## Fixes Applied

### Fix 1: [Description]

**Affected File**: `path/to/file.py`
**Test Affected**: `[Test Name]`
**Changes**:

```diff
<<<<<<< SEARCH
[Original Code]
=======
[Fixed Code]
>>>>>>> REPLACE
```

**Verification**: [How to verify fix works]
**Semantic Integrity**: [Confirmed annotations preserved]

## Next Steps

- [ ] Run tests to verify fix: `cd backend && .venv/bin/python3 -m pytest`
- [ ] Check for related failing tests
- [ ] Update test documentation if needed
````

### 4. Apply Fixes (in Coder Mode)

Switch to `coder` mode and apply fixes:

1. **Read the implementation file** to get exact content
2. **Apply the fix** using apply_diff
3. **Preserve all semantic annotations**:
   - Keep [DEF:...] and [/DEF:...] anchors
   - Keep all @TAGS (@PURPOSE, @LAYER, @TIER, @RELATION, @PRE, @POST, @UX_STATE, @UX_FEEDBACK, @UX_RECOVERY)
4. **Only update code logic** to fix the bug
5. **Run tests** to verify the fix

### 5. Verification

After applying fixes:

1. **Run tests**:

   ```bash
   cd backend && .venv/bin/python3 -m pytest -v
   ```

   or

   ```bash
   cd frontend && npm run test
   ```

2. **Check test results**:
   - Failed tests should now pass
   - No new tests should fail
   - Coverage should not decrease
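The three verification criteria above reduce to a set comparison between the failing tests before and after the fix. A minimal sketch, assuming the test report has already been parsed into plain sets of test names (the helper name and its inputs are hypothetical):

```python
# Sketch of the "no new failures" check: given the set of failing test
# names before and after the fix, classify each test's outcome.
def verify_fix(failed_before: set[str], failed_after: set[str]) -> dict:
    return {
        "fixed": sorted(failed_before - failed_after),
        "still_failing": sorted(failed_before & failed_after),
        # A fix must never introduce regressions, so this list must be empty.
        "new_failures": sorted(failed_after - failed_before),
    }
```

If `new_failures` is non-empty, the fix introduced a regression and should be rolled back or revised before the report is marked completed.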
3. **Update fix report** with results:
   - Mark fixes as completed
   - Add verification steps
   - Note any remaining issues

## Output

Generate final fix report:

````markdown
# Fix Report: [FEATURE] - COMPLETED

**Date**: [YYYY-MM-DD]
**Report**: [Test Report Path]
**Fixer**: Coder Agent

## Summary

- Total Failed Tests: [X]
- Total Fixed: [X] ✅
- Total Skipped: [X]

## Fixes Applied

### Fix 1: [Description] ✅

**Affected File**: `path/to/file.py`
**Test Affected**: `[Test Name]`
**Changes**: [Summary of changes]
**Verification**: All tests pass ✅
**Semantic Integrity**: Preserved ✅

## Test Results

```
[Full test output showing all passing tests]
```

## Recommendations

- [ ] Monitor for similar issues
- [ ] Update documentation if needed
- [ ] Consider adding more tests for edge cases

## Related Files

- Test Report: [path]
- Implementation: [path]
- Test File: [path]
````

## Context for Fixing

$ARGUMENTS