---
description: Generate tests, manage test documentation, and ensure maximum code coverage
---
## User Input

$ARGUMENTS

You MUST consider the user input before proceeding (if not empty).
## Goal

Execute the full testing cycle: analyze code for testable modules, write tests with proper coverage, maintain test documentation, and ensure no test duplication or deletion.
## Operating Constraints

- **NEVER delete existing tests** - Only update them if they fail due to bugs in the test or the implementation
- **NEVER duplicate tests** - Check existing tests first before creating new ones
- **Use TEST_DATA fixtures** - For CRITICAL tier modules, read @TEST_DATA from .ai/standards/semantics.md
- **Co-location required** - Write tests in `__tests__` directories relative to the code being tested (see the layout sketch below)
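For illustration, co-location for a hypothetical `auth` module would look like this (module and file names are placeholders, not a prescribed structure):

```text
src/
└── auth/
    ├── auth.py
    └── __tests__/
        └── test_auth.py
```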
## Execution Steps
### 1. Analyze Context

Run `.specify/scripts/bash/check-prerequisites.sh --json --require-tasks --include-tasks` from the repo root and parse FEATURE_DIR and AVAILABLE_DOCS (a parsing sketch follows the list below).

Determine:
- FEATURE_DIR - where the feature is located
- TASKS_FILE - path to tasks.md
- Which modules need testing based on task status
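For illustration only, a minimal sketch of consuming the script output, assuming it prints a JSON object with FEATURE_DIR and AVAILABLE_DOCS keys as described above (the real output shape of the script may differ):

```python
# Sketch: run the prerequisites script and read FEATURE_DIR / AVAILABLE_DOCS.
# The key names mirror the description above; verify them against the actual script.
import json
import subprocess

result = subprocess.run(
    [
        ".specify/scripts/bash/check-prerequisites.sh",
        "--json", "--require-tasks", "--include-tasks",
    ],
    capture_output=True, text=True, check=True,
)
prereqs = json.loads(result.stdout)

feature_dir = prereqs["FEATURE_DIR"]        # e.g. a path under specs/
available_docs = prereqs["AVAILABLE_DOCS"]  # list of available artifact paths
tasks_file = f"{feature_dir}/tasks.md"
```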
### 2. Load Relevant Artifacts

From tasks.md:
- Identify completed implementation tasks (not test tasks)
- Extract file paths that need tests

From .ai/standards/semantics.md:
- Read @TIER annotations for modules
- For CRITICAL modules: Read @TEST_ fixtures

From existing tests:
- Scan `__tests__` directories for existing tests
- Identify test patterns and coverage gaps
### 3. Test Coverage Analysis

Create a coverage matrix (a scanning sketch follows the table):

| Module | File | Has Tests | TIER | TEST_DATA Available |
|---|---|---|---|---|
| ... | ... | ... | ... | ... |
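As a sketch rather than a fixed procedure, the "Has Tests" column can be derived by pairing each source file with a co-located `__tests__` counterpart; the src root and test-file naming here are assumptions, and the TIER and TEST_DATA columns still come from .ai/standards/semantics.md:

```python
# Sketch: derive the "Has Tests" column from co-located __tests__ files.
from pathlib import Path

def has_tests_rows(src_root: str = "src") -> list[dict]:
    rows = []
    for module in Path(src_root).rglob("*.py"):
        if "__tests__" in module.parts:
            continue  # skip the test files themselves
        expected_test = module.parent / "__tests__" / f"test_{module.name}"
        rows.append({
            "module": module.stem,
            "file": str(module),
            "has_tests": expected_test.exists(),
        })
    return rows

for row in has_tests_rows():
    print(f"| {row['module']} | {row['file']} | {row['has_tests']} | ... | ... |")
```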
### 4. Write Tests (TDD Approach)

For each module requiring tests:
- **Check existing tests**: Scan `__tests__/` for duplicates
- **Read TEST_DATA**: If CRITICAL tier, read @TEST_ from the semantics header
- **Write test**: Follow the co-location strategy
  - Python: `src/module/__tests__/test_module.py`
  - Svelte: `src/lib/components/__tests__/test_component.test.js`
- **Use mocks**: Use `unittest.mock.MagicMock` for external dependencies (see the sketch after this list)
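For illustration, a minimal pytest sketch using MagicMock; the module, the `fetch_status` function, and its behaviour (calling `client.get()` and returning the result) are hypothetical placeholders, not part of any real codebase:

```python
# src/module/__tests__/test_module.py (hypothetical module and function names)
from unittest.mock import MagicMock

from module import fetch_status  # assumed to call client.get() and return its result

def test_fetch_status_uses_injected_client():
    client = MagicMock()
    client.get.return_value = {"status": "ok"}

    result = fetch_status(client)

    client.get.assert_called_once()
    assert result == {"status": "ok"}
```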
### 4a. UX Contract Testing (Frontend Components)

For Svelte components with @UX_STATE, @UX_FEEDBACK, @UX_RECOVERY tags:

- **Parse UX tags**: Read the component file and extract all `@UX_*` annotations
- **Generate UX tests**: Create tests for each UX state transition

  ```javascript
  // Example: Testing @UX_STATE: Idle -> Expanded
  it('should transition from Idle to Expanded on toggle click', async () => {
    render(Sidebar);
    const toggleBtn = screen.getByRole('button', { name: /toggle/i });
    await fireEvent.click(toggleBtn);
    expect(screen.getByTestId('sidebar')).toHaveClass('expanded');
  });
  ```

- **Test @UX_FEEDBACK**: Verify visual feedback (toast, shake, color changes)
- **Test @UX_RECOVERY**: Verify error recovery mechanisms (retry, clear input)
- **Use @UX_TEST fixtures**: If the component has `@UX_TEST` tags, use them as test specifications
**UX Test Template**:

```javascript
// [DEF:__tests__/test_Component:Module]
// @RELATION: VERIFIES -> ../Component.svelte
// @PURPOSE: Test UX states and transitions

describe('Component UX States', () => {
  // @UX_STATE: Idle -> {action: click, expected: Active}
  it('should transition Idle -> Active on click', async () => { ... });

  // @UX_FEEDBACK: Toast on success
  it('should show toast on successful action', async () => { ... });

  // @UX_RECOVERY: Retry on error
  it('should allow retry on error', async () => { ... });
});
```
### 5. Test Documentation

Create/update documentation in `specs/<feature>/tests/`:

```text
tests/
├── README.md      # Test strategy and overview
├── coverage.md    # Coverage matrix and reports
└── reports/
    └── YYYY-MM-DD-report.md
```
### 6. Execute Tests

Run tests and report results.

Backend:

```bash
cd backend && .venv/bin/python3 -m pytest -v
```

Frontend:

```bash
cd frontend && npm run test
```
### 7. Update Tasks

Mark test tasks as completed in tasks.md (an example entry follows) with:
- Test file path
- Coverage achieved
- Any issues found
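For illustration, a completed test task entry might look like the following; the task ID, coverage figure, and layout are hypothetical, so follow whatever conventions tasks.md already uses:

```markdown
- [x] T042 Unit tests for auth module
  - Test file: src/auth/__tests__/test_auth.py
  - Coverage: 87% of lines
  - Issues: none
```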
## Output

Generate a test execution report:

```markdown
# Test Report: [FEATURE]

**Date**: [YYYY-MM-DD]
**Executed by**: Tester Agent

## Coverage Summary

| Module | Tests | Coverage % |
|--------|-------|------------|
| ... | ... | ... |

## Test Results

- Total: [X]
- Passed: [X]
- Failed: [X]
- Skipped: [X]

## Issues Found

| Test | Error | Resolution |
|------|-------|------------|
| ... | ... | ... |

## Next Steps

- [ ] Fix failed tests
- [ ] Add more coverage for [module]
- [ ] Review TEST_DATA fixtures
```
## Context for Testing

$ARGUMENTS