---
description: Generate tests, manage test documentation, and ensure maximum code coverage
---

## User Input

$ARGUMENTS

You MUST consider the user input before proceeding (if not empty).

## Goal

Execute the full testing cycle: analyze code for testable modules, write tests with proper coverage, maintain test documentation, and ensure no test is duplicated or deleted.

## Operating Constraints

1. **NEVER delete existing tests** - Only update a test if it fails due to a bug in the test itself or in the implementation
2. **NEVER duplicate tests** - Check existing tests before creating new ones
3. **Use TEST_DATA fixtures** - For CRITICAL tier modules, read `@TEST_DATA` from `.specify/memory/semantics.md`
4. **Co-location required** - Write tests in `__tests__` directories relative to the code being tested

## Execution Steps

### 1. Analyze Context

Run `.specify/scripts/bash/check-prerequisites.sh --json --require-tasks --include-tasks` from the repo root and parse `FEATURE_DIR` and `AVAILABLE_DOCS` (a parsing sketch follows the list below).

Determine:

- **FEATURE_DIR** - where the feature is located
- **TASKS_FILE** - path to `tasks.md`
- Which modules need testing, based on task status
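
A minimal sketch of this step, assuming the script prints a single JSON object whose `FEATURE_DIR` and `AVAILABLE_DOCS` keys match the names above (the exact output shape may differ):

```python
import json
import subprocess

# Run the prerequisite check; check=True raises if the script fails.
result = subprocess.run(
    [".specify/scripts/bash/check-prerequisites.sh",
     "--json", "--require-tasks", "--include-tasks"],
    capture_output=True, text=True, check=True,
)

# Assumed keys -- verify against the script's actual JSON output.
prereqs = json.loads(result.stdout)
feature_dir = prereqs["FEATURE_DIR"]
available_docs = prereqs["AVAILABLE_DOCS"]
tasks_file = f"{feature_dir}/tasks.md"
```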

### 2. Load Relevant Artifacts

From `tasks.md`:

- Identify completed implementation tasks (not test tasks)
- Extract the file paths that need tests

From `.specify/memory/semantics.md`:

- Read `@TIER` annotations for modules
- For CRITICAL modules, read the `@TEST_DATA` fixtures

From existing tests:

- Scan `__tests__` directories for existing tests
- Identify test patterns and coverage gaps (a scanning sketch follows)
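
A sketch of these scans. The `@TIER: <module> -> <level>` line format is an illustrative assumption; match the pattern to whatever annotation syntax `semantics.md` actually uses:

```python
import re
from pathlib import Path

# Collect existing test files so new tests are neither duplicated nor
# written outside the co-located __tests__ directories (adjust the
# search root to the repo layout).
existing_tests = {
    p for p in Path(".").rglob("__tests__/*")
    if p.suffix in {".py", ".js", ".ts"}
}

# Extract @TIER annotations; the "@TIER: module -> LEVEL" syntax is assumed.
semantics = Path(".specify/memory/semantics.md").read_text()
tiers = dict(re.findall(r"@TIER:\s*(\S+)\s*->\s*(\w+)", semantics))
critical_modules = [m for m, tier in tiers.items() if tier == "CRITICAL"]
```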

### 3. Test Coverage Analysis

Create a coverage matrix:

| Module | File | Has Tests | TIER | TEST_DATA Available |
|--------|------|-----------|------|---------------------|
| ... | ... | ... | ... | ... |
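
A sketch that assembles the matrix from the earlier scans; `modules_to_test` and `test_data` are illustrative names, not fixed by this workflow:

```python
# Render the coverage matrix as markdown rows for coverage.md.
def coverage_matrix(modules_to_test, existing_tests, tiers, test_data):
    rows = [
        "| Module | File | Has Tests | TIER | TEST_DATA Available |",
        "|--------|------|-----------|------|---------------------|",
    ]
    for module, file in modules_to_test:
        has_tests = "yes" if any(module in t.name for t in existing_tests) else "no"
        tier = tiers.get(module, "?")  # unannotated modules surface as "?"
        fixtures = "yes" if module in test_data else "no"
        rows.append(f"| {module} | {file} | {has_tests} | {tier} | {fixtures} |")
    return "\n".join(rows)
```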

### 4. Write Tests (TDD Approach)

For each module requiring tests:

1. **Check existing tests**: Scan `__tests__/` for duplicates
2. **Read TEST_DATA**: For CRITICAL tier modules, read `@TEST_DATA` from `.specify/memory/semantics.md`
3. **Write the test**: Follow the co-location strategy
   - Python: `src/module/__tests__/test_module.py`
   - Svelte: `src/lib/components/__tests__/test_component.test.js`
4. **Use mocks**: Mock external dependencies with `unittest.mock.MagicMock` (see the sketch after this list)
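
A minimal co-located pytest example using `unittest.mock.MagicMock`; the `payments` module and its `charge()` function are hypothetical stand-ins for real code under test:

```python
# src/payments/__tests__/test_gateway.py  (co-located with src/payments/)
from unittest.mock import MagicMock

from payments import gateway  # hypothetical module under test

def test_charge_calls_external_client_once():
    # Replace the external HTTP client with a mock so the test
    # exercises our logic without real network calls.
    client = MagicMock()
    client.post.return_value = {"status": "ok", "id": "txn_1"}

    result = gateway.charge(client, amount_cents=500)

    client.post.assert_called_once()
    assert result["status"] == "ok"
```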

### 4a. UX Contract Testing (Frontend Components)

For Svelte components with `@UX_STATE`, `@UX_FEEDBACK`, or `@UX_RECOVERY` tags:

1. **Parse UX tags**: Read the component file and extract all `@UX_*` annotations (a tag-extraction sketch follows this list)
2. **Generate UX tests**: Create a test for each UX state transition

   ```js
   // Example: Testing @UX_STATE: Idle -> Expanded
   it('should transition from Idle to Expanded on toggle click', async () => {
     render(Sidebar);
     const toggleBtn = screen.getByRole('button', { name: /toggle/i });
     await fireEvent.click(toggleBtn);
     expect(screen.getByTestId('sidebar')).toHaveClass('expanded');
   });
   ```

3. **Test @UX_FEEDBACK**: Verify visual feedback (toast, shake, color changes)
4. **Test @UX_RECOVERY**: Verify error recovery mechanisms (retry, clear input)
5. **Use @UX_TEST fixtures**: If the component has `@UX_TEST` tags, use them as test specifications
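
A sketch of step 1, assuming the `@UX_*` tags appear one per line inside comments, as in the template below:

```python
import re
from pathlib import Path

# Pull every @UX_* annotation out of a component file.
def extract_ux_tags(component_path):
    source = Path(component_path).read_text()
    return re.findall(r"@(UX_\w+):\s*(.+)", source)

# e.g. extract_ux_tags("src/lib/components/Sidebar.svelte")
# -> [("UX_STATE", "Idle -> Expanded"), ("UX_FEEDBACK", "Toast on success"), ...]
```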

**UX Test Template:**

```js
// [DEF:__tests__/test_Component:Module]
// @RELATION: VERIFIES -> ../Component.svelte
// @PURPOSE: Test UX states and transitions

describe('Component UX States', () => {
  // @UX_STATE: Idle -> {action: click, expected: Active}
  it('should transition Idle -> Active on click', async () => { ... });

  // @UX_FEEDBACK: Toast on success
  it('should show toast on successful action', async () => { ... });

  // @UX_RECOVERY: Retry on error
  it('should allow retry on error', async () => { ... });
});
```

### 5. Test Documentation

Create or update documentation in `specs/<feature>/tests/`:

```text
tests/
├── README.md           # Test strategy and overview
├── coverage.md         # Coverage matrix and reports
└── reports/
    └── YYYY-MM-DD-report.md
```

### 6. Execute Tests

Run the tests and report the results (a result-parsing sketch follows the commands):

**Backend:**

```bash
cd backend && .venv/bin/python3 -m pytest -v
```

**Frontend:**

```bash
cd frontend && npm run test
```
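
A sketch that captures the backend counts for the report below by parsing pytest's human-readable summary line; a JSON-report plugin would be sturdier, but this avoids extra dependencies:

```python
import re
import subprocess

# Run the backend suite and scrape counts from a summary such as
# "== 12 passed, 1 failed, 2 skipped in 3.2s ==".
proc = subprocess.run(
    "cd backend && .venv/bin/python3 -m pytest -v",
    shell=True, capture_output=True, text=True,
)
counts = {
    label: int(n)
    for n, label in re.findall(r"(\d+) (passed|failed|skipped)", proc.stdout)
}
total = sum(counts.values())
```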

### 7. Update Tasks

Mark test tasks as completed in `tasks.md` (a sketch follows the list), recording:

- Test file path
- Coverage achieved
- Any issues found
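
A sketch of the update, assuming tasks use the common `- [ ] T0xx ...` checkbox convention; the task ID and note format are illustrative:

```python
import re
from pathlib import Path

# Flip a task's checkbox to done and append a short note.
def mark_task_done(tasks_file, task_id, note):
    text = Path(tasks_file).read_text()
    pattern = rf"- \[ \] ({re.escape(task_id)}\b.*)"
    text = re.sub(pattern, rf"- [x] \1  <!-- {note} -->", text)
    Path(tasks_file).write_text(text)

# mark_task_done("specs/feature/tasks.md", "T012",
#                "tests: src/module/__tests__/test_module.py, coverage 87%")
```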

## Output

Generate a test execution report:

```markdown
# Test Report: [FEATURE]

**Date**: [YYYY-MM-DD]
**Executed by**: Tester Agent

## Coverage Summary

| Module | Tests | Coverage % |
|--------|-------|------------|
| ... | ... | ... |

## Test Results

- Total: [X]
- Passed: [X]
- Failed: [X]
- Skipped: [X]

## Issues Found

| Test | Error | Resolution |
|------|-------|------------|
| ... | ... | ... |

## Next Steps

- [ ] Fix failed tests
- [ ] Add more coverage for [module]
- [ ] Review TEST_DATA fixtures
```

## Context for Testing

$ARGUMENTS