From fdb944f1233a40fd19c0b59e956a01e2d1a5f4a6 Mon Sep 17 00:00:00 2001
From: busya
Date: Thu, 19 Feb 2026 12:44:31 +0300
Subject: [PATCH] Test logic update

---
 .kilocode/workflows/speckit.test.md      | 220 +++++++++++++++--------
 .kilocodemodes                           |  23 ++-
 .specify/templates/plan-template.md      |  11 ++
 .specify/templates/spec-template.md      |  49 +++++
 .specify/templates/test-docs-template.md | 152 ++++++++++++++++
 semantic_protocol.md                     |  20 ++-
 6 files changed, 388 insertions(+), 87 deletions(-)
 create mode 100644 .specify/templates/test-docs-template.md

diff --git a/.kilocode/workflows/speckit.test.md b/.kilocode/workflows/speckit.test.md
index f448600..45e40d8 100644
--- a/.kilocode/workflows/speckit.test.md
+++ b/.kilocode/workflows/speckit.test.md
@@ -1,17 +1,7 @@
-№ **speckit.tasks.md**
-### Modified Workflow
+---
+
+description: Generate tests, manage test documentation, and ensure maximum code coverage
-```markdown
-description: Generate an actionable, dependency-ordered tasks.md for the feature based on available design artifacts.
-handoffs:
-  - label: Analyze For Consistency
-    agent: speckit.analyze
-    prompt: Run a project analysis for consistency
-    send: true
-  - label: Implement Project
-    agent: speckit.implement
-    prompt: Start the implementation in phases
-    send: true
 ---
 
 ## User Input
@@ -22,95 +12,167 @@
 
 $ARGUMENTS
 
 You **MUST** consider the user input before proceeding (if not empty).
 
-## Outline
+## Goal
 
-1. **Setup**: Run `.specify/scripts/bash/check-prerequisites.sh --json` from repo root and parse FEATURE_DIR and AVAILABLE_DOCS list. All paths must be absolute.
+Execute the full testing cycle: analyze code for testable modules, write tests with proper coverage, maintain test documentation, and ensure no test duplication or deletion.
 
-2.
**Load design documents**: Read from FEATURE_DIR:
-   - **Required**: plan.md (tech stack, libraries, structure), spec.md (user stories with priorities), ux_reference.md (experience source of truth)
-   - **Optional**: data-model.md (entities), contracts/ (API endpoints), research.md (decisions)
+## Operating Constraints
 
-3. **Execute task generation workflow**:
-   - **Architecture Analysis (CRITICAL)**: Scan existing codebase for patterns (DI, Auth, ORM).
-   - Load plan.md/spec.md.
-   - Generate tasks organized by user story.
-   - **Apply Fractal Co-location**: Ensure all unit tests are mapped to `__tests__` subdirectories relative to the code.
-   - Validate task completeness.
+1. **NEVER delete existing tests** - Only update if they fail due to bugs in the test or implementation
+2. **NEVER duplicate tests** - Check existing tests first before creating new ones
+3. **Use TEST_DATA fixtures** - For CRITICAL tier modules, read @TEST_DATA from semantic_protocol.md
+4. **Co-location required** - Write tests in `__tests__` directories relative to the code being tested
 
-4. **Generate tasks.md**: Use `.specify/templates/tasks-template.md` as structure.
-   - Phase 1: Context & Setup.
-   - Phase 2: Foundational tasks.
-   - Phase 3+: User Stories (Priority order).
-   - Final Phase: Polish.
-   - **Strict Constraint**: Ensure tasks follow the Co-location and Mocking rules below.
+## Execution Steps
 
-5. **Report**: Output path to generated tasks.md and summary.
+### 1. Analyze Context
 
-Context for task generation: $ARGUMENTS
+Run `.specify/scripts/bash/check-prerequisites.sh --json --require-tasks --include-tasks` from repo root and parse FEATURE_DIR and AVAILABLE_DOCS.
 
-## Task Generation Rules
+Determine:
+- FEATURE_DIR - where the feature is located
+- TASKS_FILE - path to tasks.md
+- Which modules need testing based on task status
 
-**CRITICAL**: Tasks MUST be actionable, specific, architecture-aware, and context-local.
+### 2.
Load Relevant Artifacts
 
-### Implementation & Testing Constraints (ANTI-LOOP & CO-LOCATION)
+**From tasks.md:**
+- Identify completed implementation tasks (not test tasks)
+- Extract file paths that need tests
 
-To prevent infinite debugging loops and context fragmentation, apply these rules:
+**From semantic_protocol.md:**
+- Read @TIER annotations for modules
+- For CRITICAL modules: Read @TEST_DATA fixtures
 
-1. **Fractal Co-location Strategy (MANDATORY)**:
-   - **Rule**: Unit tests MUST live next to the code they verify.
-   - **Forbidden**: Do NOT create unit tests in root `tests/` or `backend/tests/`. Those are for E2E/Integration only.
-   - **Pattern (Python)**:
-     - Source: `src/domain/order/processing.py`
-     - Test Task: `Create tests in src/domain/order/__tests__/test_processing.py`
-   - **Pattern (Frontend)**:
-     - Source: `src/lib/components/UserCard.svelte`
-     - Test Task: `Create tests in src/lib/components/__tests__/UserCard.test.ts`
+**From existing tests:**
+- Scan `__tests__` directories for existing tests
+- Identify test patterns and coverage gaps
 
-2. **Semantic Relations**:
-   - Test generation tasks must explicitly instruct to add the relation header: `# @RELATION: VERIFIES -> [TargetComponent]`
+### 3. Test Coverage Analysis
 
-3. **Strict Mocking for Unit Tests**:
-   - Any task creating Unit Tests MUST specify: *"Use `unittest.mock.MagicMock` for heavy dependencies (DB sessions, Auth). Do NOT instantiate real service classes."*
+Create coverage matrix:
 
-4. **Schema/Model Separation**:
-   - Explicitly separate tasks for ORM Models (SQLAlchemy) and Pydantic Schemas.
+| Module | File | Has Tests | TIER | TEST_DATA Available |
+|--------|------|-----------|------|---------------------|
+| ... | ... | ... | ... | ... |
 
-### UX Preservation (CRITICAL)
+### 4. Write Tests (TDD Approach)
 
-- **Source of Truth**: `ux_reference.md` is the absolute standard.
-- **Verification Task**: You **MUST** add a specific task at the end of each User Story phase: `- [ ] Txxx [USx] Verify implementation matches ux_reference.md (Happy Path & Errors)`
+For each module requiring tests:
 
-### Checklist Format (REQUIRED)
+1. **Check existing tests**: Scan `__tests__/` for duplicates
+2. **Read TEST_DATA**: If CRITICAL tier, read @TEST_DATA from semantic_protocol.md
+3. **Write test**: Follow co-location strategy
+   - Python: `src/module/__tests__/test_module.py`
+   - Svelte: `src/lib/components/__tests__/test_component.test.js`
+4. **Use mocks**: Use `unittest.mock.MagicMock` for external dependencies
 
-Every task MUST strictly follow this format:
+### 4a. UX Contract Testing (Frontend Components)
 
-```text
-- [ ] [TaskID] [P?] [Story?] Description with file path
+For Svelte components with `@UX_STATE`, `@UX_FEEDBACK`, `@UX_RECOVERY` tags:
+
+1. **Parse UX tags**: Read component file and extract all `@UX_*` annotations
+2. **Generate UX tests**: Create tests for each UX state transition
+   ```javascript
+   // Example: Testing @UX_STATE: Idle -> Expanded
+   it('should transition from Idle to Expanded on toggle click', async () => {
+     render(Sidebar);
+     const toggleBtn = screen.getByRole('button', { name: /toggle/i });
+     await fireEvent.click(toggleBtn);
+     expect(screen.getByTestId('sidebar')).toHaveClass('expanded');
+   });
+   ```
+3. **Test @UX_FEEDBACK**: Verify visual feedback (toast, shake, color changes)
+4. **Test @UX_RECOVERY**: Verify error recovery mechanisms (retry, clear input)
+5. **Use @UX_TEST fixtures**: If component has `@UX_TEST` tags, use them as test specifications
+
+**UX Test Template:**
+```javascript
+// [DEF:__tests__/test_Component:Module]
+// @RELATION: VERIFIES -> ../Component.svelte
+// @PURPOSE: Test UX states and transitions
+
+describe('Component UX States', () => {
+  // @UX_STATE: Idle -> {action: click, expected: Active}
+  it('should transition Idle -> Active on click', async () => { ...
});
+
+  // @UX_FEEDBACK: Toast on success
+  it('should show toast on successful action', async () => { ... });
+
+  // @UX_RECOVERY: Retry on error
+  it('should allow retry on error', async () => { ... });
+});
 ```
 
-**Examples**:
-- ✅ `- [ ] T005 [US1] Create unit tests for OrderService in src/services/__tests__/test_order.py (Mock DB)`
-- ✅ `- [ ] T006 [US1] Implement OrderService in src/services/order.py`
-- ❌ `- [ ] T005 [US1] Create tests in backend/tests/test_order.py` (VIOLATION: Wrong location)
+### 5. Test Documentation
 
-### Task Organization & Phase Structure
+Create/update documentation in `specs//tests/`:
 
-**Phase 1: Context & Setup**
-- **Goal**: Prepare environment and understand existing patterns.
-- **Mandatory Task**: `- [ ] T001 Analyze existing project structure, auth patterns, and `conftest.py` location`
+```
+tests/
+├── README.md        # Test strategy and overview
+├── coverage.md      # Coverage matrix and reports
+└── reports/
+    └── YYYY-MM-DD-report.md
+```
 
-**Phase 2: Foundational (Data & Core)**
-- Database Models (ORM).
-- Pydantic Schemas (DTOs).
-- Core Service interfaces.
+### 6. Execute Tests
 
-**Phase 3+: User Stories (Iterative)**
-- **Step 1: Isolation Tests (Co-located)**:
-  - `- [ ] Txxx [USx] Create unit tests for [Component] in [Path]/__tests__/test_[name].py`
-  - *Note: Specify using MagicMock for external deps.*
-- **Step 2: Implementation**: Services -> Endpoints.
-- **Step 3: Integration**: Wire up real dependencies (if E2E tests requested).
-- **Step 4: UX Verification**.
+Run tests and report results:
 
-**Final Phase: Polish**
-- Linting, formatting, final manual verify.
+**Backend:**
+```bash
+cd backend && .venv/bin/python3 -m pytest -v
+```
+
+**Frontend:**
+```bash
+cd frontend && npm run test
+```
+
+### 7.
Update Tasks
 
+Mark test tasks as completed in tasks.md with:
+- Test file path
+- Coverage achieved
+- Any issues found
+
+## Output
+
+Generate test execution report:
+
+```markdown
+# Test Report: [FEATURE]
+
+**Date**: [YYYY-MM-DD]
+**Executed by**: Tester Agent
+
+## Coverage Summary
+
+| Module | Tests | Coverage % |
+|--------|-------|------------|
+| ... | ... | ... |
+
+## Test Results
+
+- Total: [X]
+- Passed: [X]
+- Failed: [X]
+- Skipped: [X]
+
+## Issues Found
+
+| Test | Error | Resolution |
+|------|-------|------------|
+| ... | ... | ... |
+
+## Next Steps
+
+- [ ] Fix failed tests
+- [ ] Add more coverage for [module]
+- [ ] Review TEST_DATA fixtures
+```
+
+## Context for Testing
+
+$ARGUMENTS

diff --git a/.kilocodemodes b/.kilocodemodes
index 26bdf0b..d6ce9b9 100644
--- a/.kilocodemodes
+++ b/.kilocodemodes
@@ -1,18 +1,31 @@
 customModes:
   - slug: tester
     name: Tester
-    description: QA and Plan Verification Specialist
+    description: QA and Test Engineer - Full Testing Cycle
     roleDefinition: |-
-      You are Kilo Code, acting as a QA and Verification Specialist. Your primary goal is to validate that the project implementation aligns strictly with the defined specifications and task plans.
-      Your responsibilities include:
-      - Reading and analyzing task plans and specifications (typically in the `specs/` directory).
-      - Verifying that implemented code matches the requirements.
-      - Executing tests and validating system behavior via CLI or Browser.
-      - Updating the status of tasks in the plan files (e.g., marking checkboxes [x]) as they are verified.
-      - Identifying and reporting missing features or bugs.
-    whenToUse: Use this mode when you need to audit the progress of a project, verify completed tasks against the plan, run quality assurance checks, or update the status of task lists in specification documents.
+      You are Kilo Code, acting as a QA and Test Engineer.
Your primary goal is to ensure maximum test coverage, maintain test quality, and preserve existing tests.
+      Your responsibilities include:
+      - WRITING TESTS: Create comprehensive unit tests following TDD principles, using co-location strategy (`__tests__` directories).
+      - TEST DATA: For CRITICAL tier modules, you MUST use @TEST_DATA fixtures defined in semantic_protocol.md. Read and apply them in your tests.
+      - DOCUMENTATION: Maintain test documentation in the `specs//tests/` directory with coverage reports and test case specifications.
+      - VERIFICATION: Run tests, analyze results, and ensure all tests pass.
+      - PROTECTION: NEVER delete existing tests. NEVER duplicate tests - check for existing tests first.
+    whenToUse: Use this mode when you need to write tests, run test coverage analysis, or perform quality assurance across the full testing cycle.
     groups:
       - read
       - edit
       - command
       - browser
       - mcp
-    customInstructions: 1. Always begin by loading the relevant plan or task list from the `specs/` directory. 2. Do not assume a task is done just because it is checked; verify the code or functionality first if asked to audit. 3. When updating task lists, ensure you only mark items as complete if you have verified them.
+    customInstructions: |
+      1. CO-LOCATION: Write tests in `__tests__` subdirectories relative to the code being tested (Fractal Strategy).
+      2. TEST DATA MANDATORY: For CRITICAL modules, read @TEST_DATA from semantic_protocol.md and use fixtures in tests.
+      3. UX CONTRACT TESTING: For Svelte components with @UX_STATE, @UX_FEEDBACK, @UX_RECOVERY tags, create comprehensive UX tests.
+      4. NO DELETION: Never delete existing tests - only update if they fail due to legitimate bugs.
+      5. NO DUPLICATION: Check existing tests in `__tests__/` before creating new ones. Reuse existing test patterns.
+      6. DOCUMENTATION: Create test reports in `specs//tests/reports/YYYY-MM-DD-report.md`.
+      7. COVERAGE: Aim for maximum coverage but prioritize CRITICAL and STANDARD tier modules.
+      8. RUN TESTS: Execute tests using `cd backend && .venv/bin/python3 -m pytest` or `cd frontend && npm run test`.
   - slug: semantic
     name: Semantic Agent
     roleDefinition: |-

diff --git a/.specify/templates/plan-template.md b/.specify/templates/plan-template.md
index 111cd8b..91c29b2 100644
--- a/.specify/templates/plan-template.md
+++ b/.specify/templates/plan-template.md
@@ -102,3 +102,14 @@
 directories captured above]
 |-----------|------------|-------------------------------------|
 | [e.g., 4th project] | [current need] | [why 3 projects insufficient] |
 | [e.g., Repository pattern] | [specific problem] | [why direct DB access insufficient] |
+
+## Test Data Reference
+
+> **For CRITICAL tier components, reference test fixtures from spec.md**
+
+| Component | TIER | Fixture Name | Location |
+|-----------|------|--------------|----------|
+| [e.g., DashboardAPI] | CRITICAL | valid_dashboard | spec.md#test-data-fixtures |
+| [e.g., TaskDrawer] | CRITICAL | task_states | spec.md#test-data-fixtures |
+
+**Note**: Tester Agent MUST use these fixtures when writing unit tests for CRITICAL modules. See `semantic_protocol.md` for @TEST_DATA syntax.
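The `@TEST_DATA: fixture_name -> {JSON_PATH} | {INLINE_DATA}` syntax referenced in the note above is machine-parseable. As a sketch only (the helper name and regex are illustrative, not part of this patch), a Tester Agent could extract fixtures from annotated source lines like this:

```python
import json
import re

# Matches "@TEST_DATA: name -> {...}" anywhere in a line; the braces may wrap
# either a file reference ("./fixtures/users.json#valid") or inline JSON.
TEST_DATA_RE = re.compile(r"@TEST_DATA:\s*(\w+)\s*->\s*\{(.+)\}\s*$")

def parse_test_data(lines):
    """Return {fixture_name: ("file", path, key)} or {fixture_name: ("inline", data)}.

    Assumption: file references start with "./", as in the examples above.
    """
    fixtures = {}
    for line in lines:
        m = TEST_DATA_RE.search(line)
        if not m:
            continue
        name, body = m.group(1), m.group(2).strip()
        if body.startswith("./"):
            # File reference: path and optional "#key" selector.
            path, _, key = body.partition("#")
            fixtures[name] = ("file", path, key)
        else:
            # Inline JSON: re-wrap the braces consumed by the regex.
            fixtures[name] = ("inline", json.loads("{" + body + "}"))
    return fixtures
```

Both annotation forms from the protocol examples round-trip through this parser; a real implementation would additionally resolve the file references and validate the JSON against the component's contract.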
diff --git a/.specify/templates/spec-template.md b/.specify/templates/spec-template.md
index 8419175..013d18d 100644
--- a/.specify/templates/spec-template.md
+++ b/.specify/templates/spec-template.md
@@ -114,3 +114,52 @@
 - **SC-002**: [Measurable metric, e.g., "System handles 1000 concurrent users without degradation"]
 - **SC-003**: [User satisfaction metric, e.g., "90% of users successfully complete primary task on first attempt"]
 - **SC-004**: [Business metric, e.g., "Reduce support tickets related to [X] by 50%"]
+
+---
+
+## Test Data Fixtures *(recommended for CRITICAL components)*
+
+### Fixtures
+
+```yaml
+# Example fixture format
+fixture_name:
+  description: "Description of this test data"
+  data:
+    # JSON or YAML data structure
+```
+
+### Example: Dashboard API
+
+```yaml
+valid_dashboard:
+  description: "Valid dashboard object for API responses"
+  data:
+    id: 1
+    title: "Sales Report"
+    slug: "sales"
+    git_status:
+      branch: "main"
+      sync_status: "OK"
+    last_task:
+      task_id: "task-123"
+      status: "SUCCESS"
+
+empty_dashboards:
+  description: "Empty dashboard list response"
+  data:
+    dashboards: []
+    total: 0
+    page: 1
+
+error_not_found:
+  description: "404 error response"
+  data:
+    detail: "Dashboard not found"
+```

diff --git a/.specify/templates/test-docs-template.md b/.specify/templates/test-docs-template.md
new file mode 100644
index 0000000..f8cbd11
--- /dev/null
+++ b/.specify/templates/test-docs-template.md
@@ -0,0 +1,152 @@
+---
+
+description: "Test documentation template for feature implementation"
+
+---
+
+# Test Documentation: [FEATURE NAME]
+
+**Feature**: [Link to spec.md]
+**Created**: [DATE]
+**Updated**: [DATE]
+**Tester**: [Agent/User Name]
+
+---
+
+## Overview
+
+[Brief description of what this feature does and why testing is important]
+
+**Test Strategy**:
+- [ ] Unit Tests (co-located in `__tests__/` directories)
+- [ ] Integration Tests (if needed)
+- [ ] E2E Tests (if critical user flows)
+- [ ] Contract Tests (for API
endpoints)
+
+---
+
+## Test Coverage Matrix
+
+| Module | File | Unit Tests | Coverage % | Status |
+|--------|------|------------|------------|--------|
+| [Module Name] | `path/to/file.py` | [x] | [XX%] | [Pass/Fail] |
+| [Module Name] | `path/to/file.svelte` | [x] | [XX%] | [Pass/Fail] |
+
+---
+
+## Test Cases
+
+### [Module Name]
+
+**Target File**: `path/to/module.py`
+
+| ID | Test Case | Type | Expected Result | Status |
+|----|-----------|------|-----------------|--------|
+| TC001 | [Description] | [Unit/Integration] | [Expected] | [Pass/Fail] |
+| TC002 | [Description] | [Unit/Integration] | [Expected] | [Pass/Fail] |
+
+---
+
+## Test Execution Reports
+
+### Report [YYYY-MM-DD]
+
+**Executed by**: [Tester]
+**Duration**: [X] minutes
+**Result**: [Pass/Fail]
+
+**Summary**:
+- Total Tests: [X]
+- Passed: [X]
+- Failed: [X]
+- Skipped: [X]
+
+**Failed Tests**:
+| Test | Error | Resolution |
+|------|-------|------------|
+| [Test Name] | [Error Message] | [How Fixed] |
+
+---
+
+## Anti-Patterns & Rules
+
+### ✅ DO
+
+1. Write tests BEFORE implementation (TDD approach)
+2. Use co-location: `src/module/__tests__/test_module.py`
+3. Use MagicMock for external dependencies (DB, Auth, APIs)
+4. Include semantic annotations: `# @RELATION: VERIFIES -> module.name`
+5. Test edge cases and error conditions
+6. **Test UX states** for Svelte components (@UX_STATE, @UX_FEEDBACK, @UX_RECOVERY)
+
+### ❌ DON'T
+
+1. Delete existing tests (only update if they fail)
+2. Duplicate tests - check for existing tests first
+3. Test implementation details instead of behavior
+4. Use real external services in unit tests
+5. Skip error handling tests
+6.
**Skip UX contract tests** for CRITICAL frontend components
+
+---
+
+## UX Contract Testing (Frontend)
+
+### UX States Coverage
+
+| Component | @UX_STATE | @UX_FEEDBACK | @UX_RECOVERY | Tests |
+|-----------|-----------|--------------|--------------|-------|
+| [Component] | [states] | [feedback] | [recovery] | [status] |
+
+### UX Test Cases
+
+| ID | Component | UX Tag | Test Action | Expected Result | Status |
+|----|-----------|--------|-------------|-----------------|--------|
+| UX001 | [Component] | @UX_STATE: Idle | [action] | [expected] | [Pass/Fail] |
+| UX002 | [Component] | @UX_FEEDBACK | [action] | [expected] | [Pass/Fail] |
+| UX003 | [Component] | @UX_RECOVERY | [action] | [expected] | [Pass/Fail] |
+
+### UX Test Examples
+
+```javascript
+// Testing @UX_STATE transition
+it('should transition from Idle to Loading on submit', async () => {
+  render(FormComponent);
+  await fireEvent.click(screen.getByText('Submit'));
+  expect(screen.getByTestId('form')).toHaveClass('loading');
+});
+
+// Testing @UX_FEEDBACK
+it('should show error toast on validation failure', async () => {
+  render(FormComponent);
+  await fireEvent.click(screen.getByText('Submit'));
+  expect(screen.getByRole('alert')).toHaveTextContent('Validation error');
+});
+
+// Testing @UX_RECOVERY
+it('should allow retry after error', async () => {
+  render(FormComponent);
+  // Trigger error state
+  await fireEvent.click(screen.getByText('Submit'));
+  // Click retry
+  await fireEvent.click(screen.getByText('Retry'));
+  expect(screen.getByTestId('form')).not.toHaveClass('error');
+});
+```
+
+---
+
+## Notes
+
+- [Additional notes about testing approach]
+- [Known issues or limitations]
+- [Recommendations for future testing]
+
+---
+
+## Related Documents
+
+- [spec.md](./spec.md)
+- [plan.md](./plan.md)
+- [tasks.md](./tasks.md)
+- [contracts/](./contracts/)
\ No newline at end of file

diff --git a/semantic_protocol.md b/semantic_protocol.md
index ad87871..3dc1fad 100755
---
a/semantic_protocol.md
+++ b/semantic_protocol.md
@@ -54,16 +54,30 @@
 **@UX_FEEDBACK:** System reaction (Toast, Shake, Red Border).
 **@UX_RECOVERY:** Mechanism for the user to correct an error (Retry, Clear Input).
 
+**UX Testing Tags (for the Tester Agent):**
+  **@UX_TEST:** Test specification for a UX state.
+  Format: `@UX_TEST: [state] -> {action, expected}`
+  Example: `@UX_TEST: Idle -> {click: toggle, expected: isExpanded=true}`
+  Rule: Do not use `assert` in code; use `if/raise` or guards.
 
 #### V. ADAPTATION (TIERS)
 
 Defined by the `@TIER` tag in the Header.
 
-1. **CRITICAL** (Core/Security/**Complex UI**):
+1. **CRITICAL** (Core/Security/**Complex UI**):
    - Requirement: Full contract (including **all @UX tags**), Graph, Invariants, Strict Logs.
-2. **STANDARD** (BizLogic/**Forms**):
+   - **@TEST_DATA**: Mandatory reference data for testing. Format:
+     ```
+     @TEST_DATA: fixture_name -> {JSON_PATH} | {INLINE_DATA}
+     ```
+     Examples:
+     - `@TEST_DATA: valid_user -> {./fixtures/users.json#valid}`
+     - `@TEST_DATA: empty_state -> {"dashboards": [], "total": 0}`
+   - The Tester Agent **MUST** use @TEST_DATA when writing tests for CRITICAL modules.
+2. **STANDARD** (BizLogic/**Forms**):
    - Requirement: Basic contract (@PURPOSE, @UX_STATE), Logs, @RELATION.
-3. **TRIVIAL** (DTO/**Atoms**):
+   - @TEST_DATA: Recommended for Complex Forms.
+3. **TRIVIAL** (DTO/**Atoms**):
    - Requirement: Anchors [DEF] and @PURPOSE only.
 
 #### VI. LOGGING (BELIEF STATE & TASK LOGS)
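To make the patch's strict-mocking and @TEST_DATA rules concrete, here is a minimal co-located unit test sketch. The service function, its `query_dashboards` call, and the `DashboardService` scenario are hypothetical illustrations (not code from this repository); the inline fixture mirrors the `empty_state` example from the protocol.

```python
import unittest.mock as mock

# @RELATION: VERIFIES -> dashboard.service   (semantic annotation per the protocol)
# Inline fixture, as would be declared via "@TEST_DATA: empty_state -> {...}".
EMPTY_STATE = {"dashboards": [], "total": 0}

def list_dashboards(db_session):
    """Stand-in for the code under test: delegates to the DB layer."""
    return db_session.query_dashboards()

def test_list_dashboards_empty_state():
    # MagicMock replaces the real DB session, per the strict-mocking rule:
    # unit tests must not instantiate real service classes or hit real services.
    db = mock.MagicMock()
    db.query_dashboards.return_value = EMPTY_STATE

    result = list_dashboards(db)

    # Assertions check behavior against the @TEST_DATA fixture, not internals.
    assert result["total"] == 0
    assert result["dashboards"] == []
    db.query_dashboards.assert_called_once()
```

Following the co-location rule, such a file would live at `src/<module>/__tests__/test_<module>.py` next to the code it verifies and be picked up by `cd backend && .venv/bin/python3 -m pytest`.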