Test logic update
@@ -1,17 +1,7 @@
-**speckit.tasks.md**
-### Modified Workflow
-
-```markdown
-description: Generate an actionable, dependency-ordered tasks.md for the feature based on available design artifacts.
-handoffs:
-  - label: Analyze For Consistency
-    agent: speckit.analyze
-    prompt: Run a project analysis for consistency
-    send: true
-  - label: Implement Project
-    agent: speckit.implement
-    prompt: Start the implementation in phases
-    send: true
---
description: Generate tests, manage test documentation, and ensure maximum code coverage
---

## User Input
@@ -22,95 +12,167 @@ $ARGUMENTS

You **MUST** consider the user input before proceeding (if not empty).

-## Outline
-
-1. **Setup**: Run `.specify/scripts/bash/check-prerequisites.sh --json` from repo root and parse FEATURE_DIR and AVAILABLE_DOCS list. All paths must be absolute.
-2. **Load design documents**: Read from FEATURE_DIR:
-   - **Required**: plan.md (tech stack, libraries, structure), spec.md (user stories with priorities), ux_reference.md (experience source of truth)
-   - **Optional**: data-model.md (entities), contracts/ (API endpoints), research.md (decisions)
-3. **Execute task generation workflow**:
-   - **Architecture Analysis (CRITICAL)**: Scan existing codebase for patterns (DI, Auth, ORM).
-   - Load plan.md/spec.md.
-   - Generate tasks organized by user story.
-   - **Apply Fractal Co-location**: Ensure all unit tests are mapped to `__tests__` subdirectories relative to the code.
-   - Validate task completeness.

## Goal

Execute the full testing cycle: analyze code for testable modules, write tests with proper coverage, maintain test documentation, and ensure no test duplication or deletion.

## Operating Constraints

1. **NEVER delete existing tests** - Only update them if they fail due to bugs in the test or the implementation
2. **NEVER duplicate tests** - Check existing tests first before creating new ones
3. **Use TEST_DATA fixtures** - For CRITICAL tier modules, read @TEST_DATA from semantic_protocol.md
4. **Co-location required** - Write tests in `__tests__` directories relative to the code being tested
-4. **Generate tasks.md**: Use `.specify/templates/tasks-template.md` as structure.
-   - Phase 1: Context & Setup.
-   - Phase 2: Foundational tasks.
-   - Phase 3+: User Stories (Priority order).
-   - Final Phase: Polish.
-   - **Strict Constraint**: Ensure tasks follow the Co-location and Mocking rules below.
-5. **Report**: Output path to generated tasks.md and summary.
-
-Context for task generation: $ARGUMENTS
-
-## Task Generation Rules

## Execution Steps

### 1. Analyze Context

Run `.specify/scripts/bash/check-prerequisites.sh --json --require-tasks --include-tasks` from repo root and parse FEATURE_DIR and AVAILABLE_DOCS.

Determine:

- FEATURE_DIR - where the feature is located
- TASKS_FILE - path to tasks.md
- Which modules need testing based on task status
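A minimal sketch of this step, assuming the script prints a single JSON object whose `FEATURE_DIR` and `AVAILABLE_DOCS` keys match the names above (the script's real output shape is not shown here, so treat the key names as assumptions):

```python
import json
import subprocess
from pathlib import Path


def analyze_context(repo_root: Path) -> dict:
    """Run the prerequisites script and collect the paths the tester needs."""
    result = subprocess.run(
        ["bash", ".specify/scripts/bash/check-prerequisites.sh",
         "--json", "--require-tasks", "--include-tasks"],
        cwd=repo_root, capture_output=True, text=True, check=True,
    )
    payload = json.loads(result.stdout)            # assumed: one JSON object on stdout
    feature_dir = Path(payload["FEATURE_DIR"])     # assumed key name
    return {
        "feature_dir": feature_dir,
        "tasks_file": feature_dir / "tasks.md",
        "available_docs": payload.get("AVAILABLE_DOCS", []),  # assumed key name
    }
```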
-**CRITICAL**: Tasks MUST be actionable, specific, architecture-aware, and context-local.
-
-### Implementation & Testing Constraints (ANTI-LOOP & CO-LOCATION)
-
-To prevent infinite debugging loops and context fragmentation, apply these rules:
-
-1. **Fractal Co-location Strategy (MANDATORY)**:
-   - **Rule**: Unit tests MUST live next to the code they verify.
-   - **Forbidden**: Do NOT create unit tests in root `tests/` or `backend/tests/`. Those are for E2E/Integration only.
-   - **Pattern (Python)**:
-     - Source: `src/domain/order/processing.py`
-     - Test Task: `Create tests in src/domain/order/__tests__/test_processing.py`
-   - **Pattern (Frontend)**:
-     - Source: `src/lib/components/UserCard.svelte`
-     - Test Task: `Create tests in src/lib/components/__tests__/UserCard.test.ts`

### 2. Load Relevant Artifacts

**From tasks.md:**

- Identify completed implementation tasks (not test tasks)
- Extract file paths that need tests

**From semantic_protocol.md:**

- Read @TIER annotations for modules
- For CRITICAL modules: Read @TEST_DATA fixtures

**From existing tests:**

- Scan `__tests__` directories for existing tests
- Identify test patterns and coverage gaps
-2. **Semantic Relations**:
-   - Test generation tasks must explicitly instruct to add the relation header: `# @RELATION: VERIFIES -> [TargetComponent]`
-3. **Strict Mocking for Unit Tests**:
-   - Any task creating Unit Tests MUST specify: *"Use `unittest.mock.MagicMock` for heavy dependencies (DB sessions, Auth). Do NOT instantiate real service classes."*
-4. **Schema/Model Separation**:
-   - Explicitly separate tasks for ORM Models (SQLAlchemy) and Pydantic Schemas.

### 3. Test Coverage Analysis

Create a coverage matrix:

| Module | File | Has Tests | TIER | TEST_DATA Available |
|--------|------|-----------|------|---------------------|
| ...    | ...  | ...       | ...  | ...                 |
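As a rough illustration of how the matrix rows could be derived, the sketch below scans Python sources for co-located `__tests__` files; the TIER and TEST_DATA columns would come from semantic_protocol.md annotations and are left as placeholders:

```python
from pathlib import Path


def coverage_matrix(src_root: Path) -> list[dict]:
    """One row per module: does a co-located test file exist next to it?"""
    rows = []
    for module in src_root.rglob("*.py"):
        if "__tests__" in module.parts or module.name == "__init__.py":
            continue  # skip tests themselves and package markers
        expected_test = module.parent / "__tests__" / f"test_{module.name}"
        rows.append({
            "module": module.stem,
            "file": str(module),
            "has_tests": expected_test.exists(),
            "tier": None,                 # read from @TIER in semantic_protocol.md
            "test_data_available": None,  # read from @TEST_DATA in semantic_protocol.md
        })
    return rows
```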
-### UX Preservation (CRITICAL)
-
-- **Source of Truth**: `ux_reference.md` is the absolute standard.
-- **Verification Task**: You **MUST** add a specific task at the end of each User Story phase: `- [ ] Txxx [USx] Verify implementation matches ux_reference.md (Happy Path & Errors)`
-
-### Checklist Format (REQUIRED)

### 4. Write Tests (TDD Approach)

For each module requiring tests:

1. **Check existing tests**: Scan `__tests__/` for duplicates
2. **Read TEST_DATA**: If CRITICAL tier, read @TEST_DATA from semantic_protocol.md
3. **Write test**: Follow the co-location strategy
   - Python: `src/module/__tests__/test_module.py`
   - Svelte: `src/lib/components/__tests__/test_component.test.js`
4. **Use mocks**: Use `unittest.mock.MagicMock` for external dependencies
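A minimal sketch of such a co-located, mocked unit test for a hypothetical OrderService; the constructor and method names are illustrative, not taken from a real codebase:

```python
# src/services/__tests__/test_order.py (co-located with src/services/order.py)
# @RELATION: VERIFIES -> OrderService
from unittest.mock import MagicMock

from src.services.order import OrderService  # assumed import path


def test_create_order_commits_via_mocked_session():
    db_session = MagicMock()                # heavy dependency (DB) is mocked, not real
    service = OrderService(db=db_session)   # assumed constructor signature

    service.create_order(customer_id=1, items=[{"sku": "A1", "qty": 2}])  # assumed API

    db_session.add.assert_called_once()
    db_session.commit.assert_called_once()
```

Keeping the test next to the module keeps context local and avoids the root-level `tests/` anti-pattern called out in this workflow.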
-Every task MUST strictly follow this format:
-
-```text
-- [ ] [TaskID] [P?] [Story?] Description with file path
-```

### 4a. UX Contract Testing (Frontend Components)

For Svelte components with `@UX_STATE`, `@UX_FEEDBACK`, `@UX_RECOVERY` tags:

1. **Parse UX tags**: Read the component file and extract all `@UX_*` annotations
2. **Generate UX tests**: Create tests for each UX state transition

   ```javascript
   // Example: Testing @UX_STATE: Idle -> Expanded
   it('should transition from Idle to Expanded on toggle click', async () => {
     render(Sidebar);
     const toggleBtn = screen.getByRole('button', { name: /toggle/i });
     await fireEvent.click(toggleBtn);
     expect(screen.getByTestId('sidebar')).toHaveClass('expanded');
   });
   ```

3. **Test @UX_FEEDBACK**: Verify visual feedback (toast, shake, color changes)
4. **Test @UX_RECOVERY**: Verify error recovery mechanisms (retry, clear input)
5. **Use @UX_TEST fixtures**: If the component has `@UX_TEST` tags, use them as test specifications
**UX Test Template:**

```javascript
// [DEF:__tests__/test_Component:Module]
// @RELATION: VERIFIES -> ../Component.svelte
// @PURPOSE: Test UX states and transitions

describe('Component UX States', () => {
  // @UX_STATE: Idle -> {action: click, expected: Active}
  it('should transition Idle -> Active on click', async () => { ... });

  // @UX_FEEDBACK: Toast on success
  it('should show toast on successful action', async () => { ... });

  // @UX_RECOVERY: Retry on error
  it('should allow retry on error', async () => { ... });
});
```
-**Examples**:
-
-- ✅ `- [ ] T005 [US1] Create unit tests for OrderService in src/services/__tests__/test_order.py (Mock DB)`
-- ✅ `- [ ] T006 [US1] Implement OrderService in src/services/order.py`
-- ❌ `- [ ] T005 [US1] Create tests in backend/tests/test_order.py` (VIOLATION: Wrong location)
-
-### Task Organization & Phase Structure
-
-**Phase 1: Context & Setup**
-
-- **Goal**: Prepare environment and understand existing patterns.
-- **Mandatory Task**: `- [ ] T001 Analyze existing project structure, auth patterns, and conftest.py location`

### 5. Test Documentation

Create/update documentation in `specs/<feature>/tests/`:

```
tests/
├── README.md     # Test strategy and overview
├── coverage.md   # Coverage matrix and reports
└── reports/
    └── YYYY-MM-DD-report.md
```
-**Phase 2: Foundational (Data & Core)**
-
-- Database Models (ORM).
-- Pydantic Schemas (DTOs).
-- Core Service interfaces.
-
-**Phase 3+: User Stories (Iterative)**
-
-- **Step 1: Isolation Tests (Co-located)**:
-  - `- [ ] Txxx [USx] Create unit tests for [Component] in [Path]/__tests__/test_[name].py`
-  - *Note: Specify using MagicMock for external deps.*
-- **Step 2: Implementation**: Services -> Endpoints.
-- **Step 3: Integration**: Wire up real dependencies (if E2E tests requested).
-- **Step 4: UX Verification**.
-
-**Final Phase: Polish**
-
-- Linting, formatting, final manual verify.

### 6. Execute Tests

Run tests and report results:

**Backend:**

```bash
cd backend && .venv/bin/python3 -m pytest -v
```

**Frontend:**

```bash
cd frontend && npm run test
```

### 7. Update Tasks

Mark test tasks as completed in tasks.md with:

- Test file path
- Coverage achieved
- Any issues found

## Output

Generate a test execution report:

```markdown
# Test Report: [FEATURE]

**Date**: [YYYY-MM-DD]
**Executed by**: Tester Agent

## Coverage Summary

| Module | Tests | Coverage % |
|--------|-------|------------|
| ...    | ...   | ...        |

## Test Results

- Total: [X]
- Passed: [X]
- Failed: [X]
- Skipped: [X]

## Issues Found

| Test | Error | Resolution |
|------|-------|------------|
| ...  | ...   | ...        |

## Next Steps

- [ ] Fix failed tests
- [ ] Add more coverage for [module]
- [ ] Review TEST_DATA fixtures
```

## Context for Testing

$ARGUMENTS
@@ -1,18 +1,31 @@
customModes:
  - slug: tester
    name: Tester
-   description: QA and Plan Verification Specialist
    description: QA and Test Engineer - Full Testing Cycle
    roleDefinition: |-
-     You are Kilo Code, acting as a QA and Verification Specialist. Your primary goal is to validate that the project implementation aligns strictly with the defined specifications and task plans.
-     Your responsibilities include: - Reading and analyzing task plans and specifications (typically in the `specs/` directory). - Verifying that implemented code matches the requirements. - Executing tests and validating system behavior via CLI or Browser. - Updating the status of tasks in the plan files (e.g., marking checkboxes [x]) as they are verified. - Identifying and reporting missing features or bugs.
      You are Kilo Code, acting as a QA and Test Engineer. Your primary goal is to ensure maximum test coverage, maintain test quality, and preserve existing tests.
      Your responsibilities include:
      - WRITING TESTS: Create comprehensive unit tests following TDD principles, using co-location strategy (`__tests__` directories).
      - TEST DATA: For CRITICAL tier modules, you MUST use @TEST_DATA fixtures defined in semantic_protocol.md. Read and apply them in your tests.
      - DOCUMENTATION: Maintain test documentation in `specs/<feature>/tests/` directory with coverage reports and test case specifications.
      - VERIFICATION: Run tests, analyze results, and ensure all tests pass.
      - PROTECTION: NEVER delete existing tests. NEVER duplicate tests - check for existing tests first.
-   whenToUse: Use this mode when you need to audit the progress of a project, verify completed tasks against the plan, run quality assurance checks, or update the status of task lists in specification documents.
    whenToUse: Use this mode when you need to write tests, run test coverage analysis, or perform quality assurance with full testing cycle.
    groups:
      - read
      - edit
      - command
      - browser
      - mcp
-   customInstructions: 1. Always begin by loading the relevant plan or task list from the `specs/` directory. 2. Do not assume a task is done just because it is checked; verify the code or functionality first if asked to audit. 3. When updating task lists, ensure you only mark items as complete if you have verified them.
    customInstructions: |
      1. CO-LOCATION: Write tests in `__tests__` subdirectories relative to the code being tested (Fractal Strategy).
      2. TEST DATA MANDATORY: For CRITICAL modules, read @TEST_DATA from semantic_protocol.md and use fixtures in tests.
      3. UX CONTRACT TESTING: For Svelte components with @UX_STATE, @UX_FEEDBACK, @UX_RECOVERY tags, create comprehensive UX tests.
      4. NO DELETION: Never delete existing tests - only update if they fail due to legitimate bugs.
      5. NO DUPLICATION: Check existing tests in `__tests__/` before creating new ones. Reuse existing test patterns.
      6. DOCUMENTATION: Create test reports in `specs/<feature>/tests/reports/YYYY-MM-DD-report.md`.
      7. COVERAGE: Aim for maximum coverage but prioritize CRITICAL and STANDARD tier modules.
      8. RUN TESTS: Execute tests using `cd backend && .venv/bin/python3 -m pytest` or `cd frontend && npm run test`.
  - slug: semantic
    name: Semantic Agent
    roleDefinition: |-
@@ -102,3 +102,14 @@ directories captured above]
|-----------|------------|-------------------------------------|
| [e.g., 4th project] | [current need] | [why 3 projects insufficient] |
| [e.g., Repository pattern] | [specific problem] | [why direct DB access insufficient] |

## Test Data Reference

> **For CRITICAL tier components, reference test fixtures from spec.md**

| Component | TIER | Fixture Name | Location |
|-----------|------|--------------|----------|
| [e.g., DashboardAPI] | CRITICAL | valid_dashboard | spec.md#test-data-fixtures |
| [e.g., TaskDrawer] | CRITICAL | task_states | spec.md#test-data-fixtures |

**Note**: Tester Agent MUST use these fixtures when writing unit tests for CRITICAL modules. See `semantic_protocol.md` for @TEST_DATA syntax.
@@ -114,3 +114,52 @@
- **SC-002**: [Measurable metric, e.g., "System handles 1000 concurrent users without degradation"]
- **SC-003**: [User satisfaction metric, e.g., "90% of users successfully complete primary task on first attempt"]
- **SC-004**: [Business metric, e.g., "Reduce support tickets related to [X] by 50%"]

---

## Test Data Fixtures *(recommended for CRITICAL components)*

<!--
Define reference/fixture data for testing CRITICAL tier components.
This data will be used by the Tester Agent when writing unit tests.
Format: JSON or YAML that matches the component's data structures.
-->

### Fixtures

```yaml
# Example fixture format
fixture_name:
  description: "Description of this test data"
  data:
    # JSON or YAML data structure
```
### Example: Dashboard API

```yaml
valid_dashboard:
  description: "Valid dashboard object for API responses"
  data:
    id: 1
    title: "Sales Report"
    slug: "sales"
    git_status:
      branch: "main"
      sync_status: "OK"
    last_task:
      task_id: "task-123"
      status: "SUCCESS"

empty_dashboards:
  description: "Empty dashboard list response"
  data:
    dashboards: []
    total: 0
    page: 1

error_not_found:
  description: "404 error response"
  data:
    detail: "Dashboard not found"
```
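For illustration only, a co-located unit test could consume the `valid_dashboard` fixture above roughly like this (the `DashboardService` name, import path, and method are assumptions, not part of the template):

```python
# e.g. src/api/__tests__/test_dashboards.py
# @RELATION: VERIFIES -> DashboardAPI
from unittest.mock import MagicMock

from src.api.dashboards import DashboardService  # assumed import path

VALID_DASHBOARD = {  # mirrors the valid_dashboard fixture defined above
    "id": 1,
    "title": "Sales Report",
    "slug": "sales",
    "git_status": {"branch": "main", "sync_status": "OK"},
    "last_task": {"task_id": "task-123", "status": "SUCCESS"},
}


def test_get_dashboard_returns_fixture_shape():
    repo = MagicMock()
    repo.get_by_slug.return_value = VALID_DASHBOARD
    service = DashboardService(repo=repo)        # assumed constructor

    dashboard = service.get_dashboard("sales")   # assumed method

    repo.get_by_slug.assert_called_once_with("sales")
    assert dashboard["git_status"]["sync_status"] == "OK"
```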
**.specify/templates/test-docs-template.md** (new file, 152 lines)

@@ -0,0 +1,152 @@
---
description: "Test documentation template for feature implementation"
---

# Test Documentation: [FEATURE NAME]

**Feature**: [Link to spec.md]
**Created**: [DATE]
**Updated**: [DATE]
**Tester**: [Agent/User Name]

---

## Overview

[Brief description of what this feature does and why testing is important]

**Test Strategy**:

- [ ] Unit Tests (co-located in `__tests__/` directories)
- [ ] Integration Tests (if needed)
- [ ] E2E Tests (if critical user flows)
- [ ] Contract Tests (for API endpoints)

---

## Test Coverage Matrix

| Module | File | Unit Tests | Coverage % | Status |
|--------|------|------------|------------|--------|
| [Module Name] | `path/to/file.py` | [x] | [XX%] | [Pass/Fail] |
| [Module Name] | `path/to/file.svelte` | [x] | [XX%] | [Pass/Fail] |
---

## Test Cases

### [Module Name]

**Target File**: `path/to/module.py`

| ID | Test Case | Type | Expected Result | Status |
|----|-----------|------|-----------------|--------|
| TC001 | [Description] | [Unit/Integration] | [Expected] | [Pass/Fail] |
| TC002 | [Description] | [Unit/Integration] | [Expected] | [Pass/Fail] |

---

## Test Execution Reports

### Report [YYYY-MM-DD]

**Executed by**: [Tester]
**Duration**: [X] minutes
**Result**: [Pass/Fail]

**Summary**:

- Total Tests: [X]
- Passed: [X]
- Failed: [X]
- Skipped: [X]

**Failed Tests**:

| Test | Error | Resolution |
|------|-------|------------|
| [Test Name] | [Error Message] | [How Fixed] |

---
## Anti-Patterns & Rules

### ✅ DO

1. Write tests BEFORE implementation (TDD approach)
2. Use co-location: `src/module/__tests__/test_module.py`
3. Use MagicMock for external dependencies (DB, Auth, APIs)
4. Include semantic annotations: `# @RELATION: VERIFIES -> module.name`
5. Test edge cases and error conditions
6. **Test UX states** for Svelte components (@UX_STATE, @UX_FEEDBACK, @UX_RECOVERY)

### ❌ DON'T

1. Delete existing tests (only update if they fail)
2. Duplicate tests - check for existing tests first
3. Test implementation details instead of behavior
4. Use real external services in unit tests
5. Skip error handling tests
6. **Skip UX contract tests** for CRITICAL frontend components
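A short sketch tying DO items 2-5 together, using a hypothetical `PaymentService` (names, import path, and the ValueError behavior are illustrative):

```python
# src/services/__tests__/test_payment.py (co-located with src/services/payment.py)
# @RELATION: VERIFIES -> services.payment
from unittest.mock import MagicMock

import pytest

from src.services.payment import PaymentService  # assumed import path


def test_charge_rejects_negative_amount():
    gateway = MagicMock()                      # external payment API mocked (DO 3)
    service = PaymentService(gateway=gateway)

    with pytest.raises(ValueError):            # error condition covered (DO 5)
        service.charge(amount=-1)

    gateway.charge.assert_not_called()         # asserts behavior, not internals
```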
---

## UX Contract Testing (Frontend)

### UX States Coverage

| Component | @UX_STATE | @UX_FEEDBACK | @UX_RECOVERY | Tests |
|-----------|-----------|--------------|--------------|-------|
| [Component] | [states] | [feedback] | [recovery] | [status] |

### UX Test Cases

| ID | Component | UX Tag | Test Action | Expected Result | Status |
|----|-----------|--------|-------------|-----------------|--------|
| UX001 | [Component] | @UX_STATE: Idle | [action] | [expected] | [Pass/Fail] |
| UX002 | [Component] | @UX_FEEDBACK | [action] | [expected] | [Pass/Fail] |
| UX003 | [Component] | @UX_RECOVERY | [action] | [expected] | [Pass/Fail] |

### UX Test Examples

```javascript
// Testing @UX_STATE transition
it('should transition from Idle to Loading on submit', async () => {
  render(FormComponent);
  await fireEvent.click(screen.getByText('Submit'));
  expect(screen.getByTestId('form')).toHaveClass('loading');
});

// Testing @UX_FEEDBACK
it('should show error toast on validation failure', async () => {
  render(FormComponent);
  await fireEvent.click(screen.getByText('Submit'));
  expect(screen.getByRole('alert')).toHaveTextContent('Validation error');
});

// Testing @UX_RECOVERY
it('should allow retry after error', async () => {
  render(FormComponent);
  // Trigger error state
  await fireEvent.click(screen.getByText('Submit'));
  // Click retry
  await fireEvent.click(screen.getByText('Retry'));
  expect(screen.getByTestId('form')).not.toHaveClass('error');
});
```

---

## Notes

- [Additional notes about testing approach]
- [Known issues or limitations]
- [Recommendations for future testing]

---

## Related Documents

- [spec.md](./spec.md)
- [plan.md](./plan.md)
- [tasks.md](./tasks.md)
- [contracts/](./contracts/)
@@ -54,16 +54,30 @@
**@UX_FEEDBACK:** The system's reaction (Toast, Shake, Red Border).
**@UX_RECOVERY:** The user's error-recovery mechanism (Retry, Clear Input).

**UX Testing Tags (for the Tester Agent):**

**@UX_TEST:** A test specification for a UX state.
Format: `@UX_TEST: [state] -> {action, expected}`
Example: `@UX_TEST: Idle -> {click: toggle, expected: isExpanded=true}`

Rule: Do not use `assert` in code; use `if/raise` or guards.

#### V. ADAPTATION (TIERS)

Determined by the `@TIER` tag in the Header.

1. **CRITICAL** (Core/Security/**Complex UI**):
   - Requirement: Full contract (including **all @UX tags**), Graph, Invariants, Strict Logs.
   - **@TEST_DATA**: Mandatory reference data for testing. Format:

     ```
     @TEST_DATA: fixture_name -> {JSON_PATH} | {INLINE_DATA}
     ```

     Examples:

     - `@TEST_DATA: valid_user -> {./fixtures/users.json#valid}`
     - `@TEST_DATA: empty_state -> {"dashboards": [], "total": 0}`

   - The Tester Agent **MUST** use @TEST_DATA when writing tests for CRITICAL modules.
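A minimal Python sketch of how a test could resolve a @TEST_DATA reference; it assumes exactly the two forms shown in the examples (a relative JSON file path with an optional `#key` anchor, or inline JSON):

```python
import json
from pathlib import Path


def load_test_data(reference: str, base_dir: Path):
    """Resolve a `@TEST_DATA: name -> {...}` reference from a module header.

    Assumed forms, per the examples above:
      {./fixtures/users.json#valid}  -> file path plus a top-level key after '#'
      {"dashboards": [], "total": 0} -> inline JSON payload
    """
    ref = reference.strip()
    if ref.startswith("{./") or ref.startswith("{/"):
        path_part = ref.strip("{}")
        file_path, _, key = path_part.partition("#")
        data = json.loads((base_dir / file_path).read_text())
        return data[key] if key else data
    return json.loads(ref)  # inline JSON payload
```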
2. **STANDARD** (BizLogic/**Forms**):
   - Requirement: Basic contract (@PURPOSE, @UX_STATE), Logs, @RELATION.
   - @TEST_DATA: Recommended for Complex Forms.
3. **TRIVIAL** (DTO/**Atoms**):
   - Requirement: Only [DEF] anchors and @PURPOSE.

#### VI. LOGGING (BELIEF STATE & TASK LOGS)