Compare commits

...

14 Commits

Author SHA1 Message Date
c2a4c8062a fix tax log 2026-02-19 16:05:59 +03:00
2c820e103a tests ready 2026-02-19 13:33:20 +03:00
c8b84b7bd7 Coder + fix workflow 2026-02-19 13:33:10 +03:00
fdb944f123 Test logic update 2026-02-19 12:44:31 +03:00
d29bc511a2 task panel 2026-02-19 09:43:01 +03:00
a3a9f0788d docs: amend constitution to v2.3.0 (tailwind css first principle) 2026-02-18 18:29:52 +03:00
77147dc95b refactor 2026-02-18 17:29:46 +03:00
026239e3bf fix 2026-02-15 11:11:30 +03:00
4a0273a604 updated specs and tasks 2026-02-10 15:53:38 +03:00
edb2dd5263 updated tasks 2026-02-10 15:04:43 +03:00
76b98fcf8f linter + new tasks 2026-02-10 12:53:01 +03:00
794cc55fe7 Tasks ready 2026-02-09 12:35:27 +03:00
235b0e3c9f semantic update 2026-02-08 22:53:54 +03:00
e6087bd3c1 tasks ready 2026-02-07 12:42:32 +03:00
192 changed files with 85230 additions and 64023 deletions

.gitignore vendored
View File

@@ -10,8 +10,6 @@ dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
@@ -69,3 +67,4 @@ backend/tasks.db
backend/logs
backend/auth.db
semantics/reports
backend/tasks.db

View File

@@ -33,6 +33,8 @@ Auto-generated from all feature plans. Last updated: 2025-12-19
- N/A (UI reorganization and API integration) (015-frontend-nav-redesign)
- SQLite (`auth.db`) for Users, Roles, Permissions, and Mappings. (016-multi-user-auth)
- SQLite (existing `tasks.db` for results, `auth.db` for permissions, `mappings.db` or new `plugins.db` for provider config/metadata) (017-llm-analysis-plugin)
- Python 3.9+ (Backend), Node.js 18+ (Frontend) + FastAPI, SvelteKit, Tailwind CSS, SQLAlchemy, WebSocket (existing) (019-superset-ux-redesign)
- SQLite (tasks.db, auth.db, migrations.db) - no new database tables required (019-superset-ux-redesign)
- Python 3.9+ (Backend), Node.js 18+ (Frontend Build) (001-plugin-arch-svelte-ui)
@@ -53,9 +55,9 @@ cd src; pytest; ruff check .
Python 3.9+ (Backend), Node.js 18+ (Frontend Build): Follow standard conventions
## Recent Changes
- 019-superset-ux-redesign: Added Python 3.9+ (Backend), Node.js 18+ (Frontend) + FastAPI, SvelteKit, Tailwind CSS, SQLAlchemy, WebSocket (existing)
- 017-llm-analysis-plugin: Added Python 3.9+ (Backend), Node.js 18+ (Frontend)
- 016-multi-user-auth: Added Python 3.9+ (Backend), Node.js 18+ (Frontend)
- 015-frontend-nav-redesign: Added Python 3.9+ (Backend), Node.js 18+ (Frontend) + FastAPI (Backend), SvelteKit + Tailwind CSS (Frontend)
<!-- MANUAL ADDITIONS START -->

View File

@@ -0,0 +1,199 @@
---
description: Fix failing tests and implementation issues based on test reports
---
## User Input
```text
$ARGUMENTS
```
You **MUST** consider the user input before proceeding (if not empty).
## Goal
Analyze test failure reports, identify root causes, and fix implementation issues while preserving semantic protocol compliance.
## Operating Constraints
1. **USE CODER MODE**: Always switch to `coder` mode for code fixes
2. **SEMANTIC PROTOCOL**: Never remove semantic annotations ([DEF], @TAGS). Only update code logic.
3. **TEST DATA**: If tests use @TEST_DATA fixtures, preserve them when fixing
4. **NO DELETION**: Never delete existing tests or semantic annotations
5. **REPORT FIRST**: Always write a fix report before making changes
## Execution Steps
### 1. Load Test Report
**Required**: Test report file path (e.g., `specs/<feature>/tests/reports/2026-02-19-report.md`)
**Parse the report for**:
- Failed test cases
- Error messages
- Stack traces
- Expected vs actual behavior
- Affected modules/files
### 2. Analyze Root Causes
For each failed test:
1. **Read the test file** to understand what it's testing
2. **Read the implementation file** to find the bug
3. **Check semantic protocol compliance**:
- Does the implementation have correct [DEF] anchors?
- Are @TAGS (@PRE, @POST, @UX_STATE, etc.) present?
- Does the code match the TIER requirements?
4. **Identify the fix**:
- Logic error in implementation
- Missing error handling
- Incorrect API usage
- State management issue
### 3. Write Fix Report
Create a structured fix report:
```markdown
# Fix Report: [FEATURE]
**Date**: [YYYY-MM-DD]
**Report**: [Test Report Path]
**Fixer**: Coder Agent
## Summary
- Total Failed Tests: [X]
- Total Fixed: [X]
- Total Skipped: [X]
## Failed Tests Analysis
### Test: [Test Name]
**File**: `path/to/test.py`
**Error**: [Error message]
**Root Cause**: [Explanation of why test failed]
**Fix Required**: [Description of fix]
**Status**: [Pending/In Progress/Completed]
## Fixes Applied
### Fix 1: [Description]
**Affected File**: `path/to/file.py`
**Test Affected**: `[Test Name]`
**Changes**:
```diff
<<<<<<< SEARCH
[Original Code]
=======
[Fixed Code]
>>>>>>> REPLACE
```
**Verification**: [How to verify fix works]
**Semantic Integrity**: [Confirmed annotations preserved]
## Next Steps
- [ ] Run tests to verify fix: `cd backend && .venv/bin/python3 -m pytest`
- [ ] Check for related failing tests
- [ ] Update test documentation if needed
```
### 4. Apply Fixes (in Coder Mode)
Switch to `coder` mode and apply fixes:
1. **Read the implementation file** to get exact content
2. **Apply the fix** using apply_diff
3. **Preserve all semantic annotations**:
- Keep [DEF:...] and [/DEF:...] anchors
- Keep all @TAGS (@PURPOSE, @LAYER, @TIER, @RELATION, @PRE, @POST, @UX_STATE, @UX_FEEDBACK, @UX_RECOVERY)
4. **Only update code logic** to fix the bug
5. **Run tests** to verify the fix
### 5. Verification
After applying fixes:
1. **Run tests**:
```bash
cd backend && .venv/bin/python3 -m pytest -v
```
or
```bash
cd frontend && npm run test
```
2. **Check test results**:
- Failed tests should now pass
- No new tests should fail
- Coverage should not decrease
3. **Update fix report** with results:
- Mark fixes as completed
- Add verification steps
- Note any remaining issues
## Output
Generate final fix report:
```markdown
# Fix Report: [FEATURE] - COMPLETED
**Date**: [YYYY-MM-DD]
**Report**: [Test Report Path]
**Fixer**: Coder Agent
## Summary
- Total Failed Tests: [X]
- Total Fixed: [X] ✅
- Total Skipped: [X]
## Fixes Applied
### Fix 1: [Description] ✅
**Affected File**: `path/to/file.py`
**Test Affected**: `[Test Name]`
**Changes**: [Summary of changes]
**Verification**: All tests pass ✅
**Semantic Integrity**: Preserved ✅
## Test Results
```
[Full test output showing all passing tests]
```
## Recommendations
- [ ] Monitor for similar issues
- [ ] Update documentation if needed
- [ ] Consider adding more tests for edge cases
## Related Files
- Test Report: [path]
- Implementation: [path]
- Test File: [path]
```
## Context for Fixing
$ARGUMENTS

View File

@@ -117,7 +117,8 @@ You **MUST** consider the user input before proceeding (if not empty).
- **Validation checkpoints**: Verify each phase completion before proceeding
7. Implementation execution rules:
- **Strict Adherence**: Apply `semantic_protocol.md` rules - every file must start with [DEF] header, include @TIER, and define contracts
- **Strict Adherence**: Apply `semantic_protocol.md` rules - every file must start with [DEF] header, include @TIER, and define contracts.
- **CRITICAL Contracts**: If a task description contains a contract summary (e.g., `CRITICAL: PRE: ..., POST: ...`), these constraints are **MANDATORY** and must be strictly implemented in the code using guards/assertions (if applicable per protocol).
- **Setup first**: Initialize project structure, dependencies, configuration
- **Tests before code**: If tests are requested, write tests for contracts, entities, and integration scenarios before implementation
- **Core development**: Implement models, services, CLI commands, endpoints
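A minimal sketch of the "CRITICAL Contracts" rule above: a contract summary from a task description (e.g. `PRE: token exists, POST: returns User`) is enforced with `if`/`raise` guards rather than `assert`. All names below (`User`, `decode_token`, the module path) are hypothetical illustrations, not project APIs.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class User:  # hypothetical stand-in for the project's real User model
    username: str

def decode_token(token: str) -> Optional[User]:
    # hypothetical decoder; a real implementation would validate a JWT here
    return User(username="demo") if token == "valid" else None

# [DEF:src.auth.get_current_user:Function]
# @PRE: token exists
# @POST: returns User
def get_current_user(token: str) -> User:
    # @PRE guard: reject a missing or empty token (if/raise, never assert)
    if not token:
        raise ValueError("Contract violation (@PRE): token must exist")
    user = decode_token(token)
    # @POST guard: guarantee a User instance is returned, never None
    if user is None:
        raise RuntimeError("Contract violation (@POST): expected a User instance")
    return user
# [/DEF:src.auth.get_current_user:Function]
```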

View File

@@ -66,25 +66,30 @@ You **MUST** consider the user input before proceeding (if not empty).
0. **Validate Design against UX Reference**:
- Check if the proposed architecture supports the latency, interactivity, and flow defined in `ux_reference.md`.
- **CRITICAL**: If the technical plan requires compromising the UX defined in `ux_reference.md` (e.g. "We can't do real-time validation because X"), you **MUST STOP** and warn the user. Do not proceed until resolved.
- **Linkage**: Ensure key UI states from `ux_reference.md` map to Component Contracts (`@UX_STATE`).
- **CRITICAL**: If the technical plan compromises the UX (e.g. "We can't do real-time validation"), you **MUST STOP** and warn the user.
1. **Extract entities from feature spec** → `data-model.md`:
- Entity name, fields, relationships
- Validation rules from requirements
- State transitions if applicable
- Entity name, fields, relationships, validation rules.
2. **Define Module & Function Contracts (Semantic Protocol)**:
- **MANDATORY**: For every new module, define the [DEF] Header and Module-level Contract (@TIER, @PURPOSE, @INVARIANT) as per `semantic_protocol.md`.
- **REQUIRED**: Define Function Contracts (@PRE, @POST) for critical logic.
- Output specific contract definitions to `contracts/modules.md` or append to `data-model.md` to guide implementation.
- Ensure strict adherence to `semantic_protocol.md` syntax.
2. **Design & Verify Contracts (Semantic Protocol)**:
- **Drafting**: Define [DEF] Headers and Contracts for all new modules based on `semantic_protocol.md`.
- **TIER Classification**: Explicitly assign `@TIER: [CRITICAL|STANDARD|TRIVIAL]` to each module.
- **CRITICAL Requirements**: For all CRITICAL modules, define full `@PRE`, `@POST`, and (if UI) `@UX_STATE` contracts.
- **Self-Review**:
- *Completeness*: Do `@PRE`/`@POST` cover edge cases identified in Research?
- *Connectivity*: Do `@RELATION` tags form a coherent graph?
- *Compliance*: Does syntax match `[DEF:id:Type]` exactly?
- **Output**: Write verified contracts to `contracts/modules.md`.
3. **Generate API contracts** from functional requirements:
- For each user action → endpoint
- Use standard REST/GraphQL patterns
- Output OpenAPI/GraphQL schema to `/contracts/`
3. **Simulate Contract Usage**:
- Trace one key user scenario through the defined contracts to ensure data flow continuity.
- If a contract interface mismatch is found, fix it immediately.
3. **Agent context update**:
4. **Generate API contracts**:
- Output OpenAPI/GraphQL schema to `/contracts/` for backend-frontend sync.
5. **Agent context update**:
- Run `.specify/scripts/bash/update-agent-context.sh kilocode`
- These scripts detect which AI agent is in use
- Update the appropriate agent-specific context file

View File

@@ -119,7 +119,10 @@ Every task MUST strictly follow this format:
- If tests requested: Tests specific to that story
- Mark story dependencies (most stories should be independent)
2. **From Contracts**:
2. **From Contracts (CRITICAL TIER)**:
- Identify components marked as `@TIER: CRITICAL` in `contracts/modules.md`.
- For these components, **MUST** append the summary of `@PRE`, `@POST`, and `@UX_STATE` contracts directly to the task description.
- Example: `- [ ] T005 [P] [US1] Implement Auth (CRITICAL: PRE: token exists, POST: returns User) in src/auth.py`
- Map each contract/endpoint → to the user story it serves
- If tests requested: Each contract → contract test task [P] before implementation in that story's phase

View File

@@ -1,10 +1,7 @@
---
description: Run semantic validation and functional tests for a specific feature, module, or file.
handoffs:
- label: Fix Implementation
agent: speckit.implement
prompt: Fix the issues found during testing...
send: true
description: Generate tests, manage test documentation, and ensure maximum code coverage
---
## User Input
@@ -13,54 +10,169 @@ handoffs:
$ARGUMENTS
```
**Input format:** Can be a file path, a directory, or a feature name.
You **MUST** consider the user input before proceeding (if not empty).
## Outline
## Goal
1. **Context Analysis**:
- Determine the target scope (Backend vs Frontend vs Full Feature).
- Read `semantic_protocol.md` to load validation rules.
Execute full testing cycle: analyze code for testable modules, write tests with proper coverage, maintain test documentation, and ensure no test duplication or deletion.
2. **Phase 1: Semantic Static Analysis (The "Compiler" Check)**
- **Command:** Use `grep` or script to verify Protocol compliance before running code.
- **Check:**
- Does the file start with `[DEF:...]` header?
- Are `@TIER` and `@PURPOSE` defined?
- Are imports located *after* the contracts?
- Do functions marked "Critical" have `@PRE`/`@POST` tags?
- **Action:** If this phase fails, **STOP** and report "Semantic Compilation Failed". Do not run runtime tests.
## Operating Constraints
3. **Phase 2: Environment Prep**
- Detect project type:
- **Python**: Check if `.venv` is active.
- **Svelte**: Check if `node_modules` exists.
- **Command:** Run linter (e.g., `ruff check`, `eslint`) to catch syntax errors immediately.
1. **NEVER delete existing tests** - Only update if they fail due to bugs in the test or implementation
2. **NEVER duplicate tests** - Check existing tests first before creating new ones
3. **Use TEST_DATA fixtures** - For CRITICAL tier modules, read @TEST_DATA from semantic_protocol.md
4. **Co-location required** - Write tests in `__tests__` directories relative to the code being tested
4. **Phase 3: Test Execution (Runtime)**
- Select the test runner based on the file path:
- **Backend (`*.py`)**:
- Command: `pytest <path_to_test_file> -v`
- If no specific test file exists, try to find it by convention: `tests/test_<module_name>.py`.
- **Frontend (`*.svelte`, `*.ts`)**:
- Command: `npm run test -- <path_to_component>`
- **Verification**:
- Analyze output logs.
- If tests fail, summarize the failure (AssertionError, Timeout, etc.).
## Execution Steps
5. **Phase 4: Contract Coverage Check (Manual/LLM verify)**
- Review the test cases executed.
- **Question**: Do the tests explicitly verify the `@POST` guarantees defined in the module header?
- **Report**: Mark as "Weak Coverage" if contracts exist but aren't tested.
### 1. Analyze Context
## Execution Rules
Run `.specify/scripts/bash/check-prerequisites.sh --json --require-tasks --include-tasks` from repo root and parse FEATURE_DIR and AVAILABLE_DOCS.
- **Fail Fast**: If semantic headers are missing, don't waste time running pytest.
- **No Silent Failures**: Always output the full error log if a command fails.
- **Auto-Correction Hint**: If a test fails, suggest the specific `speckit.implement` command to fix it.
Determine:
- FEATURE_DIR - where the feature is located
- TASKS_FILE - path to tasks.md
- Which modules need testing based on task status
## Example Commands
### 2. Load Relevant Artifacts
- **Python**: `pytest backend/tests/test_auth.py`
- **Svelte**: `npm run test:unit -- src/components/Button.svelte`
- **Lint**: `ruff check backend/src/api/`
**From tasks.md:**
- Identify completed implementation tasks (not test tasks)
- Extract file paths that need tests
**From semantic_protocol.md:**
- Read @TIER annotations for modules
- For CRITICAL modules: Read @TEST_DATA fixtures
**From existing tests:**
- Scan `__tests__` directories for existing tests
- Identify test patterns and coverage gaps
### 3. Test Coverage Analysis
Create coverage matrix:
| Module | File | Has Tests | TIER | TEST_DATA Available |
|--------|------|-----------|------|-------------------|
| ... | ... | ... | ... | ... |
### 4. Write Tests (TDD Approach)
For each module requiring tests:
1. **Check existing tests**: Scan `__tests__/` for duplicates
2. **Read TEST_DATA**: If CRITICAL tier, read @TEST_DATA from semantic_protocol.md
3. **Write test**: Follow co-location strategy
- Python: `src/module/__tests__/test_module.py`
- Svelte: `src/lib/components/__tests__/test_component.test.js`
4. **Use mocks**: Use `unittest.mock.MagicMock` for external dependencies
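The steps above can be illustrated with a minimal co-located test sketch; the service module, function, and fixture value are hypothetical and stand in for a real `@TEST_DATA` fixture, while the `MagicMock` usage and `__tests__` placement follow the strategy described here.

```python
# Hypothetical co-located file: src/services/__tests__/test_pricing_service.py
# [DEF:src.services.__tests__.test_pricing_service:Module]
# @PURPOSE: Illustrative unit test using MagicMock and a fixture-style value
# @RELATION: VERIFIES -> src.services.pricing_service
from unittest.mock import MagicMock

# Stand-in for the unit under test; in the real layout it would live in
# src/services/pricing_service.py next to this __tests__ directory.
def calculate_total(amount: float, rate_repo) -> float:
    return amount * (1 + rate_repo.get_rate())

TAX_RATE_FIXTURE = 0.2  # stand-in for a @TEST_DATA fixture value

def test_calculate_total_applies_rate_from_repository():
    repo = MagicMock()                        # mock the external dependency (no real DB)
    repo.get_rate.return_value = TAX_RATE_FIXTURE
    assert calculate_total(100, repo) == 120.0
    repo.get_rate.assert_called_once()
# [/DEF:src.services.__tests__.test_pricing_service:Module]
```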
### 4a. UX Contract Testing (Frontend Components)
For Svelte components with `@UX_STATE`, `@UX_FEEDBACK`, `@UX_RECOVERY` tags:
1. **Parse UX tags**: Read component file and extract all `@UX_*` annotations
2. **Generate UX tests**: Create tests for each UX state transition
```javascript
// Example: Testing @UX_STATE: Idle -> Expanded
it('should transition from Idle to Expanded on toggle click', async () => {
render(Sidebar);
const toggleBtn = screen.getByRole('button', { name: /toggle/i });
await fireEvent.click(toggleBtn);
expect(screen.getByTestId('sidebar')).toHaveClass('expanded');
});
```
3. **Test @UX_FEEDBACK**: Verify visual feedback (toast, shake, color changes)
4. **Test @UX_RECOVERY**: Verify error recovery mechanisms (retry, clear input)
5. **Use @UX_TEST fixtures**: If component has `@UX_TEST` tags, use them as test specifications
**UX Test Template:**
```javascript
// [DEF:__tests__/test_Component:Module]
// @RELATION: VERIFIES -> ../Component.svelte
// @PURPOSE: Test UX states and transitions
describe('Component UX States', () => {
// @UX_STATE: Idle -> {action: click, expected: Active}
it('should transition Idle -> Active on click', async () => { ... });
// @UX_FEEDBACK: Toast on success
it('should show toast on successful action', async () => { ... });
// @UX_RECOVERY: Retry on error
it('should allow retry on error', async () => { ... });
});
```
### 5. Test Documentation
Create/update documentation in `specs/<feature>/tests/`:
```
tests/
├── README.md # Test strategy and overview
├── coverage.md # Coverage matrix and reports
└── reports/
└── YYYY-MM-DD-report.md
```
### 6. Execute Tests
Run tests and report results:
**Backend:**
```bash
cd backend && .venv/bin/python3 -m pytest -v
```
**Frontend:**
```bash
cd frontend && npm run test
```
### 7. Update Tasks
Mark test tasks as completed in tasks.md with:
- Test file path
- Coverage achieved
- Any issues found
## Output
Generate test execution report:
```markdown
# Test Report: [FEATURE]
**Date**: [YYYY-MM-DD]
**Executed by**: Tester Agent
## Coverage Summary
| Module | Tests | Coverage % |
|--------|-------|------------|
| ... | ... | ... |
## Test Results
- Total: [X]
- Passed: [X]
- Failed: [X]
- Skipped: [X]
## Issues Found
| Test | Error | Resolution |
|------|-------|------------|
| ... | ... | ... |
## Next Steps
- [ ] Fix failed tests
- [ ] Add more coverage for [module]
- [ ] Review TEST_DATA fixtures
```
## Context for Testing
$ARGUMENTS

View File

@@ -1,18 +1,31 @@
customModes:
- slug: tester
name: Tester
description: QA and Plan Verification Specialist
description: QA and Test Engineer - Full Testing Cycle
roleDefinition: |-
You are Kilo Code, acting as a QA and Verification Specialist. Your primary goal is to validate that the project implementation aligns strictly with the defined specifications and task plans.
Your responsibilities include: - Reading and analyzing task plans and specifications (typically in the `specs/` directory). - Verifying that implemented code matches the requirements. - Executing tests and validating system behavior via CLI or Browser. - Updating the status of tasks in the plan files (e.g., marking checkboxes [x]) as they are verified. - Identifying and reporting missing features or bugs.
whenToUse: Use this mode when you need to audit the progress of a project, verify completed tasks against the plan, run quality assurance checks, or update the status of task lists in specification documents.
You are Kilo Code, acting as a QA and Test Engineer. Your primary goal is to ensure maximum test coverage, maintain test quality, and preserve existing tests.
Your responsibilities include:
- WRITING TESTS: Create comprehensive unit tests following TDD principles, using co-location strategy (`__tests__` directories).
- TEST DATA: For CRITICAL tier modules, you MUST use @TEST_DATA fixtures defined in semantic_protocol.md. Read and apply them in your tests.
- DOCUMENTATION: Maintain test documentation in `specs/<feature>/tests/` directory with coverage reports and test case specifications.
- VERIFICATION: Run tests, analyze results, and ensure all tests pass.
- PROTECTION: NEVER delete existing tests. NEVER duplicate tests - check for existing tests first.
whenToUse: Use this mode when you need to write tests, run test coverage analysis, or perform quality assurance with full testing cycle.
groups:
- read
- edit
- command
- browser
- mcp
customInstructions: 1. Always begin by loading the relevant plan or task list from the `specs/` directory. 2. Do not assume a task is done just because it is checked; verify the code or functionality first if asked to audit. 3. When updating task lists, ensure you only mark items as complete if you have verified them.
customInstructions: |
1. CO-LOCATION: Write tests in `__tests__` subdirectories relative to the code being tested (Fractal Strategy).
2. TEST DATA MANDATORY: For CRITICAL modules, read @TEST_DATA from semantic_protocol.md and use fixtures in tests.
3. UX CONTRACT TESTING: For Svelte components with @UX_STATE, @UX_FEEDBACK, @UX_RECOVERY tags, create comprehensive UX tests.
4. NO DELETION: Never delete existing tests - only update if they fail due to legitimate bugs.
5. NO DUPLICATION: Check existing tests in `__tests__/` before creating new ones. Reuse existing test patterns.
6. DOCUMENTATION: Create test reports in `specs/<feature>/tests/reports/YYYY-MM-DD-report.md`.
7. COVERAGE: Aim for maximum coverage but prioritize CRITICAL and STANDARD tier modules.
8. RUN TESTS: Execute tests using `cd backend && .venv/bin/python3 -m pytest` or `cd frontend && npm run test`.
- slug: semantic
name: Semantic Agent
roleDefinition: |-
@@ -33,7 +46,7 @@ customModes:
name: Product Manager
roleDefinition: |-
Your purpose is to rigorously execute the workflows defined in `.kilocode/workflows/`.
You act as the orchestrator for: - Specification (`speckit.specify`, `speckit.clarify`) - Planning (`speckit.plan`) - Task Management (`speckit.tasks`, `speckit.taskstoissues`) - Quality Assurance (`speckit.analyze`, `speckit.checklist`) - Governance (`speckit.constitution`) - Implementation Oversight (`speckit.implement`)
You act as the orchestrator for: - Specification (`speckit.specify`, `speckit.clarify`) - Planning (`speckit.plan`) - Task Management (`speckit.tasks`, `speckit.taskstoissues`) - Quality Assurance (`speckit.analyze`, `speckit.checklist`, `speckit.test`, `speckit.fix`) - Governance (`speckit.constitution`) - Implementation Oversight (`speckit.implement`)
For each task, you must read the relevant workflow file from `.kilocode/workflows/` and follow its Execution Steps precisely.
whenToUse: Use this mode when you need to run any /speckit.* command or when dealing with high-level feature planning, specification writing, or project management tasks.
description: Executes SpecKit workflows for feature management
@@ -44,3 +57,26 @@ customModes:
- command
- mcp
source: project
- slug: coder
name: Coder
roleDefinition: You are Kilo Code, acting as an Implementation Specialist. Your primary goal is to write code that strictly follows the Semantic Protocol defined in `semantic_protocol.md`.
whenToUse: Use this mode when you need to implement features, write code, or fix issues based on test reports.
description: Implementation Specialist - Semantic Protocol Compliant
customInstructions: |
1. SEMANTIC PROTOCOL: ALWAYS use semantic_protocol.md as your single source of truth.
2. ANCHOR FORMAT: Use #[DEF:filename:Type] at start and #[/DEF:filename] at end.
3. TAGS: Add @PURPOSE, @LAYER, @TIER, @RELATION, @PRE, @POST, @UX_STATE, @UX_FEEDBACK, @UX_RECOVERY.
4. TIER COMPLIANCE:
- CRITICAL: Full contract + all UX tags + strict logging
- STANDARD: Basic contract + UX tags where applicable
- TRIVIAL: Only anchors + @PURPOSE
5. CODE SIZE: Keep modules under 300 lines. Refactor if exceeding.
6. ERROR HANDLING: Use if/raise or guards, never assert.
7. TEST FIXES: When fixing failing tests, preserve semantic annotations. Only update code logic.
8. RUN TESTS: After fixes, run tests to verify: `cd backend && .venv/bin/python3 -m pytest` or `cd frontend && npm run test`.
groups:
- read
- edit
- command
- mcp
source: project
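As a minimal illustration of the anchor and tag rules the coder mode prescribes, the following hypothetical STANDARD-tier module sketch shows the expected layout (the module path, purpose, and logic are invented for the example; the guard uses `if`/`raise` per rule 6):

```python
# [DEF:backend.src.services.slug_service:Module]
# @TIER: STANDARD
# @PURPOSE: Hypothetical example module showing the semantic anchor/tag layout
# @LAYER: Service
# @RELATION: DEPENDS_ON -> backend.src.core.logger

# [DEF:slugify:Function]
# @PURPOSE: Convert a title into a URL-safe slug
# @PRE: title is a non-empty string
# @POST: returns a lowercase slug containing only [a-z0-9-]
def slugify(title: str) -> str:
    if not title:  # @PRE guard (if/raise, never assert)
        raise ValueError("title must be non-empty")
    slug = "-".join(title.lower().split())
    return "".join(ch for ch in slug if ch.isalnum() or ch == "-")
# [/DEF:slugify:Function]
# [/DEF:backend.src.services.slug_service:Module]
```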

View File

@@ -1,9 +1,9 @@
<!--
SYNC IMPACT REPORT
Version: 2.2.0 (ConfigManager Discipline)
Version: 2.3.0 (Tailwind CSS & Scoped CSS Minimization)
Changes:
- Updated Principle II: Added mandatory requirement for using `ConfigManager` (via dependency injection) for all configuration access to ensure consistent environment handling and avoid hardcoded values.
- Updated Principle III: Refined `requestApi` requirement.
- Updated Principle III: Added mandatory requirement for Tailwind CSS usage and minimization of scoped `<style>` blocks in Svelte components.
- Version bump: 2.2.0 -> 2.3.0 (Minor: New material guidance added).
Templates Status:
- .specify/templates/plan-template.md: ✅ Aligned.
- .specify/templates/spec-template.md: ✅ Aligned.
@@ -27,6 +27,7 @@ All functional extensions, tools, or major features must be implemented as modul
### III. Unified Frontend Experience
To ensure a consistent and accessible user experience, all frontend implementations must strictly adhere to the unified design and localization standards.
- **Component Reusability**: All UI elements MUST utilize the standardized Svelte component library (`src/lib/ui`) and centralized design tokens.
- **Tailwind CSS First**: All styling MUST be implemented using Tailwind CSS utility classes. The use of scoped `<style>` blocks in Svelte components MUST be minimized and reserved only for complex animations or third-party overrides that cannot be achieved via Tailwind.
- **Internationalization (i18n)**: All user-facing text MUST be extracted to the translation system (`src/lib/i18n`).
- **Backend Communication**: All API requests MUST use the `requestApi` wrapper (or its derivatives like `fetchApi`, `postApi`) from `src/lib/api.js`. Direct use of the native `fetch` API for backend communication is FORBIDDEN to ensure consistent authentication (JWT) and error handling.
@@ -52,4 +53,4 @@ This Constitution establishes the "Semantic Code Generation Protocol" as the sup
- **Amendments**: Changes to core principles require a Constitution amendment. Changes to technical syntax require a Protocol update.
- **Compliance**: Failure to adhere to the Protocol constitutes a build failure.
**Version**: 2.2.0 | **Ratified**: 2025-12-19 | **Last Amended**: 2026-01-29
**Version**: 2.3.0 | **Ratified**: 2025-12-19 | **Last Amended**: 2026-02-18

View File

@@ -17,8 +17,8 @@
the iteration process.
-->
**Language/Version**: [e.g., Python 3.11, Swift 5.9, Rust 1.75 or NEEDS CLARIFICATION]
**Primary Dependencies**: [e.g., FastAPI, UIKit, LLVM or NEEDS CLARIFICATION]
**Language/Version**: [e.g., Python 3.11, Swift 5.9, Rust 1.75 or NEEDS CLARIFICATION]
**Primary Dependencies**: [e.g., FastAPI, Tailwind CSS, SvelteKit or NEEDS CLARIFICATION]
**Storage**: [if applicable, e.g., PostgreSQL, CoreData, files or N/A]
**Testing**: [e.g., pytest, XCTest, cargo test or NEEDS CLARIFICATION]
**Target Platform**: [e.g., Linux server, iOS 15+, WASM or NEEDS CLARIFICATION]
@@ -102,3 +102,14 @@ directories captured above]
|-----------|------------|-------------------------------------|
| [e.g., 4th project] | [current need] | [why 3 projects insufficient] |
| [e.g., Repository pattern] | [specific problem] | [why direct DB access insufficient] |
## Test Data Reference
> **For CRITICAL tier components, reference test fixtures from spec.md**
| Component | TIER | Fixture Name | Location |
|-----------|------|--------------|----------|
| [e.g., DashboardAPI] | CRITICAL | valid_dashboard | spec.md#test-data-fixtures |
| [e.g., TaskDrawer] | CRITICAL | task_states | spec.md#test-data-fixtures |
**Note**: Tester Agent MUST use these fixtures when writing unit tests for CRITICAL modules. See `semantic_protocol.md` for @TEST_DATA syntax.

View File

@@ -114,3 +114,52 @@
- **SC-002**: [Measurable metric, e.g., "System handles 1000 concurrent users without degradation"]
- **SC-003**: [User satisfaction metric, e.g., "90% of users successfully complete primary task on first attempt"]
- **SC-004**: [Business metric, e.g., "Reduce support tickets related to [X] by 50%"]
---
## Test Data Fixtures *(recommended for CRITICAL components)*
<!--
Define reference/fixture data for testing CRITICAL tier components.
This data will be used by the Tester Agent when writing unit tests.
Format: JSON or YAML that matches the component's data structures.
-->
### Fixtures
```yaml
# Example fixture format
fixture_name:
description: "Description of this test data"
data:
# JSON or YAML data structure
```
### Example: Dashboard API
```yaml
valid_dashboard:
description: "Valid dashboard object for API responses"
data:
id: 1
title: "Sales Report"
slug: "sales"
git_status:
branch: "main"
sync_status: "OK"
last_task:
task_id: "task-123"
status: "SUCCESS"
empty_dashboards:
description: "Empty dashboard list response"
data:
dashboards: []
total: 0
page: 1
error_not_found:
description: "404 error response"
data:
detail: "Dashboard not found"
```
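A brief sketch (not project code) of how a tester might feed the `valid_dashboard` fixture above into a mocked async service; the `AsyncMock` pattern mirrors the backend route tests elsewhere in this diff, and `make_mock_resource_service` is an invented helper name.

```python
from unittest.mock import AsyncMock

# Mirrors the valid_dashboard fixture defined above in spec.md
VALID_DASHBOARD = {
    "id": 1,
    "title": "Sales Report",
    "slug": "sales",
    "git_status": {"branch": "main", "sync_status": "OK"},
    "last_task": {"task_id": "task-123", "status": "SUCCESS"},
}

def make_mock_resource_service() -> AsyncMock:
    # Awaiting the mocked method resolves to the fixture payload
    service = AsyncMock()
    service.get_dashboards_with_status.return_value = [VALID_DASHBOARD]
    return service
```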

View File

@@ -93,7 +93,8 @@ Examples of foundational tasks (adjust based on your project):
- [ ] T014 [US1] Implement [Service] in src/services/[service].py (depends on T012, T013)
- [ ] T015 [US1] Implement [endpoint/feature] in src/[location]/[file].py
- [ ] T016 [US1] Add validation and error handling
- [ ] T017 [US1] Add logging for user story 1 operations
- [ ] T017 [US1] [P] Implement UI using Tailwind CSS (minimize scoped styles)
- [ ] T018 [US1] Add logging for user story 1 operations
**Checkpoint**: At this point, User Story 1 should be fully functional and testable independently

View File

@@ -0,0 +1,152 @@
---
description: "Test documentation template for feature implementation"
---
# Test Documentation: [FEATURE NAME]
**Feature**: [Link to spec.md]
**Created**: [DATE]
**Updated**: [DATE]
**Tester**: [Agent/User Name]
---
## Overview
[Brief description of what this feature does and why testing is important]
**Test Strategy**:
- [ ] Unit Tests (co-located in `__tests__/` directories)
- [ ] Integration Tests (if needed)
- [ ] E2E Tests (if critical user flows)
- [ ] Contract Tests (for API endpoints)
---
## Test Coverage Matrix
| Module | File | Unit Tests | Coverage % | Status |
|--------|------|------------|------------|--------|
| [Module Name] | `path/to/file.py` | [x] | [XX%] | [Pass/Fail] |
| [Module Name] | `path/to/file.svelte` | [x] | [XX%] | [Pass/Fail] |
---
## Test Cases
### [Module Name]
**Target File**: `path/to/module.py`
| ID | Test Case | Type | Expected Result | Status |
|----|-----------|------|------------------|--------|
| TC001 | [Description] | [Unit/Integration] | [Expected] | [Pass/Fail] |
| TC002 | [Description] | [Unit/Integration] | [Expected] | [Pass/Fail] |
---
## Test Execution Reports
### Report [YYYY-MM-DD]
**Executed by**: [Tester]
**Duration**: [X] minutes
**Result**: [Pass/Fail]
**Summary**:
- Total Tests: [X]
- Passed: [X]
- Failed: [X]
- Skipped: [X]
**Failed Tests**:
| Test | Error | Resolution |
|------|-------|-------------|
| [Test Name] | [Error Message] | [How Fixed] |
---
## Anti-Patterns & Rules
### ✅ DO
1. Write tests BEFORE implementation (TDD approach)
2. Use co-location: `src/module/__tests__/test_module.py`
3. Use MagicMock for external dependencies (DB, Auth, APIs)
4. Include semantic annotations: `# @RELATION: VERIFIES -> module.name`
5. Test edge cases and error conditions
6. **Test UX states** for Svelte components (@UX_STATE, @UX_FEEDBACK, @UX_RECOVERY)
### ❌ DON'T
1. Delete existing tests (only update if they fail)
2. Duplicate tests - check for existing tests first
3. Test implementation details instead of behavior
4. Use real external services in unit tests
5. Skip error handling tests
6. **Skip UX contract tests** for CRITICAL frontend components
---
## UX Contract Testing (Frontend)
### UX States Coverage
| Component | @UX_STATE | @UX_FEEDBACK | @UX_RECOVERY | Tests |
|-----------|-----------|--------------|--------------|-------|
| [Component] | [states] | [feedback] | [recovery] | [status] |
### UX Test Cases
| ID | Component | UX Tag | Test Action | Expected Result | Status |
|----|-----------|--------|-------------|-----------------|--------|
| UX001 | [Component] | @UX_STATE: Idle | [action] | [expected] | [Pass/Fail] |
| UX002 | [Component] | @UX_FEEDBACK | [action] | [expected] | [Pass/Fail] |
| UX003 | [Component] | @UX_RECOVERY | [action] | [expected] | [Pass/Fail] |
### UX Test Examples
```javascript
// Testing @UX_STATE transition
it('should transition from Idle to Loading on submit', async () => {
render(FormComponent);
await fireEvent.click(screen.getByText('Submit'));
expect(screen.getByTestId('form')).toHaveClass('loading');
});
// Testing @UX_FEEDBACK
it('should show error toast on validation failure', async () => {
render(FormComponent);
await fireEvent.click(screen.getByText('Submit'));
expect(screen.getByRole('alert')).toHaveTextContent('Validation error');
});
// Testing @UX_RECOVERY
it('should allow retry after error', async () => {
render(FormComponent);
// Trigger error state
await fireEvent.click(screen.getByText('Submit'));
// Click retry
await fireEvent.click(screen.getByText('Retry'));
expect(screen.getByTestId('form')).not.toHaveClass('error');
});
```
---
## Notes
- [Additional notes about testing approach]
- [Known issues or limitations]
- [Recommendations for future testing]
---
## Related Documents
- [spec.md](./spec.md)
- [plan.md](./plan.md)
- [tasks.md](./tasks.md)
- [contracts/](./contracts/)

File diff suppressed because it is too large

Binary file not shown.

View File

@@ -1 +1,10 @@
from . import plugins, tasks, settings, connections, environments, mappings, migration, git, storage, admin
# Lazy loading of route modules to avoid import issues in tests
# This allows tests to import routes without triggering all module imports
__all__ = ['plugins', 'tasks', 'settings', 'connections', 'environments', 'mappings', 'migration', 'git', 'storage', 'admin']
def __getattr__(name):
if name in __all__:
import importlib
return importlib.import_module(f".{name}", __name__)
raise AttributeError(f"module {__name__!r} has no attribute {name!r}")
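A short usage sketch of the lazy loading above (module-level `__getattr__`, PEP 562): importing the package stays cheap, and each route module is imported only on first attribute access. The `router` attribute is an assumption about what each route module exposes.

```python
from src.api import routes  # no route modules imported yet

# First attribute access triggers importlib.import_module(".tasks", ...)
tasks_module = routes.tasks

# Assumed for illustration: each route module defines an APIRouter named `router`
app_router = tasks_module.router

# Names outside __all__ still fail loudly:
# routes.nonexistent  -> AttributeError
```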

View File

@@ -0,0 +1,286 @@
# [DEF:backend.src.api.routes.__tests__.test_dashboards:Module]
# @TIER: STANDARD
# @PURPOSE: Unit tests for Dashboards API endpoints
# @LAYER: API
# @RELATION: TESTS -> backend.src.api.routes.dashboards
import pytest
from unittest.mock import MagicMock, patch, AsyncMock
from fastapi.testclient import TestClient
from src.app import app
from src.api.routes.dashboards import DashboardsResponse
client = TestClient(app)
# [DEF:test_get_dashboards_success:Function]
# @TEST: GET /api/dashboards returns 200 and valid schema
# @PRE: env_id exists
# @POST: Response matches DashboardsResponse schema
def test_get_dashboards_success():
with patch("src.api.routes.dashboards.get_config_manager") as mock_config, \
patch("src.api.routes.dashboards.get_resource_service") as mock_service, \
patch("src.api.routes.dashboards.get_task_manager") as mock_task_mgr, \
patch("src.api.routes.dashboards.has_permission") as mock_perm:
# Mock environment
mock_env = MagicMock()
mock_env.id = "prod"
mock_config.return_value.get_environments.return_value = [mock_env]
# Mock task manager
mock_task_mgr.return_value.get_all_tasks.return_value = []
# Mock resource service response
async def mock_get_dashboards(env, tasks):
return [
{
"id": 1,
"title": "Sales Report",
"slug": "sales",
"git_status": {"branch": "main", "sync_status": "OK"},
"last_task": {"task_id": "task-1", "status": "SUCCESS"}
}
]
mock_service.return_value.get_dashboards_with_status = AsyncMock(
side_effect=mock_get_dashboards
)
# Mock permission
mock_perm.return_value = lambda: True
response = client.get("/api/dashboards?env_id=prod")
assert response.status_code == 200
data = response.json()
assert "dashboards" in data
assert "total" in data
assert "page" in data
# [/DEF:test_get_dashboards_success:Function]
# [DEF:test_get_dashboards_with_search:Function]
# @TEST: GET /api/dashboards filters by search term
# @PRE: search parameter provided
# @POST: Only matching dashboards returned
def test_get_dashboards_with_search():
with patch("src.api.routes.dashboards.get_config_manager") as mock_config, \
patch("src.api.routes.dashboards.get_resource_service") as mock_service, \
patch("src.api.routes.dashboards.get_task_manager") as mock_task_mgr, \
patch("src.api.routes.dashboards.has_permission") as mock_perm:
# Mock environment
mock_env = MagicMock()
mock_env.id = "prod"
mock_config.return_value.get_environments.return_value = [mock_env]
mock_task_mgr.return_value.get_all_tasks.return_value = []
async def mock_get_dashboards(env, tasks):
return [
{"id": 1, "title": "Sales Report", "slug": "sales"},
{"id": 2, "title": "Marketing Dashboard", "slug": "marketing"}
]
mock_service.return_value.get_dashboards_with_status = AsyncMock(
side_effect=mock_get_dashboards
)
mock_perm.return_value = lambda: True
response = client.get("/api/dashboards?env_id=prod&search=sales")
assert response.status_code == 200
data = response.json()
# Filtered by search term
# [/DEF:test_get_dashboards_with_search:Function]
# [DEF:test_get_dashboards_env_not_found:Function]
# @TEST: GET /api/dashboards returns 404 if env_id missing
# @PRE: env_id does not exist
# @POST: Returns 404 error
def test_get_dashboards_env_not_found():
with patch("src.api.routes.dashboards.get_config_manager") as mock_config, \
patch("src.api.routes.dashboards.has_permission") as mock_perm:
mock_config.return_value.get_environments.return_value = []
mock_perm.return_value = lambda: True
response = client.get("/api/dashboards?env_id=nonexistent")
assert response.status_code == 404
assert "Environment not found" in response.json()["detail"]
# [/DEF:test_get_dashboards_env_not_found:Function]
# [DEF:test_get_dashboards_invalid_pagination:Function]
# @TEST: GET /api/dashboards returns 400 for invalid page/page_size
# @PRE: page < 1 or page_size > 100
# @POST: Returns 400 error
def test_get_dashboards_invalid_pagination():
with patch("src.api.routes.dashboards.get_config_manager") as mock_config, \
patch("src.api.routes.dashboards.has_permission") as mock_perm:
mock_env = MagicMock()
mock_env.id = "prod"
mock_config.return_value.get_environments.return_value = [mock_env]
mock_perm.return_value = lambda: True
# Invalid page
response = client.get("/api/dashboards?env_id=prod&page=0")
assert response.status_code == 400
assert "Page must be >= 1" in response.json()["detail"]
# Invalid page_size
response = client.get("/api/dashboards?env_id=prod&page_size=101")
assert response.status_code == 400
assert "Page size must be between 1 and 100" in response.json()["detail"]
# [/DEF:test_get_dashboards_invalid_pagination:Function]
# [DEF:test_migrate_dashboards_success:Function]
# @TEST: POST /api/dashboards/migrate creates migration task
# @PRE: Valid source_env_id, target_env_id, dashboard_ids
# @POST: Returns task_id
def test_migrate_dashboards_success():
with patch("src.api.routes.dashboards.get_config_manager") as mock_config, \
patch("src.api.routes.dashboards.get_task_manager") as mock_task_mgr, \
patch("src.api.routes.dashboards.has_permission") as mock_perm:
# Mock environments
mock_source = MagicMock()
mock_source.id = "source"
mock_target = MagicMock()
mock_target.id = "target"
mock_config.return_value.get_environments.return_value = [mock_source, mock_target]
# Mock task manager
mock_task = MagicMock()
mock_task.id = "task-migrate-123"
mock_task_mgr.return_value.create_task = AsyncMock(return_value=mock_task)
# Mock permission
mock_perm.return_value = lambda: True
response = client.post(
"/api/dashboards/migrate",
json={
"source_env_id": "source",
"target_env_id": "target",
"dashboard_ids": [1, 2, 3],
"db_mappings": {"old_db": "new_db"}
}
)
assert response.status_code == 200
data = response.json()
assert "task_id" in data
# [/DEF:test_migrate_dashboards_success:Function]
# [DEF:test_migrate_dashboards_no_ids:Function]
# @TEST: POST /api/dashboards/migrate returns 400 for empty dashboard_ids
# @PRE: dashboard_ids is empty
# @POST: Returns 400 error
def test_migrate_dashboards_no_ids():
with patch("src.api.routes.dashboards.has_permission") as mock_perm:
mock_perm.return_value = lambda: True
response = client.post(
"/api/dashboards/migrate",
json={
"source_env_id": "source",
"target_env_id": "target",
"dashboard_ids": []
}
)
assert response.status_code == 400
assert "At least one dashboard ID must be provided" in response.json()["detail"]
# [/DEF:test_migrate_dashboards_no_ids:Function]
# [DEF:test_backup_dashboards_success:Function]
# @TEST: POST /api/dashboards/backup creates backup task
# @PRE: Valid env_id, dashboard_ids
# @POST: Returns task_id
def test_backup_dashboards_success():
with patch("src.api.routes.dashboards.get_config_manager") as mock_config, \
patch("src.api.routes.dashboards.get_task_manager") as mock_task_mgr, \
patch("src.api.routes.dashboards.has_permission") as mock_perm:
# Mock environment
mock_env = MagicMock()
mock_env.id = "prod"
mock_config.return_value.get_environments.return_value = [mock_env]
# Mock task manager
mock_task = MagicMock()
mock_task.id = "task-backup-456"
mock_task_mgr.return_value.create_task = AsyncMock(return_value=mock_task)
# Mock permission
mock_perm.return_value = lambda: True
response = client.post(
"/api/dashboards/backup",
json={
"env_id": "prod",
"dashboard_ids": [1, 2, 3],
"schedule": "0 0 * * *"
}
)
assert response.status_code == 200
data = response.json()
assert "task_id" in data
# [/DEF:test_backup_dashboards_success:Function]
# [DEF:test_get_database_mappings_success:Function]
# @TEST: GET /api/dashboards/db-mappings returns mapping suggestions
# @PRE: Valid source_env_id, target_env_id
# @POST: Returns list of database mappings
def test_get_database_mappings_success():
with patch("src.api.routes.dashboards.get_mapping_service") as mock_service, \
patch("src.api.routes.dashboards.has_permission") as mock_perm:
# Mock mapping service
mock_service.return_value.get_suggestions = AsyncMock(return_value=[
{
"source_db": "old_sales",
"target_db": "new_sales",
"source_db_uuid": "uuid-1",
"target_db_uuid": "uuid-2",
"confidence": 0.95
}
])
# Mock permission
mock_perm.return_value = lambda: True
response = client.get("/api/dashboards/db-mappings?source_env_id=prod&target_env_id=staging")
assert response.status_code == 200
data = response.json()
assert "mappings" in data
# [/DEF:test_get_database_mappings_success:Function]
# [/DEF:backend.src.api.routes.__tests__.test_dashboards:Module]

View File

@@ -0,0 +1,209 @@
# [DEF:backend.src.api.routes.__tests__.test_datasets:Module]
# @TIER: STANDARD
# @PURPOSE: Unit tests for Datasets API endpoints
# @LAYER: API
# @RELATION: TESTS -> backend.src.api.routes.datasets
import pytest
from unittest.mock import MagicMock, patch, AsyncMock
from fastapi.testclient import TestClient
from src.app import app
from src.api.routes.datasets import DatasetsResponse, DatasetDetailResponse
client = TestClient(app)
# [DEF:test_get_datasets_success:Function]
# @TEST: GET /api/datasets returns 200 and valid schema
# @PRE: env_id exists
# @POST: Response matches DatasetsResponse schema
def test_get_datasets_success():
with patch("src.api.routes.datasets.get_config_manager") as mock_config, \
patch("src.api.routes.datasets.get_resource_service") as mock_service, \
patch("src.api.routes.datasets.has_permission") as mock_perm:
# Mock environment
mock_env = MagicMock()
mock_env.id = "prod"
mock_config.return_value.get_environments.return_value = [mock_env]
# Mock resource service response
mock_service.return_value.get_datasets_with_status = AsyncMock(
return_value=[
{
"id": 1,
"table_name": "sales_data",
"schema": "public",
"database": "sales_db",
"mapped_fields": {"total": 10, "mapped": 5},
"last_task": {"task_id": "task-1", "status": "SUCCESS"}
}
]
)
# Mock permission
mock_perm.return_value = lambda: True
response = client.get("/api/datasets?env_id=prod")
assert response.status_code == 200
data = response.json()
assert "datasets" in data
assert len(data["datasets"]) >= 0
# Validate against Pydantic model
DatasetsResponse(**data)
# [/DEF:test_get_datasets_success:Function]
# [DEF:test_get_datasets_env_not_found:Function]
# @TEST: GET /api/datasets returns 404 if env_id missing
# @PRE: env_id does not exist
# @POST: Returns 404 error
def test_get_datasets_env_not_found():
with patch("src.api.routes.datasets.get_config_manager") as mock_config, \
patch("src.api.routes.datasets.has_permission") as mock_perm:
mock_config.return_value.get_environments.return_value = []
mock_perm.return_value = lambda: True
response = client.get("/api/datasets?env_id=nonexistent")
assert response.status_code == 404
assert "Environment not found" in response.json()["detail"]
# [/DEF:test_get_datasets_env_not_found:Function]
# [DEF:test_get_datasets_invalid_pagination:Function]
# @TEST: GET /api/datasets returns 400 for invalid page/page_size
# @PRE: page < 1 or page_size > 100
# @POST: Returns 400 error
def test_get_datasets_invalid_pagination():
with patch("src.api.routes.datasets.get_config_manager") as mock_config, \
patch("src.api.routes.datasets.has_permission") as mock_perm:
mock_env = MagicMock()
mock_env.id = "prod"
mock_config.return_value.get_environments.return_value = [mock_env]
mock_perm.return_value = lambda: True
# Invalid page
response = client.get("/api/datasets?env_id=prod&page=0")
assert response.status_code == 400
assert "Page must be >= 1" in response.json()["detail"]
# Invalid page_size
response = client.get("/api/datasets?env_id=prod&page_size=0")
assert response.status_code == 400
assert "Page size must be between 1 and 100" in response.json()["detail"]
# [/DEF:test_get_datasets_invalid_pagination:Function]
# [DEF:test_map_columns_success:Function]
# @TEST: POST /api/datasets/map-columns creates mapping task
# @PRE: Valid env_id, dataset_ids, source_type
# @POST: Returns task_id
def test_map_columns_success():
with patch("src.api.routes.datasets.get_config_manager") as mock_config, \
patch("src.api.routes.datasets.get_task_manager") as mock_task_mgr, \
patch("src.api.routes.datasets.has_permission") as mock_perm:
# Mock environment
mock_env = MagicMock()
mock_env.id = "prod"
mock_config.return_value.get_environments.return_value = [mock_env]
# Mock task manager
mock_task = MagicMock()
mock_task.id = "task-123"
mock_task_mgr.return_value.create_task = AsyncMock(return_value=mock_task)
# Mock permission
mock_perm.return_value = lambda: True
response = client.post(
"/api/datasets/map-columns",
json={
"env_id": "prod",
"dataset_ids": [1, 2, 3],
"source_type": "postgresql"
}
)
assert response.status_code == 200
data = response.json()
assert "task_id" in data
# [/DEF:test_map_columns_success:Function]
# [DEF:test_map_columns_invalid_source_type:Function]
# @TEST: POST /api/datasets/map-columns returns 400 for invalid source_type
# @PRE: source_type is not 'postgresql' or 'xlsx'
# @POST: Returns 400 error
def test_map_columns_invalid_source_type():
with patch("src.api.routes.datasets.has_permission") as mock_perm:
mock_perm.return_value = lambda: True
response = client.post(
"/api/datasets/map-columns",
json={
"env_id": "prod",
"dataset_ids": [1],
"source_type": "invalid"
}
)
assert response.status_code == 400
assert "Source type must be 'postgresql' or 'xlsx'" in response.json()["detail"]
# [/DEF:test_map_columns_invalid_source_type:Function]
# [DEF:test_generate_docs_success:Function]
# @TEST: POST /api/datasets/generate-docs creates doc generation task
# @PRE: Valid env_id, dataset_ids, llm_provider
# @POST: Returns task_id
def test_generate_docs_success():
with patch("src.api.routes.datasets.get_config_manager") as mock_config, \
patch("src.api.routes.datasets.get_task_manager") as mock_task_mgr, \
patch("src.api.routes.datasets.has_permission") as mock_perm:
# Mock environment
mock_env = MagicMock()
mock_env.id = "prod"
mock_config.return_value.get_environments.return_value = [mock_env]
# Mock task manager
mock_task = MagicMock()
mock_task.id = "task-456"
mock_task_mgr.return_value.create_task = AsyncMock(return_value=mock_task)
# Mock permission
mock_perm.return_value = lambda: True
response = client.post(
"/api/datasets/generate-docs",
json={
"env_id": "prod",
"dataset_ids": [1],
"llm_provider": "openai"
}
)
assert response.status_code == 200
data = response.json()
assert "task_id" in data
# [/DEF:test_generate_docs_success:Function]
# [/DEF:backend.src.api.routes.__tests__.test_datasets:Module]

View File

@@ -21,8 +21,8 @@ from ...schemas.auth import (
RoleSchema, RoleCreate, RoleUpdate, PermissionSchema,
ADGroupMappingSchema, ADGroupMappingCreate
)
from ...models.auth import User, Role, Permission, ADGroupMapping
from ...dependencies import has_permission, get_current_user
from ...models.auth import User, Role, ADGroupMapping
from ...dependencies import has_permission
from ...core.logger import logger, belief_scope
# [/SECTION]

View File

@@ -11,7 +11,7 @@ from fastapi import APIRouter, Depends, HTTPException, status
from sqlalchemy.orm import Session
from ...core.database import get_db
from ...models.connection import ConnectionConfig
from pydantic import BaseModel, Field
from pydantic import BaseModel
from datetime import datetime
from ...core.logger import logger, belief_scope
# [/SECTION]

View File

@@ -0,0 +1,327 @@
# [DEF:backend.src.api.routes.dashboards:Module]
#
# @TIER: STANDARD
# @SEMANTICS: api, dashboards, resources, hub
# @PURPOSE: API endpoints for the Dashboard Hub - listing dashboards with Git and task status
# @LAYER: API
# @RELATION: DEPENDS_ON -> backend.src.dependencies
# @RELATION: DEPENDS_ON -> backend.src.services.resource_service
# @RELATION: DEPENDS_ON -> backend.src.core.superset_client
#
# @INVARIANT: All dashboard responses include git_status and last_task metadata
# [SECTION: IMPORTS]
from fastapi import APIRouter, Depends, HTTPException
from typing import List, Optional, Dict
from pydantic import BaseModel, Field
from ...dependencies import get_config_manager, get_task_manager, get_resource_service, get_mapping_service, has_permission
from ...core.logger import logger, belief_scope
# [/SECTION]
router = APIRouter(prefix="/api/dashboards", tags=["Dashboards"])
# [DEF:GitStatus:DataClass]
class GitStatus(BaseModel):
branch: Optional[str] = None
sync_status: Optional[str] = Field(None, pattern="^(OK|DIFF)$")
# [/DEF:GitStatus:DataClass]
# [DEF:LastTask:DataClass]
class LastTask(BaseModel):
task_id: Optional[str] = None
status: Optional[str] = Field(None, pattern="^(RUNNING|SUCCESS|ERROR|WAITING_INPUT)$")
# [/DEF:LastTask:DataClass]
# [DEF:DashboardItem:DataClass]
class DashboardItem(BaseModel):
id: int
title: str
slug: Optional[str] = None
url: Optional[str] = None
last_modified: Optional[str] = None
git_status: Optional[GitStatus] = None
last_task: Optional[LastTask] = None
# [/DEF:DashboardItem:DataClass]
# [DEF:DashboardsResponse:DataClass]
class DashboardsResponse(BaseModel):
dashboards: List[DashboardItem]
total: int
page: int
page_size: int
total_pages: int
# [/DEF:DashboardsResponse:DataClass]
# [DEF:get_dashboards:Function]
# @PURPOSE: Fetch list of dashboards from a specific environment with Git status and last task status
# @PRE: env_id must be a valid environment ID
# @PRE: page must be >= 1 if provided
# @PRE: page_size must be between 1 and 100 if provided
# @POST: Returns a list of dashboards with enhanced metadata and pagination info
# @POST: Response includes pagination metadata (page, page_size, total, total_pages)
# @PARAM: env_id (str) - The environment ID to fetch dashboards from
# @PARAM: search (Optional[str]) - Filter by title/slug
# @PARAM: page (Optional[int]) - Page number (default: 1)
# @PARAM: page_size (Optional[int]) - Items per page (default: 10, max: 100)
# @RETURN: DashboardsResponse - List of dashboards with status metadata
# @RELATION: CALLS -> ResourceService.get_dashboards_with_status
@router.get("", response_model=DashboardsResponse)
async def get_dashboards(
env_id: str,
search: Optional[str] = None,
page: int = 1,
page_size: int = 10,
config_manager=Depends(get_config_manager),
task_manager=Depends(get_task_manager),
resource_service=Depends(get_resource_service),
_ = Depends(has_permission("plugin:migration", "READ"))
):
with belief_scope("get_dashboards", f"env_id={env_id}, search={search}, page={page}, page_size={page_size}"):
# Validate pagination parameters
if page < 1:
logger.error(f"[get_dashboards][Coherence:Failed] Invalid page: {page}")
raise HTTPException(status_code=400, detail="Page must be >= 1")
if page_size < 1 or page_size > 100:
logger.error(f"[get_dashboards][Coherence:Failed] Invalid page_size: {page_size}")
raise HTTPException(status_code=400, detail="Page size must be between 1 and 100")
# Validate environment exists
environments = config_manager.get_environments()
env = next((e for e in environments if e.id == env_id), None)
if not env:
logger.error(f"[get_dashboards][Coherence:Failed] Environment not found: {env_id}")
raise HTTPException(status_code=404, detail="Environment not found")
try:
# Get all tasks for status lookup
all_tasks = task_manager.get_all_tasks()
# Fetch dashboards with status using ResourceService
dashboards = await resource_service.get_dashboards_with_status(env, all_tasks)
# Apply search filter if provided
if search:
search_lower = search.lower()
dashboards = [
d for d in dashboards
if search_lower in d.get('title', '').lower()
or search_lower in d.get('slug', '').lower()
]
# Calculate pagination
total = len(dashboards)
total_pages = (total + page_size - 1) // page_size if total > 0 else 1
start_idx = (page - 1) * page_size
end_idx = start_idx + page_size
# Slice dashboards for current page
paginated_dashboards = dashboards[start_idx:end_idx]
logger.info(f"[get_dashboards][Coherence:OK] Returning {len(paginated_dashboards)} dashboards (page {page}/{total_pages}, total: {total})")
return DashboardsResponse(
dashboards=paginated_dashboards,
total=total,
page=page,
page_size=page_size,
total_pages=total_pages
)
except Exception as e:
logger.error(f"[get_dashboards][Coherence:Failed] Failed to fetch dashboards: {e}")
raise HTTPException(status_code=503, detail=f"Failed to fetch dashboards: {str(e)}")
# [/DEF:get_dashboards:Function]
# [DEF:MigrateRequest:DataClass]
class MigrateRequest(BaseModel):
source_env_id: str = Field(..., description="Source environment ID")
target_env_id: str = Field(..., description="Target environment ID")
dashboard_ids: List[int] = Field(..., description="List of dashboard IDs to migrate")
db_mappings: Optional[Dict[str, str]] = Field(None, description="Database mappings for migration")
replace_db_config: bool = Field(False, description="Replace database configuration")
# [/DEF:MigrateRequest:DataClass]
# [DEF:TaskResponse:DataClass]
class TaskResponse(BaseModel):
task_id: str
# [/DEF:TaskResponse:DataClass]
# [DEF:migrate_dashboards:Function]
# @PURPOSE: Trigger bulk migration of dashboards from source to target environment
# @PRE: User has permission plugin:migration:execute
# @PRE: source_env_id and target_env_id are valid environment IDs
# @PRE: dashboard_ids is a non-empty list
# @POST: Returns task_id for tracking migration progress
# @POST: Task is created and queued for execution
# @PARAM: request (MigrateRequest) - Migration request with source, target, and dashboard IDs
# @RETURN: TaskResponse - Task ID for tracking
# @RELATION: DISPATCHES -> MigrationPlugin
# @RELATION: CALLS -> task_manager.create_task
@router.post("/migrate", response_model=TaskResponse)
async def migrate_dashboards(
request: MigrateRequest,
config_manager=Depends(get_config_manager),
task_manager=Depends(get_task_manager),
_ = Depends(has_permission("plugin:migration", "EXECUTE"))
):
with belief_scope("migrate_dashboards", f"source={request.source_env_id}, target={request.target_env_id}, count={len(request.dashboard_ids)}"):
# Validate request
if not request.dashboard_ids:
logger.error("[migrate_dashboards][Coherence:Failed] No dashboard IDs provided")
raise HTTPException(status_code=400, detail="At least one dashboard ID must be provided")
# Validate environments exist
environments = config_manager.get_environments()
source_env = next((e for e in environments if e.id == request.source_env_id), None)
target_env = next((e for e in environments if e.id == request.target_env_id), None)
if not source_env:
logger.error(f"[migrate_dashboards][Coherence:Failed] Source environment not found: {request.source_env_id}")
raise HTTPException(status_code=404, detail="Source environment not found")
if not target_env:
logger.error(f"[migrate_dashboards][Coherence:Failed] Target environment not found: {request.target_env_id}")
raise HTTPException(status_code=404, detail="Target environment not found")
try:
# Create migration task
task_params = {
'source_env_id': request.source_env_id,
'target_env_id': request.target_env_id,
'selected_ids': request.dashboard_ids,
'replace_db_config': request.replace_db_config,
'db_mappings': request.db_mappings or {}
}
task_obj = await task_manager.create_task(
plugin_id='superset-migration',
params=task_params
)
logger.info(f"[migrate_dashboards][Coherence:OK] Migration task created: {task_obj.id} for {len(request.dashboard_ids)} dashboards")
return TaskResponse(task_id=str(task_obj.id))
except Exception as e:
logger.error(f"[migrate_dashboards][Coherence:Failed] Failed to create migration task: {e}")
raise HTTPException(status_code=503, detail=f"Failed to create migration task: {str(e)}")
# [/DEF:migrate_dashboards:Function]
# [DEF:BackupRequest:DataClass]
class BackupRequest(BaseModel):
env_id: str = Field(..., description="Environment ID")
dashboard_ids: List[int] = Field(..., description="List of dashboard IDs to backup")
schedule: Optional[str] = Field(None, description="Cron schedule for recurring backups (e.g., '0 0 * * *')")
# [/DEF:BackupRequest:DataClass]
# [DEF:backup_dashboards:Function]
# @PURPOSE: Trigger bulk backup of dashboards with optional cron schedule
# @PRE: User has permission plugin:backup:execute
# @PRE: env_id is a valid environment ID
# @PRE: dashboard_ids is a non-empty list
# @POST: Returns task_id for tracking backup progress
# @POST: Task is created and queued for execution
# @POST: If schedule is provided, a scheduled task is created
# @PARAM: request (BackupRequest) - Backup request with environment and dashboard IDs
# @RETURN: TaskResponse - Task ID for tracking
# @RELATION: DISPATCHES -> BackupPlugin
# @RELATION: CALLS -> task_manager.create_task
@router.post("/backup", response_model=TaskResponse)
async def backup_dashboards(
request: BackupRequest,
config_manager=Depends(get_config_manager),
task_manager=Depends(get_task_manager),
_ = Depends(has_permission("plugin:backup", "EXECUTE"))
):
with belief_scope("backup_dashboards", f"env={request.env_id}, count={len(request.dashboard_ids)}, schedule={request.schedule}"):
# Validate request
if not request.dashboard_ids:
logger.error("[backup_dashboards][Coherence:Failed] No dashboard IDs provided")
raise HTTPException(status_code=400, detail="At least one dashboard ID must be provided")
# Validate environment exists
environments = config_manager.get_environments()
env = next((e for e in environments if e.id == request.env_id), None)
if not env:
logger.error(f"[backup_dashboards][Coherence:Failed] Environment not found: {request.env_id}")
raise HTTPException(status_code=404, detail="Environment not found")
try:
# Create backup task
task_params = {
'env': request.env_id,
'dashboards': request.dashboard_ids,
'schedule': request.schedule
}
task_obj = await task_manager.create_task(
plugin_id='superset-backup',
params=task_params
)
logger.info(f"[backup_dashboards][Coherence:OK] Backup task created: {task_obj.id} for {len(request.dashboard_ids)} dashboards")
return TaskResponse(task_id=str(task_obj.id))
except Exception as e:
logger.error(f"[backup_dashboards][Coherence:Failed] Failed to create backup task: {e}")
raise HTTPException(status_code=503, detail=f"Failed to create backup task: {str(e)}")
# [/DEF:backup_dashboards:Function]
# [DEF:DatabaseMapping:DataClass]
class DatabaseMapping(BaseModel):
source_db: str
target_db: str
source_db_uuid: Optional[str] = None
target_db_uuid: Optional[str] = None
confidence: float
# [/DEF:DatabaseMapping:DataClass]
# [DEF:DatabaseMappingsResponse:DataClass]
class DatabaseMappingsResponse(BaseModel):
mappings: List[DatabaseMapping]
# [/DEF:DatabaseMappingsResponse:DataClass]
# [DEF:get_database_mappings:Function]
# @PURPOSE: Get database mapping suggestions between source and target environments
# @PRE: User has permission plugin:migration:read
# @PRE: source_env_id and target_env_id are valid environment IDs
# @POST: Returns list of suggested database mappings with confidence scores
# @PARAM: source_env_id (str) - Source environment ID
# @PARAM: target_env_id (str) - Target environment ID
# @RETURN: DatabaseMappingsResponse - List of suggested mappings
# @RELATION: CALLS -> MappingService.get_suggestions
@router.get("/db-mappings", response_model=DatabaseMappingsResponse)
async def get_database_mappings(
source_env_id: str,
target_env_id: str,
mapping_service=Depends(get_mapping_service),
_ = Depends(has_permission("plugin:migration", "READ"))
):
with belief_scope("get_database_mappings", f"source={source_env_id}, target={target_env_id}"):
try:
# Get mapping suggestions using MappingService
suggestions = await mapping_service.get_suggestions(source_env_id, target_env_id)
# Format suggestions as DatabaseMapping objects
mappings = [
DatabaseMapping(
source_db=s.get('source_db', ''),
target_db=s.get('target_db', ''),
source_db_uuid=s.get('source_db_uuid'),
target_db_uuid=s.get('target_db_uuid'),
confidence=s.get('confidence', 0.0)
)
for s in suggestions
]
logger.info(f"[get_database_mappings][Coherence:OK] Returning {len(mappings)} database mapping suggestions")
return DatabaseMappingsResponse(mappings=mappings)
except Exception as e:
logger.error(f"[get_database_mappings][Coherence:Failed] Failed to get database mappings: {e}")
raise HTTPException(status_code=503, detail=f"Failed to get database mappings: {str(e)}")
# [/DEF:get_database_mappings:Function]
# [/DEF:backend.src.api.routes.dashboards:Module]
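For context, a minimal client sketch for the new bulk migration endpoint. The host, bearer token, environment IDs, and dashboard IDs are placeholders, and the /api/dashboards prefix is an assumption mirroring the /api/datasets router below:
import httpx
payload = {
    "source_env_id": "dev",        # placeholder environment IDs
    "target_env_id": "prod",
    "dashboard_ids": [12, 27],
    "replace_db_config": False,
}
resp = httpx.post(
    "http://localhost:8000/api/dashboards/migrate",
    json=payload,
    headers={"Authorization": "Bearer <token>"},
)
resp.raise_for_status()
print(resp.json()["task_id"])  # TaskResponse.task_id for progress tracking
The returned task_id can then be polled through the tasks API or streamed over the /ws/logs WebSocket.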

View File

@@ -0,0 +1,395 @@
# [DEF:backend.src.api.routes.datasets:Module]
#
# @TIER: STANDARD
# @SEMANTICS: api, datasets, resources, hub
# @PURPOSE: API endpoints for the Dataset Hub - listing datasets with mapping progress, fetching dataset detail, and triggering column mapping and documentation generation tasks
# @LAYER: API
# @RELATION: DEPENDS_ON -> backend.src.dependencies
# @RELATION: DEPENDS_ON -> backend.src.services.resource_service
# @RELATION: DEPENDS_ON -> backend.src.core.superset_client
#
# @INVARIANT: All dataset responses include last_task metadata
# [SECTION: IMPORTS]
from fastapi import APIRouter, Depends, HTTPException
from typing import List, Optional
from pydantic import BaseModel, Field
from ...dependencies import get_config_manager, get_task_manager, get_resource_service, has_permission
from ...core.logger import logger, belief_scope
from ...core.superset_client import SupersetClient
# [/SECTION]
router = APIRouter(prefix="/api/datasets", tags=["Datasets"])
# [DEF:MappedFields:DataClass]
class MappedFields(BaseModel):
total: int
mapped: int
# [/DEF:MappedFields:DataClass]
# [DEF:LastTask:DataClass]
class LastTask(BaseModel):
task_id: Optional[str] = None
    status: Optional[str] = Field(None, pattern="^(RUNNING|SUCCESS|ERROR|WAITING_INPUT)$")
# [/DEF:LastTask:DataClass]
# [DEF:DatasetItem:DataClass]
class DatasetItem(BaseModel):
id: int
table_name: str
schema: str
database: str
mapped_fields: Optional[MappedFields] = None
last_task: Optional[LastTask] = None
# [/DEF:DatasetItem:DataClass]
# [DEF:LinkedDashboard:DataClass]
class LinkedDashboard(BaseModel):
id: int
title: str
slug: Optional[str] = None
# [/DEF:LinkedDashboard:DataClass]
# [DEF:DatasetColumn:DataClass]
class DatasetColumn(BaseModel):
id: int
name: str
type: Optional[str] = None
is_dttm: bool = False
is_active: bool = True
description: Optional[str] = None
# [/DEF:DatasetColumn:DataClass]
# [DEF:DatasetDetailResponse:DataClass]
class DatasetDetailResponse(BaseModel):
id: int
table_name: Optional[str] = None
schema: Optional[str] = None
database: str
description: Optional[str] = None
columns: List[DatasetColumn]
column_count: int
sql: Optional[str] = None
linked_dashboards: List[LinkedDashboard]
linked_dashboard_count: int
is_sqllab_view: bool = False
created_on: Optional[str] = None
changed_on: Optional[str] = None
# [/DEF:DatasetDetailResponse:DataClass]
# [DEF:DatasetsResponse:DataClass]
class DatasetsResponse(BaseModel):
datasets: List[DatasetItem]
total: int
page: int
page_size: int
total_pages: int
# [/DEF:DatasetsResponse:DataClass]
# [DEF:TaskResponse:DataClass]
class TaskResponse(BaseModel):
task_id: str
# [/DEF:TaskResponse:DataClass]
# [DEF:get_dataset_ids:Function]
# @PURPOSE: Fetch list of all dataset IDs from a specific environment (without pagination)
# @PRE: env_id must be a valid environment ID
# @POST: Returns a list of all dataset IDs
# @PARAM: env_id (str) - The environment ID to fetch datasets from
# @PARAM: search (Optional[str]) - Filter by table name
# @RETURN: List[int] - List of dataset IDs
# @RELATION: CALLS -> ResourceService.get_datasets_with_status
@router.get("/ids")
async def get_dataset_ids(
env_id: str,
search: Optional[str] = None,
config_manager=Depends(get_config_manager),
task_manager=Depends(get_task_manager),
resource_service=Depends(get_resource_service),
_ = Depends(has_permission("plugin:migration", "READ"))
):
with belief_scope("get_dataset_ids", f"env_id={env_id}, search={search}"):
# Validate environment exists
environments = config_manager.get_environments()
env = next((e for e in environments if e.id == env_id), None)
if not env:
logger.error(f"[get_dataset_ids][Coherence:Failed] Environment not found: {env_id}")
raise HTTPException(status_code=404, detail="Environment not found")
try:
# Get all tasks for status lookup
all_tasks = task_manager.get_all_tasks()
# Fetch datasets with status using ResourceService
datasets = await resource_service.get_datasets_with_status(env, all_tasks)
# Apply search filter if provided
if search:
search_lower = search.lower()
datasets = [
d for d in datasets
if search_lower in d.get('table_name', '').lower()
]
# Extract and return just the IDs
dataset_ids = [d['id'] for d in datasets]
logger.info(f"[get_dataset_ids][Coherence:OK] Returning {len(dataset_ids)} dataset IDs")
return {"dataset_ids": dataset_ids}
except Exception as e:
logger.error(f"[get_dataset_ids][Coherence:Failed] Failed to fetch dataset IDs: {e}")
raise HTTPException(status_code=503, detail=f"Failed to fetch dataset IDs: {str(e)}")
# [/DEF:get_dataset_ids:Function]
# [DEF:get_datasets:Function]
# @PURPOSE: Fetch list of datasets from a specific environment with mapping progress
# @PRE: env_id must be a valid environment ID
# @PRE: page must be >= 1 if provided
# @PRE: page_size must be between 1 and 100 if provided
# @POST: Returns a list of datasets with enhanced metadata and pagination info
# @POST: Response includes pagination metadata (page, page_size, total, total_pages)
# @PARAM: env_id (str) - The environment ID to fetch datasets from
# @PARAM: search (Optional[str]) - Filter by table name
# @PARAM: page (Optional[int]) - Page number (default: 1)
# @PARAM: page_size (Optional[int]) - Items per page (default: 10, max: 100)
# @RETURN: DatasetsResponse - List of datasets with status metadata
# @RELATION: CALLS -> ResourceService.get_datasets_with_status
@router.get("", response_model=DatasetsResponse)
async def get_datasets(
env_id: str,
search: Optional[str] = None,
page: int = 1,
page_size: int = 10,
config_manager=Depends(get_config_manager),
task_manager=Depends(get_task_manager),
resource_service=Depends(get_resource_service),
_ = Depends(has_permission("plugin:migration", "READ"))
):
with belief_scope("get_datasets", f"env_id={env_id}, search={search}, page={page}, page_size={page_size}"):
# Validate pagination parameters
if page < 1:
logger.error(f"[get_datasets][Coherence:Failed] Invalid page: {page}")
raise HTTPException(status_code=400, detail="Page must be >= 1")
if page_size < 1 or page_size > 100:
logger.error(f"[get_datasets][Coherence:Failed] Invalid page_size: {page_size}")
raise HTTPException(status_code=400, detail="Page size must be between 1 and 100")
# Validate environment exists
environments = config_manager.get_environments()
env = next((e for e in environments if e.id == env_id), None)
if not env:
logger.error(f"[get_datasets][Coherence:Failed] Environment not found: {env_id}")
raise HTTPException(status_code=404, detail="Environment not found")
try:
# Get all tasks for status lookup
all_tasks = task_manager.get_all_tasks()
# Fetch datasets with status using ResourceService
datasets = await resource_service.get_datasets_with_status(env, all_tasks)
# Apply search filter if provided
if search:
search_lower = search.lower()
datasets = [
d for d in datasets
if search_lower in d.get('table_name', '').lower()
]
# Calculate pagination
total = len(datasets)
total_pages = (total + page_size - 1) // page_size if total > 0 else 1
start_idx = (page - 1) * page_size
end_idx = start_idx + page_size
# Slice datasets for current page
paginated_datasets = datasets[start_idx:end_idx]
logger.info(f"[get_datasets][Coherence:OK] Returning {len(paginated_datasets)} datasets (page {page}/{total_pages}, total: {total})")
return DatasetsResponse(
datasets=paginated_datasets,
total=total,
page=page,
page_size=page_size,
total_pages=total_pages
)
except Exception as e:
logger.error(f"[get_datasets][Coherence:Failed] Failed to fetch datasets: {e}")
raise HTTPException(status_code=503, detail=f"Failed to fetch datasets: {str(e)}")
# [/DEF:get_datasets:Function]
# [DEF:MapColumnsRequest:DataClass]
class MapColumnsRequest(BaseModel):
env_id: str = Field(..., description="Environment ID")
dataset_ids: List[int] = Field(..., description="List of dataset IDs to map")
source_type: str = Field(..., description="Source type: 'postgresql' or 'xlsx'")
connection_id: Optional[str] = Field(None, description="Connection ID for PostgreSQL source")
file_data: Optional[str] = Field(None, description="File path or data for XLSX source")
# [/DEF:MapColumnsRequest:DataClass]
# [DEF:map_columns:Function]
# @PURPOSE: Trigger bulk column mapping for datasets
# @PRE: User has permission plugin:mapper:execute
# @PRE: env_id is a valid environment ID
# @PRE: dataset_ids is a non-empty list
# @POST: Returns task_id for tracking mapping progress
# @POST: Task is created and queued for execution
# @PARAM: request (MapColumnsRequest) - Mapping request with environment and dataset IDs
# @RETURN: TaskResponse - Task ID for tracking
# @RELATION: DISPATCHES -> MapperPlugin
# @RELATION: CALLS -> task_manager.create_task
@router.post("/map-columns", response_model=TaskResponse)
async def map_columns(
request: MapColumnsRequest,
config_manager=Depends(get_config_manager),
task_manager=Depends(get_task_manager),
_ = Depends(has_permission("plugin:mapper", "EXECUTE"))
):
with belief_scope("map_columns", f"env={request.env_id}, count={len(request.dataset_ids)}, source={request.source_type}"):
# Validate request
if not request.dataset_ids:
logger.error("[map_columns][Coherence:Failed] No dataset IDs provided")
raise HTTPException(status_code=400, detail="At least one dataset ID must be provided")
# Validate source type
if request.source_type not in ['postgresql', 'xlsx']:
logger.error(f"[map_columns][Coherence:Failed] Invalid source type: {request.source_type}")
raise HTTPException(status_code=400, detail="Source type must be 'postgresql' or 'xlsx'")
# Validate environment exists
environments = config_manager.get_environments()
env = next((e for e in environments if e.id == request.env_id), None)
if not env:
logger.error(f"[map_columns][Coherence:Failed] Environment not found: {request.env_id}")
raise HTTPException(status_code=404, detail="Environment not found")
try:
            # Create mapping task (note: the mapper plugin currently receives only the first dataset ID)
task_params = {
'env': request.env_id,
'dataset_id': request.dataset_ids[0] if request.dataset_ids else None,
'source': request.source_type,
'connection_id': request.connection_id,
'file_data': request.file_data
}
task_obj = await task_manager.create_task(
plugin_id='dataset-mapper',
params=task_params
)
logger.info(f"[map_columns][Coherence:OK] Mapping task created: {task_obj.id} for {len(request.dataset_ids)} datasets")
return TaskResponse(task_id=str(task_obj.id))
except Exception as e:
logger.error(f"[map_columns][Coherence:Failed] Failed to create mapping task: {e}")
raise HTTPException(status_code=503, detail=f"Failed to create mapping task: {str(e)}")
# [/DEF:map_columns:Function]
# [DEF:GenerateDocsRequest:DataClass]
class GenerateDocsRequest(BaseModel):
env_id: str = Field(..., description="Environment ID")
dataset_ids: List[int] = Field(..., description="List of dataset IDs to generate docs for")
llm_provider: str = Field(..., description="LLM provider to use")
options: Optional[dict] = Field(None, description="Additional options for documentation generation")
# [/DEF:GenerateDocsRequest:DataClass]
# [DEF:generate_docs:Function]
# @PURPOSE: Trigger bulk documentation generation for datasets
# @PRE: User has permission plugin:llm_analysis:execute
# @PRE: env_id is a valid environment ID
# @PRE: dataset_ids is a non-empty list
# @POST: Returns task_id for tracking documentation generation progress
# @POST: Task is created and queued for execution
# @PARAM: request (GenerateDocsRequest) - Documentation generation request
# @RETURN: TaskResponse - Task ID for tracking
# @RELATION: DISPATCHES -> LLMAnalysisPlugin
# @RELATION: CALLS -> task_manager.create_task
@router.post("/generate-docs", response_model=TaskResponse)
async def generate_docs(
request: GenerateDocsRequest,
config_manager=Depends(get_config_manager),
task_manager=Depends(get_task_manager),
_ = Depends(has_permission("plugin:llm_analysis", "EXECUTE"))
):
with belief_scope("generate_docs", f"env={request.env_id}, count={len(request.dataset_ids)}, provider={request.llm_provider}"):
# Validate request
if not request.dataset_ids:
logger.error("[generate_docs][Coherence:Failed] No dataset IDs provided")
raise HTTPException(status_code=400, detail="At least one dataset ID must be provided")
# Validate environment exists
environments = config_manager.get_environments()
env = next((e for e in environments if e.id == request.env_id), None)
if not env:
logger.error(f"[generate_docs][Coherence:Failed] Environment not found: {request.env_id}")
raise HTTPException(status_code=404, detail="Environment not found")
try:
            # Create documentation generation task (note: the plugin currently receives only the first dataset ID)
task_params = {
'environment_id': request.env_id,
'dataset_id': str(request.dataset_ids[0]) if request.dataset_ids else None,
'provider_id': request.llm_provider,
'options': request.options or {}
}
task_obj = await task_manager.create_task(
plugin_id='llm_documentation',
params=task_params
)
logger.info(f"[generate_docs][Coherence:OK] Documentation generation task created: {task_obj.id} for {len(request.dataset_ids)} datasets")
return TaskResponse(task_id=str(task_obj.id))
except Exception as e:
logger.error(f"[generate_docs][Coherence:Failed] Failed to create documentation generation task: {e}")
raise HTTPException(status_code=503, detail=f"Failed to create documentation generation task: {str(e)}")
# [/DEF:generate_docs:Function]
# [DEF:get_dataset_detail:Function]
# @PURPOSE: Get detailed dataset information including columns and linked dashboards
# @PRE: env_id is a valid environment ID
# @PRE: dataset_id is a valid dataset ID
# @POST: Returns detailed dataset info with columns and linked dashboards
# @PARAM: env_id (str) - The environment ID
# @PARAM: dataset_id (int) - The dataset ID
# @RETURN: DatasetDetailResponse - Detailed dataset information
# @RELATION: CALLS -> SupersetClient.get_dataset_detail
@router.get("/{dataset_id}", response_model=DatasetDetailResponse)
async def get_dataset_detail(
env_id: str,
dataset_id: int,
config_manager=Depends(get_config_manager),
_ = Depends(has_permission("plugin:migration", "READ"))
):
with belief_scope("get_dataset_detail", f"env_id={env_id}, dataset_id={dataset_id}"):
# Validate environment exists
environments = config_manager.get_environments()
env = next((e for e in environments if e.id == env_id), None)
if not env:
logger.error(f"[get_dataset_detail][Coherence:Failed] Environment not found: {env_id}")
raise HTTPException(status_code=404, detail="Environment not found")
try:
# Fetch detailed dataset info using SupersetClient
client = SupersetClient(env)
dataset_detail = client.get_dataset_detail(dataset_id)
logger.info(f"[get_dataset_detail][Coherence:OK] Retrieved dataset {dataset_id} with {dataset_detail['column_count']} columns and {dataset_detail['linked_dashboard_count']} linked dashboards")
return DatasetDetailResponse(**dataset_detail)
except Exception as e:
logger.error(f"[get_dataset_detail][Coherence:Failed] Failed to fetch dataset detail: {e}")
raise HTTPException(status_code=503, detail=f"Failed to fetch dataset detail: {str(e)}")
# [/DEF:get_dataset_detail:Function]
# [/DEF:backend.src.api.routes.datasets:Module]
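The pagination used by get_datasets (and by its dashboards counterpart) is plain ceiling division over the already-filtered list; a small worked example with illustrative numbers:
# Mirrors the arithmetic in get_datasets; values are illustrative.
total, page_size, page = 23, 10, 3
total_pages = (total + page_size - 1) // page_size  # 3
start_idx = (page - 1) * page_size                  # 20
end_idx = start_idx + page_size                     # 30
page_items = list(range(total))[start_idx:end_idx]  # the 3 remaining items; slicing past the end is safe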

View File

@@ -1,5 +1,6 @@
# [DEF:backend.src.api.routes.environments:Module]
#
# @TIER: STANDARD
# @SEMANTICS: api, environments, superset, databases
# @PURPOSE: API endpoints for listing environments and their databases.
# @LAYER: API
@@ -10,15 +11,14 @@
# [SECTION: IMPORTS]
from fastapi import APIRouter, Depends, HTTPException
from typing import List, Dict, Optional
from typing import List, Optional
from ...dependencies import get_config_manager, get_scheduler_service, has_permission
from ...core.superset_client import SupersetClient
from pydantic import BaseModel, Field
from ...core.config_models import Environment as EnvModel
from ...core.logger import belief_scope
# [/SECTION]
router = APIRouter()
router = APIRouter(prefix="/api/environments", tags=["Environments"])
# [DEF:ScheduleSchema:DataClass]
class ScheduleSchema(BaseModel):
@@ -43,6 +43,8 @@ class DatabaseResponse(BaseModel):
# [DEF:get_environments:Function]
# @PURPOSE: List all configured environments.
# @LAYER: API
# @SEMANTICS: list, environments, config
# @PRE: config_manager is injected via Depends.
# @POST: Returns a list of EnvironmentResponse objects.
# @RETURN: List[EnvironmentResponse]
@@ -71,6 +73,8 @@ async def get_environments(
# [DEF:update_environment_schedule:Function]
# @PURPOSE: Update backup schedule for an environment.
# @LAYER: API
# @SEMANTICS: update, schedule, backup, environment
# @PRE: Environment id exists, schedule is valid ScheduleSchema.
# @POST: Backup schedule updated and scheduler reloaded.
# @PARAM: id (str) - The environment ID.
@@ -103,6 +107,8 @@ async def update_environment_schedule(
# [DEF:get_environment_databases:Function]
# @PURPOSE: Fetch the list of databases from a specific environment.
# @LAYER: API
# @SEMANTICS: fetch, databases, superset, environment
# @PRE: Environment id exists.
# @POST: Returns a list of database summaries from the environment.
# @PARAM: id (str) - The environment ID.

View File

@@ -1,5 +1,6 @@
# [DEF:backend.src.api.routes.git:Module]
#
# @TIER: STANDARD
# @SEMANTICS: git, routes, api, fastapi, repository, deployment
# @PURPOSE: Provides FastAPI endpoints for Git integration operations.
# @LAYER: API
@@ -15,17 +16,17 @@ from typing import List, Optional
import typing
from src.dependencies import get_config_manager, has_permission
from src.core.database import get_db
from src.models.git import GitServerConfig, GitStatus, DeploymentEnvironment, GitRepository
from src.models.git import GitServerConfig, GitRepository
from src.api.routes.git_schemas import (
GitServerConfigSchema, GitServerConfigCreate,
GitRepositorySchema, BranchSchema, BranchCreate,
BranchSchema, BranchCreate,
BranchCheckout, CommitSchema, CommitCreate,
DeploymentEnvironmentSchema, DeployRequest, RepoInitRequest
)
from src.services.git_service import GitService
from src.core.logger import logger, belief_scope
router = APIRouter(prefix="/api/git", tags=["git"])
router = APIRouter(tags=["git"])
git_service = GitService()
# [DEF:get_git_configs:Function]

View File

@@ -11,7 +11,6 @@
from pydantic import BaseModel, Field
from typing import List, Optional
from datetime import datetime
from uuid import UUID
from src.models.git import GitProvider, GitStatus, SyncStatus
# [DEF:GitServerConfigBase:Class]

View File

@@ -16,7 +16,7 @@ from sqlalchemy.orm import Session
# [DEF:router:Global]
# @PURPOSE: APIRouter instance for LLM routes.
router = APIRouter(prefix="/api/llm", tags=["LLM"])
router = APIRouter(tags=["LLM"])
# [/DEF:router:Global]
# [DEF:get_providers:Function]

View File

@@ -1,5 +1,6 @@
# [DEF:backend.src.api.routes.mappings:Module]
#
# @TIER: STANDARD
# @SEMANTICS: api, mappings, database, fuzzy-matching
# @PURPOSE: API endpoints for managing database mappings and getting suggestions.
# @LAYER: API
@@ -20,7 +21,7 @@ from ...models.mapping import DatabaseMapping
from pydantic import BaseModel
# [/SECTION]
router = APIRouter(prefix="/api/mappings", tags=["mappings"])
router = APIRouter(tags=["mappings"])
# [DEF:MappingCreate:DataClass]
class MappingCreate(BaseModel):
@@ -30,6 +31,7 @@ class MappingCreate(BaseModel):
target_db_uuid: str
source_db_name: str
target_db_name: str
engine: Optional[str] = None
# [/DEF:MappingCreate:DataClass]
# [DEF:MappingResponse:DataClass]
@@ -41,6 +43,7 @@ class MappingResponse(BaseModel):
target_db_uuid: str
source_db_name: str
target_db_name: str
engine: Optional[str] = None
class Config:
from_attributes = True
@@ -93,6 +96,7 @@ async def create_mapping(
if existing:
existing.target_db_uuid = mapping.target_db_uuid
existing.target_db_name = mapping.target_db_name
existing.engine = mapping.engine
db.commit()
db.refresh(existing)
return existing

View File

@@ -1,4 +1,5 @@
# [DEF:backend.src.api.routes.migration:Module]
# @TIER: STANDARD
# @SEMANTICS: api, migration, dashboards
# @PURPOSE: API endpoints for migration operations.
# @LAYER: API
@@ -6,7 +7,7 @@
# @RELATION: DEPENDS_ON -> backend.src.models.dashboard
from fastapi import APIRouter, Depends, HTTPException
from typing import List, Dict
from typing import List
from ...dependencies import get_config_manager, get_task_manager, has_permission
from ...models.dashboard import DashboardMetadata, DashboardSelection
from ...core.superset_client import SupersetClient
@@ -43,7 +44,7 @@ async def get_dashboards(
# @POST: Starts the migration task and returns the task ID.
# @PARAM: selection (DashboardSelection) - The dashboards to migrate.
# @RETURN: Dict - {"task_id": str, "message": str}
@router.post("/migration/execute")
@router.post("/execute")
async def execute_migration(
selection: DashboardSelection,
config_manager=Depends(get_config_manager),

View File

@@ -1,4 +1,5 @@
# [DEF:PluginsRouter:Module]
# @TIER: STANDARD
# @SEMANTICS: api, router, plugins, list
# @PURPOSE: Defines the FastAPI router for plugin-related endpoints, allowing clients to list available plugins.
# @LAYER: UI (API)

View File

@@ -12,15 +12,24 @@
# [SECTION: IMPORTS]
from fastapi import APIRouter, Depends, HTTPException
from typing import List
from ...core.config_models import AppConfig, Environment, GlobalSettings
from pydantic import BaseModel
from ...core.config_models import AppConfig, Environment, GlobalSettings, LoggingConfig
from ...models.storage import StorageConfig
from ...dependencies import get_config_manager, has_permission
from ...core.config_manager import ConfigManager
from ...core.logger import logger, belief_scope
from ...core.superset_client import SupersetClient
import os
# [/SECTION]
# [DEF:LoggingConfigResponse:Class]
# @PURPOSE: Response model for logging configuration with current task log level.
# @SEMANTICS: logging, config, response
class LoggingConfigResponse(BaseModel):
level: str
task_log_level: str
enable_belief_state: bool
# [/DEF:LoggingConfigResponse:Class]
router = APIRouter()
# [DEF:get_settings:Function]
@@ -223,5 +232,145 @@ async def test_environment_connection(
return {"status": "error", "message": str(e)}
# [/DEF:test_environment_connection:Function]
# [DEF:get_logging_config:Function]
# @PURPOSE: Retrieves current logging configuration.
# @PRE: Config manager is available.
# @POST: Returns logging configuration.
# @RETURN: LoggingConfigResponse - The current logging config.
@router.get("/logging", response_model=LoggingConfigResponse)
async def get_logging_config(
config_manager: ConfigManager = Depends(get_config_manager),
_ = Depends(has_permission("admin:settings", "READ"))
):
with belief_scope("get_logging_config"):
logging_config = config_manager.get_config().settings.logging
return LoggingConfigResponse(
level=logging_config.level,
task_log_level=logging_config.task_log_level,
enable_belief_state=logging_config.enable_belief_state
)
# [/DEF:get_logging_config:Function]
# [DEF:update_logging_config:Function]
# @PURPOSE: Updates logging configuration.
# @PRE: New logging config is provided.
# @POST: Logging configuration is updated and saved.
# @PARAM: config (LoggingConfig) - The new logging configuration.
# @RETURN: LoggingConfigResponse - The updated logging config.
@router.patch("/logging", response_model=LoggingConfigResponse)
async def update_logging_config(
config: LoggingConfig,
config_manager: ConfigManager = Depends(get_config_manager),
_ = Depends(has_permission("admin:settings", "WRITE"))
):
with belief_scope("update_logging_config"):
logger.info(f"[update_logging_config][Entry] Updating logging config: level={config.level}, task_log_level={config.task_log_level}")
# Get current settings and update logging config
settings = config_manager.get_config().settings
settings.logging = config
config_manager.update_global_settings(settings)
return LoggingConfigResponse(
level=config.level,
task_log_level=config.task_log_level,
enable_belief_state=config.enable_belief_state
)
# [/DEF:update_logging_config:Function]
# [DEF:ConsolidatedSettingsResponse:Class]
class ConsolidatedSettingsResponse(BaseModel):
environments: List[dict]
connections: List[dict]
llm: dict
llm_providers: List[dict]
logging: dict
storage: dict
# [/DEF:ConsolidatedSettingsResponse:Class]
# [DEF:get_consolidated_settings:Function]
# @PURPOSE: Retrieves all settings categories in a single call
# @PRE: Config manager is available.
# @POST: Returns all consolidated settings.
# @RETURN: ConsolidatedSettingsResponse - All settings categories.
@router.get("/consolidated", response_model=ConsolidatedSettingsResponse)
async def get_consolidated_settings(
config_manager: ConfigManager = Depends(get_config_manager),
_ = Depends(has_permission("admin:settings", "READ"))
):
with belief_scope("get_consolidated_settings"):
logger.info("[get_consolidated_settings][Entry] Fetching all consolidated settings")
config = config_manager.get_config()
from ...services.llm_provider import LLMProviderService
from ...core.database import SessionLocal
db = SessionLocal()
try:
llm_service = LLMProviderService(db)
providers = llm_service.get_all_providers()
llm_providers_list = [
{
"id": p.id,
"provider_type": p.provider_type,
"name": p.name,
"base_url": p.base_url,
"api_key": "********",
"default_model": p.default_model,
"is_active": p.is_active
} for p in providers
]
finally:
db.close()
return ConsolidatedSettingsResponse(
environments=[env.dict() for env in config.environments],
connections=config.settings.connections,
llm=config.settings.llm,
llm_providers=llm_providers_list,
logging=config.settings.logging.dict(),
storage=config.settings.storage.dict()
)
# [/DEF:get_consolidated_settings:Function]
# [DEF:update_consolidated_settings:Function]
# @PURPOSE: Bulk update application settings from the consolidated view.
# @PRE: User has admin permissions, config is valid.
# @POST: Settings are updated and saved via ConfigManager.
@router.patch("/consolidated")
async def update_consolidated_settings(
settings_patch: dict,
config_manager: ConfigManager = Depends(get_config_manager),
_ = Depends(has_permission("admin:settings", "WRITE"))
):
with belief_scope("update_consolidated_settings"):
logger.info("[update_consolidated_settings][Entry] Applying consolidated settings patch")
current_config = config_manager.get_config()
current_settings = current_config.settings
# Update connections if provided
if "connections" in settings_patch:
current_settings.connections = settings_patch["connections"]
# Update LLM if provided
if "llm" in settings_patch:
current_settings.llm = settings_patch["llm"]
# Update Logging if provided
if "logging" in settings_patch:
current_settings.logging = LoggingConfig(**settings_patch["logging"])
# Update Storage if provided
if "storage" in settings_patch:
new_storage = StorageConfig(**settings_patch["storage"])
is_valid, message = config_manager.validate_path(new_storage.root_path)
if not is_valid:
raise HTTPException(status_code=400, detail=message)
current_settings.storage = new_storage
config_manager.update_global_settings(current_settings)
return {"status": "success", "message": "Settings updated"}
# [/DEF:update_consolidated_settings:Function]
# [/DEF:SettingsRouter:Module]
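A sketch of a partial update against the new consolidated endpoint (placeholder host and token; the settings router is mounted under /api/settings in app.py). Because the logging block is re-validated as a full LoggingConfig, any omitted logging fields fall back to their model defaults:
import httpx
patch = {
    "logging": {
        "level": "INFO",
        "task_log_level": "WARNING",
        "enable_belief_state": True,
    },
}
resp = httpx.patch(
    "http://localhost:8000/api/settings/consolidated",
    json=patch,
    headers={"Authorization": "Bearer <admin-token>"},
)
print(resp.json())  # {"status": "success", "message": "Settings updated"}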

View File

@@ -1,5 +1,6 @@
# [DEF:storage_routes:Module]
#
# @TIER: STANDARD
# @SEMANTICS: storage, files, upload, download, backup, repository
# @PURPOSE: API endpoints for file storage management (backups and repositories).
# @LAYER: API

View File

@@ -1,14 +1,16 @@
# [DEF:TasksRouter:Module]
# @SEMANTICS: api, router, tasks, create, list, get
# @TIER: STANDARD
# @SEMANTICS: api, router, tasks, create, list, get, logs
# @PURPOSE: Defines the FastAPI router for task-related endpoints, allowing clients to create, list, and get the status of tasks.
# @LAYER: UI (API)
# @RELATION: Depends on the TaskManager. It is included by the main app.
from typing import List, Dict, Any, Optional
from fastapi import APIRouter, Depends, HTTPException, status
from fastapi import APIRouter, Depends, HTTPException, status, Query
from pydantic import BaseModel
from ...core.logger import belief_scope
from ...core.task_manager import TaskManager, Task, TaskStatus, LogEntry
from ...core.task_manager.models import LogFilter, LogStats
from ...dependencies import get_task_manager, has_permission, get_current_user
router = APIRouter()
@@ -116,27 +118,93 @@ async def get_task(
@router.get("/{task_id}/logs", response_model=List[LogEntry])
# [DEF:get_task_logs:Function]
# @PURPOSE: Retrieve logs for a specific task.
# @PURPOSE: Retrieve logs for a specific task with optional filtering.
# @PARAM: task_id (str) - The unique identifier of the task.
# @PARAM: level (Optional[str]) - Filter by log level (DEBUG, INFO, WARNING, ERROR).
# @PARAM: source (Optional[str]) - Filter by source component.
# @PARAM: search (Optional[str]) - Text search in message.
# @PARAM: offset (int) - Number of logs to skip.
# @PARAM: limit (int) - Maximum number of logs to return.
# @PARAM: task_manager (TaskManager) - The task manager instance.
# @PRE: task_id must exist.
# @POST: Returns a list of log entries or raises 404.
# @RETURN: List[LogEntry] - List of log entries.
# @TIER: CRITICAL
async def get_task_logs(
task_id: str,
level: Optional[str] = Query(None, description="Filter by log level (DEBUG, INFO, WARNING, ERROR)"),
source: Optional[str] = Query(None, description="Filter by source component"),
search: Optional[str] = Query(None, description="Text search in message"),
offset: int = Query(0, ge=0, description="Number of logs to skip"),
limit: int = Query(100, ge=1, le=1000, description="Maximum number of logs to return"),
task_manager: TaskManager = Depends(get_task_manager),
_ = Depends(has_permission("tasks", "READ"))
):
"""
Retrieve logs for a specific task.
Retrieve logs for a specific task with optional filtering.
Supports filtering by level, source, and text search.
"""
with belief_scope("get_task_logs"):
task = task_manager.get_task(task_id)
if not task:
raise HTTPException(status_code=status.HTTP_404_NOT_FOUND, detail="Task not found")
return task_manager.get_task_logs(task_id)
log_filter = LogFilter(
level=level.upper() if level else None,
source=source,
search=search,
offset=offset,
limit=limit
)
return task_manager.get_task_logs(task_id, log_filter)
# [/DEF:get_task_logs:Function]
@router.get("/{task_id}/logs/stats", response_model=LogStats)
# [DEF:get_task_log_stats:Function]
# @PURPOSE: Get statistics about logs for a task (counts by level and source).
# @PARAM: task_id (str) - The unique identifier of the task.
# @PARAM: task_manager (TaskManager) - The task manager instance.
# @PRE: task_id must exist.
# @POST: Returns log statistics or raises 404.
# @RETURN: LogStats - Statistics about task logs.
async def get_task_log_stats(
task_id: str,
task_manager: TaskManager = Depends(get_task_manager),
_ = Depends(has_permission("tasks", "READ"))
):
"""
Get statistics about logs for a task (counts by level and source).
"""
with belief_scope("get_task_log_stats"):
task = task_manager.get_task(task_id)
if not task:
raise HTTPException(status_code=status.HTTP_404_NOT_FOUND, detail="Task not found")
return task_manager.get_task_log_stats(task_id)
# [/DEF:get_task_log_stats:Function]
@router.get("/{task_id}/logs/sources", response_model=List[str])
# [DEF:get_task_log_sources:Function]
# @PURPOSE: Get unique sources for a task's logs.
# @PARAM: task_id (str) - The unique identifier of the task.
# @PARAM: task_manager (TaskManager) - The task manager instance.
# @PRE: task_id must exist.
# @POST: Returns list of unique source names or raises 404.
# @RETURN: List[str] - Unique source names.
async def get_task_log_sources(
task_id: str,
task_manager: TaskManager = Depends(get_task_manager),
_ = Depends(has_permission("tasks", "READ"))
):
"""
Get unique sources for a task's logs.
"""
with belief_scope("get_task_log_sources"):
task = task_manager.get_task(task_id)
if not task:
raise HTTPException(status_code=status.HTTP_404_NOT_FOUND, detail="Task not found")
return task_manager.get_task_log_sources(task_id)
# [/DEF:get_task_log_sources:Function]
@router.post("/{task_id}/resolve", response_model=Task)
# [DEF:resolve_task:Function]
# @PURPOSE: Resolve a task that is awaiting mapping.
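A sketch of how the filtered log endpoints might be queried (placeholder host, token, and task ID; the printed field names assume LogEntry serializes level, message, and timestamp, as the WebSocket path does):
import httpx
task_id = "some-task-id"
resp = httpx.get(
    f"http://localhost:8000/api/tasks/{task_id}/logs",
    params={"level": "ERROR", "source": "plugin", "limit": 50},
    headers={"Authorization": "Bearer <token>"},
)
for entry in resp.json():
    print(entry["timestamp"], entry["level"], entry["message"])
The /logs/stats and /logs/sources endpoints can be queried the same way to populate level and source filter controls.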

View File

@@ -1,28 +1,28 @@
# [DEF:AppModule:Module]
# @TIER: CRITICAL
# @SEMANTICS: app, main, entrypoint, fastapi
# @PURPOSE: The main entry point for the FastAPI application. It initializes the app, configures CORS, sets up dependencies, includes API routers, and defines the WebSocket endpoint for log streaming.
# @LAYER: UI (API)
# @RELATION: Depends on the dependency module and API route modules.
import sys
# @INVARIANT: Only one FastAPI app instance exists per process.
# @INVARIANT: All WebSocket connections must be properly cleaned up on disconnect.
from pathlib import Path
# project_root is used for mounting static files
project_root = Path(__file__).resolve().parent.parent.parent
from fastapi import FastAPI, WebSocket, WebSocketDisconnect, Depends, Request, HTTPException
from fastapi import FastAPI, WebSocket, WebSocketDisconnect, Request, HTTPException
from starlette.middleware.sessions import SessionMiddleware
from fastapi.middleware.cors import CORSMiddleware
from fastapi.staticfiles import StaticFiles
from fastapi.responses import FileResponse
import asyncio
import os
from .dependencies import get_task_manager, get_scheduler_service
from .core.utils.network import NetworkError
from .core.logger import logger, belief_scope
from .api.routes import plugins, tasks, settings, environments, mappings, migration, connections, git, storage, admin, llm
from .api.routes import plugins, tasks, settings, environments, mappings, migration, connections, git, storage, admin, llm, dashboards, datasets
from .api import auth
from .core.database import init_db
# [DEF:App:Global]
# @SEMANTICS: app, fastapi, instance
@@ -115,34 +115,82 @@ app.include_router(plugins.router, prefix="/api/plugins", tags=["Plugins"])
app.include_router(tasks.router, prefix="/api/tasks", tags=["Tasks"])
app.include_router(settings.router, prefix="/api/settings", tags=["Settings"])
app.include_router(connections.router, prefix="/api/settings/connections", tags=["Connections"])
app.include_router(environments.router, prefix="/api/environments", tags=["Environments"])
app.include_router(mappings.router)
app.include_router(environments.router, tags=["Environments"])
app.include_router(mappings.router, prefix="/api/mappings", tags=["Mappings"])
app.include_router(migration.router)
app.include_router(git.router)
app.include_router(llm.router)
app.include_router(git.router, prefix="/api/git", tags=["Git"])
app.include_router(llm.router, prefix="/api/llm", tags=["LLM"])
app.include_router(storage.router, prefix="/api/storage", tags=["Storage"])
app.include_router(dashboards.router)
app.include_router(datasets.router)
# [DEF:api.include_routers:Action]
# @PURPOSE: Registers all API routers with the FastAPI application.
# @LAYER: API
# @SEMANTICS: routes, registration, api
# [/DEF:api.include_routers:Action]
# [DEF:websocket_endpoint:Function]
# @PURPOSE: Provides a WebSocket endpoint for real-time log streaming of a task.
# @PURPOSE: Provides a WebSocket endpoint for real-time log streaming of a task with server-side filtering.
# @PRE: task_id must be a valid task ID.
# @POST: WebSocket connection is managed and logs are streamed until disconnect.
# @TIER: CRITICAL
# @UX_STATE: Connecting -> Streaming -> (Disconnected)
@app.websocket("/ws/logs/{task_id}")
async def websocket_endpoint(websocket: WebSocket, task_id: str):
async def websocket_endpoint(
websocket: WebSocket,
task_id: str,
source: str = None,
level: str = None
):
"""
WebSocket endpoint for real-time log streaming with optional server-side filtering.
Query Parameters:
source: Filter logs by source component (e.g., "plugin", "superset_api")
level: Filter logs by minimum level (DEBUG, INFO, WARNING, ERROR)
"""
with belief_scope("websocket_endpoint", f"task_id={task_id}"):
await websocket.accept()
logger.info(f"WebSocket connection accepted for task {task_id}")
# Normalize filter parameters
source_filter = source.lower() if source else None
level_filter = level.upper() if level else None
# Level hierarchy for filtering
level_hierarchy = {"DEBUG": 0, "INFO": 1, "WARNING": 2, "ERROR": 3}
min_level = level_hierarchy.get(level_filter, 0) if level_filter else 0
logger.info(f"WebSocket connection accepted for task {task_id} (source={source_filter}, level={level_filter})")
task_manager = get_task_manager()
queue = await task_manager.subscribe_logs(task_id)
def matches_filters(log_entry) -> bool:
"""Check if log entry matches the filter criteria."""
# Check source filter
if source_filter and log_entry.source.lower() != source_filter:
return False
# Check level filter
if level_filter:
log_level = level_hierarchy.get(log_entry.level.upper(), 0)
if log_level < min_level:
return False
return True
try:
# Stream new logs
logger.info(f"Starting log stream for task {task_id}")
# Send initial logs first to build context
# Send initial logs first to build context (apply filters)
initial_logs = task_manager.get_task_logs(task_id)
for log_entry in initial_logs:
log_dict = log_entry.dict()
log_dict['timestamp'] = log_dict['timestamp'].isoformat()
await websocket.send_json(log_dict)
if matches_filters(log_entry):
log_dict = log_entry.dict()
log_dict['timestamp'] = log_dict['timestamp'].isoformat()
await websocket.send_json(log_dict)
# Force a check for AWAITING_INPUT status immediately upon connection
# This ensures that if the task is already waiting when the user connects, they get the prompt.
@@ -160,6 +208,11 @@ async def websocket_endpoint(websocket: WebSocket, task_id: str):
while True:
log_entry = await queue.get()
# Apply server-side filtering
if not matches_filters(log_entry):
continue
log_dict = log_entry.dict()
log_dict['timestamp'] = log_dict['timestamp'].isoformat()
await websocket.send_json(log_dict)
@@ -187,25 +240,20 @@ async def websocket_endpoint(websocket: WebSocket, task_id: str):
frontend_path = project_root / "frontend" / "build"
if frontend_path.exists():
app.mount("/_app", StaticFiles(directory=str(frontend_path / "_app")), name="static")
# Serve other static files from the root of build directory
# [DEF:serve_spa:Function]
# @PURPOSE: Serves frontend static files or index.html for SPA routing.
# @PRE: file_path is requested by the client.
# @POST: Returns the requested file or index.html as a fallback.
@app.get("/{file_path:path}")
@app.get("/{file_path:path}", include_in_schema=False)
async def serve_spa(file_path: str):
with belief_scope("serve_spa", f"path={file_path}"):
# Don't serve SPA for API routes that fell through
if file_path.startswith("api/"):
logger.info(f"[DEBUG] API route fell through to serve_spa: {file_path}")
raise HTTPException(status_code=404, detail=f"API endpoint not found: {file_path}")
full_path = frontend_path / file_path
if full_path.is_file():
return FileResponse(str(full_path))
# Fallback to index.html for SPA routing
return FileResponse(str(frontend_path / "index.html"))
# Only serve SPA for non-API paths
# API routes are registered separately and should be matched by FastAPI first
if file_path and (file_path.startswith("api/") or file_path.startswith("/api/") or file_path == "api"):
# This should not happen if API routers are properly registered
# Return 404 instead of serving HTML
raise HTTPException(status_code=404, detail=f"API endpoint not found: {file_path}")
full_path = frontend_path / file_path
if file_path and full_path.is_file():
return FileResponse(str(full_path))
return FileResponse(str(frontend_path / "index.html"))
# [/DEF:serve_spa:Function]
else:
# [DEF:read_root:Function]

View File

@@ -0,0 +1,179 @@
# [DEF:test_auth:Module]
# @TIER: STANDARD
# @PURPOSE: Unit tests for authentication module
# @LAYER: Domain
# @RELATION: VERIFIES -> src.core.auth
import sys
from pathlib import Path
# Add src to path
sys.path.append(str(Path(__file__).parent.parent.parent.parent / "src"))
import pytest
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from src.core.database import Base
from src.models.auth import User, Role, Permission, ADGroupMapping
from src.services.auth_service import AuthService
from src.core.auth.repository import AuthRepository
from src.core.auth.security import verify_password, get_password_hash
# Create in-memory SQLite database for testing
SQLALCHEMY_DATABASE_URL = "sqlite:///:memory:"
engine = create_engine(SQLALCHEMY_DATABASE_URL, connect_args={"check_same_thread": False})
TestingSessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)
# Create all tables
Base.metadata.create_all(bind=engine)
@pytest.fixture
def db_session():
"""Create a new database session with a transaction, rollback after test"""
connection = engine.connect()
transaction = connection.begin()
session = TestingSessionLocal(bind=connection)
yield session
session.close()
transaction.rollback()
connection.close()
@pytest.fixture
def auth_service(db_session):
return AuthService(db_session)
@pytest.fixture
def auth_repo(db_session):
return AuthRepository(db_session)
def test_create_user(auth_repo):
"""Test user creation"""
user = User(
username="testuser",
email="test@example.com",
password_hash=get_password_hash("testpassword123"),
auth_source="LOCAL"
)
auth_repo.db.add(user)
auth_repo.db.commit()
retrieved_user = auth_repo.get_user_by_username("testuser")
assert retrieved_user is not None
assert retrieved_user.username == "testuser"
assert retrieved_user.email == "test@example.com"
assert verify_password("testpassword123", retrieved_user.password_hash)
def test_authenticate_user(auth_service, auth_repo):
"""Test user authentication with valid and invalid credentials"""
user = User(
username="testuser",
email="test@example.com",
password_hash=get_password_hash("testpassword123"),
auth_source="LOCAL"
)
auth_repo.db.add(user)
auth_repo.db.commit()
# Test valid credentials
authenticated_user = auth_service.authenticate_user("testuser", "testpassword123")
assert authenticated_user is not None
assert authenticated_user.username == "testuser"
# Test invalid password
invalid_user = auth_service.authenticate_user("testuser", "wrongpassword")
assert invalid_user is None
# Test invalid username
invalid_user = auth_service.authenticate_user("nonexistent", "testpassword123")
assert invalid_user is None
def test_create_session(auth_service, auth_repo):
"""Test session token creation"""
user = User(
username="testuser",
email="test@example.com",
password_hash=get_password_hash("testpassword123"),
auth_source="LOCAL"
)
auth_repo.db.add(user)
auth_repo.db.commit()
session = auth_service.create_session(user)
assert "access_token" in session
assert "token_type" in session
assert session["token_type"] == "bearer"
assert len(session["access_token"]) > 0
def test_role_permission_association(auth_repo):
"""Test role and permission association"""
role = Role(name="Admin", description="System administrator")
perm1 = Permission(resource="admin:users", action="READ")
perm2 = Permission(resource="admin:users", action="WRITE")
role.permissions.extend([perm1, perm2])
auth_repo.db.add(role)
auth_repo.db.commit()
retrieved_role = auth_repo.get_role_by_name("Admin")
assert retrieved_role is not None
assert len(retrieved_role.permissions) == 2
permissions = [f"{p.resource}:{p.action}" for p in retrieved_role.permissions]
assert "admin:users:READ" in permissions
assert "admin:users:WRITE" in permissions
def test_user_role_association(auth_repo):
"""Test user and role association"""
role = Role(name="Admin", description="System administrator")
user = User(
username="adminuser",
email="admin@example.com",
password_hash=get_password_hash("adminpass123"),
auth_source="LOCAL"
)
user.roles.append(role)
auth_repo.db.add(role)
auth_repo.db.add(user)
auth_repo.db.commit()
retrieved_user = auth_repo.get_user_by_username("adminuser")
assert retrieved_user is not None
assert len(retrieved_user.roles) == 1
assert retrieved_user.roles[0].name == "Admin"
def test_ad_group_mapping(auth_repo):
"""Test AD group mapping"""
role = Role(name="ADFS_Admin", description="ADFS administrators")
auth_repo.db.add(role)
auth_repo.db.commit()
mapping = ADGroupMapping(ad_group="DOMAIN\\ADFS_Admins", role_id=role.id)
auth_repo.db.add(mapping)
auth_repo.db.commit()
retrieved_mapping = auth_repo.db.query(ADGroupMapping).filter_by(ad_group="DOMAIN\\ADFS_Admins").first()
assert retrieved_mapping is not None
assert retrieved_mapping.role_id == role.id
# [/DEF:test_auth:Module]

View File

@@ -10,7 +10,6 @@
# [SECTION: IMPORTS]
from pydantic import Field
from pydantic_settings import BaseSettings
import os
# [/SECTION]
# [DEF:AuthConfig:Class]

View File

@@ -1,5 +1,6 @@
# [DEF:backend.src.core.auth.jwt:Module]
#
# @TIER: STANDARD
# @SEMANTICS: jwt, token, session, auth
# @PURPOSE: JWT token generation and validation logic.
# @LAYER: Core
@@ -10,8 +11,8 @@
# [SECTION: IMPORTS]
from datetime import datetime, timedelta
from typing import Optional, List
from jose import JWTError, jwt
from typing import Optional
from jose import jwt
from .config import auth_config
from ..logger import belief_scope
# [/SECTION]

View File

@@ -1,5 +1,6 @@
# [DEF:backend.src.core.auth.logger:Module]
#
# @TIER: STANDARD
# @SEMANTICS: auth, logger, audit, security
# @PURPOSE: Audit logging for security-related events.
# @LAYER: Core

View File

@@ -11,7 +11,7 @@
# [SECTION: IMPORTS]
from typing import Optional, List
from sqlalchemy.orm import Session
from ...models.auth import User, Role, Permission, ADGroupMapping
from ...models.auth import User, Role, Permission
from ..logger import belief_scope
# [/SECTION]

View File

@@ -15,7 +15,7 @@ import json
import os
from pathlib import Path
from typing import Optional, List
from .config_models import AppConfig, Environment, GlobalSettings
from .config_models import AppConfig, Environment, GlobalSettings, StorageConfig
from .logger import logger, configure_logger, belief_scope
# [/SECTION]
@@ -46,7 +46,7 @@ class ConfigManager:
# 3. Runtime check of @POST
assert isinstance(self.config, AppConfig), "self.config must be an instance of AppConfig"
logger.info(f"[ConfigManager][Exit] Initialized")
logger.info("[ConfigManager][Exit] Initialized")
# [/DEF:__init__:Function]
# [DEF:_load_config:Function]
@@ -59,7 +59,7 @@ class ConfigManager:
logger.debug(f"[_load_config][Entry] Loading from {self.config_path}")
if not self.config_path.exists():
logger.info(f"[_load_config][Action] Config file not found. Creating default.")
logger.info("[_load_config][Action] Config file not found. Creating default.")
default_config = AppConfig(
environments=[],
settings=GlobalSettings()
@@ -75,7 +75,7 @@ class ConfigManager:
del data["settings"]["backup_path"]
config = AppConfig(**data)
logger.info(f"[_load_config][Coherence:OK] Configuration loaded")
logger.info("[_load_config][Coherence:OK] Configuration loaded")
return config
except Exception as e:
logger.error(f"[_load_config][Coherence:Failed] Error loading config: {e}")
@@ -103,7 +103,7 @@ class ConfigManager:
try:
with open(self.config_path, "w") as f:
json.dump(config.dict(), f, indent=4)
logger.info(f"[_save_config_to_disk][Action] Configuration saved")
logger.info("[_save_config_to_disk][Action] Configuration saved")
except Exception as e:
logger.error(f"[_save_config_to_disk][Coherence:Failed] Failed to save: {e}")
# [/DEF:_save_config_to_disk:Function]
@@ -134,7 +134,7 @@ class ConfigManager:
# @PARAM: settings (GlobalSettings) - The new global settings.
def update_global_settings(self, settings: GlobalSettings):
with belief_scope("update_global_settings"):
logger.info(f"[update_global_settings][Entry] Updating settings")
logger.info("[update_global_settings][Entry] Updating settings")
# 1. Runtime check of @PRE
assert isinstance(settings, GlobalSettings), "settings must be an instance of GlobalSettings"
@@ -146,7 +146,7 @@ class ConfigManager:
# Reconfigure logger with new settings
configure_logger(settings.logging)
logger.info(f"[update_global_settings][Exit] Settings updated")
logger.info("[update_global_settings][Exit] Settings updated")
# [/DEF:update_global_settings:Function]
# [DEF:validate_path:Function]
@@ -222,7 +222,7 @@ class ConfigManager:
self.config.environments.append(env)
self.save()
logger.info(f"[add_environment][Exit] Environment added")
logger.info("[add_environment][Exit] Environment added")
# [/DEF:add_environment:Function]
# [DEF:update_environment:Function]

View File

@@ -1,4 +1,5 @@
# [DEF:ConfigModels:Module]
# @TIER: STANDARD
# @SEMANTICS: config, models, pydantic
# @PURPOSE: Defines the data models for application configuration using Pydantic.
# @LAYER: Core
@@ -34,6 +35,7 @@ class Environment(BaseModel):
# @PURPOSE: Defines the configuration for the application's logging system.
class LoggingConfig(BaseModel):
level: str = "INFO"
task_log_level: str = "INFO" # Minimum level for task-specific logs (DEBUG, INFO, WARNING, ERROR)
file_path: Optional[str] = "logs/app.log"
max_bytes: int = 10 * 1024 * 1024
backup_count: int = 5
@@ -46,6 +48,8 @@ class GlobalSettings(BaseModel):
storage: StorageConfig = Field(default_factory=StorageConfig)
default_environment_id: Optional[str] = None
logging: LoggingConfig = Field(default_factory=LoggingConfig)
connections: List[dict] = []
llm: dict = Field(default_factory=lambda: {"providers": [], "default_provider": ""})
# Task retention settings
task_retention_days: int = 30

View File

@@ -11,14 +11,9 @@
# [SECTION: IMPORTS]
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker, Session
from sqlalchemy.orm import sessionmaker
from ..models.mapping import Base
# Import models to ensure they're registered with Base
from ..models.task import TaskRecord
from ..models.connection import ConnectionConfig
from ..models.git import GitServerConfig, GitRepository, DeploymentEnvironment
from ..models.auth import User, Role, Permission, ADGroupMapping
from ..models.llm import LLMProvider, ValidationRecord
from .logger import belief_scope
from .auth.config import auth_config
import os

View File

@@ -19,6 +19,9 @@ _belief_state = threading.local()
# Global flag for belief state logging
_enable_belief_state = True
# Global task log level filter
_task_log_level = "INFO"
# [DEF:BeliefFormatter:Class]
# @PURPOSE: Custom logging formatter that adds belief state prefixes to log messages.
class BeliefFormatter(logging.Formatter):
@@ -58,12 +61,12 @@ class LogEntry(BaseModel):
# @SEMANTICS: logging, context, belief_state
@contextmanager
def belief_scope(anchor_id: str, message: str = ""):
# Log Entry if enabled
# Log Entry if enabled (DEBUG level to reduce noise)
if _enable_belief_state:
entry_msg = f"[{anchor_id}][Entry]"
if message:
entry_msg += f" {message}"
logger.info(entry_msg)
logger.debug(entry_msg)
# Set thread-local anchor_id
old_anchor = getattr(_belief_state, 'anchor_id', None)
@@ -71,13 +74,13 @@ def belief_scope(anchor_id: str, message: str = ""):
try:
yield
# Log Coherence OK and Exit
logger.info(f"[{anchor_id}][Coherence:OK]")
# Log Coherence OK and Exit (DEBUG level to reduce noise)
logger.debug(f"[{anchor_id}][Coherence:OK]")
if _enable_belief_state:
logger.info(f"[{anchor_id}][Exit]")
logger.debug(f"[{anchor_id}][Exit]")
except Exception as e:
# Log Coherence Failed
logger.info(f"[{anchor_id}][Coherence:Failed] {str(e)}")
# Log Coherence Failed (DEBUG level to reduce noise)
logger.debug(f"[{anchor_id}][Coherence:Failed] {str(e)}")
raise
finally:
# Restore old anchor
@@ -88,12 +91,13 @@ def belief_scope(anchor_id: str, message: str = ""):
# [DEF:configure_logger:Function]
# @PURPOSE: Configures the logger with the provided logging settings.
# @PRE: config is a valid LoggingConfig instance.
# @POST: Logger level, handlers, and belief state flag are updated.
# @POST: Logger level, handlers, belief state flag, and task log level are updated.
# @PARAM: config (LoggingConfig) - The logging configuration.
# @SEMANTICS: logging, configuration, initialization
def configure_logger(config):
global _enable_belief_state
global _enable_belief_state, _task_log_level
_enable_belief_state = config.enable_belief_state
_task_log_level = config.task_log_level.upper()
# Set logger level
level = getattr(logging, config.level.upper(), logging.INFO)
@@ -107,7 +111,6 @@ def configure_logger(config):
# Add file handler if file_path is set
if config.file_path:
import os
from pathlib import Path
log_file = Path(config.file_path)
log_file.parent.mkdir(parents=True, exist_ok=True)
@@ -130,6 +133,36 @@ def configure_logger(config):
))
# [/DEF:configure_logger:Function]
# [DEF:get_task_log_level:Function]
# @PURPOSE: Returns the current task log level filter.
# @PRE: None.
# @POST: Returns the task log level string.
# @RETURN: str - The current task log level (DEBUG, INFO, WARNING, ERROR).
# @SEMANTICS: logging, configuration, getter
def get_task_log_level() -> str:
"""Returns the current task log level filter."""
return _task_log_level
# [/DEF:get_task_log_level:Function]
# [DEF:should_log_task_level:Function]
# @PURPOSE: Checks if a log level should be recorded based on task_log_level setting.
# @PRE: level is a valid log level string.
# @POST: Returns True if level meets or exceeds task_log_level threshold.
# @PARAM: level (str) - The log level to check.
# @RETURN: bool - True if the level should be logged.
# @SEMANTICS: logging, filter, level
def should_log_task_level(level: str) -> bool:
"""Checks if a log level should be recorded based on task_log_level setting."""
level_order = {"DEBUG": 0, "INFO": 1, "WARNING": 2, "ERROR": 3}
current_level = _task_log_level.upper()
check_level = level.upper()
current_order = level_order.get(current_level, 1) # Default to INFO
check_order = level_order.get(check_level, 1)
return check_order >= current_order
# [/DEF:should_log_task_level:Function]
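A minimal sketch of the threshold semantics these helpers introduce, assuming the `LoggingConfig` fields shown above (an `enable_belief_state` boolean field is assumed to exist, as in the tests below):

```python
# Sketch only: demonstrates the task_log_level threshold semantics.
from src.core.config_models import LoggingConfig
from src.core.logger import configure_logger, get_task_log_level, should_log_task_level

configure_logger(LoggingConfig(level="INFO", task_log_level="WARNING",
                               file_path=None, enable_belief_state=True))
assert get_task_log_level() == "WARNING"
assert should_log_task_level("ERROR")       # ERROR >= WARNING -> recorded
assert not should_log_task_level("INFO")    # INFO < WARNING -> dropped from task logs
```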
# [DEF:WebSocketLogHandler:Class]
# @SEMANTICS: logging, handler, websocket, buffer
# @PURPOSE: A custom logging handler that captures log records into a buffer. It is designed to be extended for real-time log streaming over WebSockets.

View File

@@ -0,0 +1,228 @@
# [DEF:test_logger:Module]
# @TIER: STANDARD
# @PURPOSE: Unit tests for logger module
# @LAYER: Infra
# @RELATION: VERIFIES -> src.core.logger
import sys
from pathlib import Path
# Add src to path
sys.path.append(str(Path(__file__).parent.parent.parent.parent / "src"))
import pytest
from src.core.logger import (
belief_scope,
logger,
configure_logger,
get_task_log_level,
should_log_task_level
)
from src.core.config_models import LoggingConfig
# [DEF:test_belief_scope_logs_entry_action_exit_at_debug:Function]
# @PURPOSE: Test that belief_scope generates [ID][Entry], [ID][Action], and [ID][Exit] logs at DEBUG level.
# @PRE: belief_scope is available. caplog fixture is used. Logger configured to DEBUG.
# @POST: Logs are verified to contain Entry, Action, and Exit tags at DEBUG level.
def test_belief_scope_logs_entry_action_exit_at_debug(caplog):
"""Test that belief_scope generates [ID][Entry], [ID][Action], and [ID][Exit] logs at DEBUG level."""
# Configure logger to DEBUG level
config = LoggingConfig(
level="DEBUG",
task_log_level="DEBUG",
enable_belief_state=True
)
configure_logger(config)
caplog.set_level("DEBUG")
with belief_scope("TestFunction"):
logger.info("Doing something important")
# Check that the logs contain the expected patterns
log_messages = [record.message for record in caplog.records]
assert any("[TestFunction][Entry]" in msg for msg in log_messages), "Entry log not found"
assert any("[TestFunction][Action] Doing something important" in msg for msg in log_messages), "Action log not found"
assert any("[TestFunction][Exit]" in msg for msg in log_messages), "Exit log not found"
# Reset to INFO
config = LoggingConfig(level="INFO", task_log_level="INFO", enable_belief_state=True)
configure_logger(config)
# [/DEF:test_belief_scope_logs_entry_action_exit_at_debug:Function]
# [DEF:test_belief_scope_error_handling:Function]
# @PURPOSE: Test that belief_scope logs Coherence:Failed on exception.
# @PRE: belief_scope is available. caplog fixture is used. Logger configured to DEBUG.
# @POST: Logs are verified to contain Coherence:Failed tag.
def test_belief_scope_error_handling(caplog):
"""Test that belief_scope logs Coherence:Failed on exception."""
# Configure logger to DEBUG level
config = LoggingConfig(
level="DEBUG",
task_log_level="DEBUG",
enable_belief_state=True
)
configure_logger(config)
caplog.set_level("DEBUG")
with pytest.raises(ValueError):
with belief_scope("FailingFunction"):
raise ValueError("Something went wrong")
log_messages = [record.message for record in caplog.records]
assert any("[FailingFunction][Entry]" in msg for msg in log_messages), "Entry log not found"
assert any("[FailingFunction][Coherence:Failed]" in msg for msg in log_messages), "Failed coherence log not found"
# Exit should not be logged on failure
# Reset to INFO
config = LoggingConfig(level="INFO", task_log_level="INFO", enable_belief_state=True)
configure_logger(config)
# [/DEF:test_belief_scope_error_handling:Function]
# [DEF:test_belief_scope_success_coherence:Function]
# @PURPOSE: Test that belief_scope logs Coherence:OK on success.
# @PRE: belief_scope is available. caplog fixture is used. Logger configured to DEBUG.
# @POST: Logs are verified to contain Coherence:OK tag.
def test_belief_scope_success_coherence(caplog):
"""Test that belief_scope logs Coherence:OK on success."""
# Configure logger to DEBUG level
config = LoggingConfig(
level="DEBUG",
task_log_level="DEBUG",
enable_belief_state=True
)
configure_logger(config)
caplog.set_level("DEBUG")
with belief_scope("SuccessFunction"):
pass
log_messages = [record.message for record in caplog.records]
assert any("[SuccessFunction][Coherence:OK]" in msg for msg in log_messages), "Success coherence log not found"
# Reset to INFO
config = LoggingConfig(level="INFO", task_log_level="INFO", enable_belief_state=True)
configure_logger(config)
# [/DEF:test_belief_scope_success_coherence:Function]
# [DEF:test_belief_scope_not_visible_at_info:Function]
# @PURPOSE: Test that belief_scope Entry/Exit/Coherence logs are NOT visible at INFO level.
# @PRE: belief_scope is available. caplog fixture is used.
# @POST: Entry/Exit/Coherence logs are not captured at INFO level.
def test_belief_scope_not_visible_at_info(caplog):
"""Test that belief_scope Entry/Exit/Coherence logs are NOT visible at INFO level."""
caplog.set_level("INFO")
with belief_scope("InfoLevelFunction"):
logger.info("Doing something important")
log_messages = [record.message for record in caplog.records]
# Action log should be visible
assert any("[InfoLevelFunction][Action] Doing something important" in msg for msg in log_messages), "Action log not found"
# Entry/Exit/Coherence should NOT be visible at INFO level
assert not any("[InfoLevelFunction][Entry]" in msg for msg in log_messages), "Entry log should not be visible at INFO"
assert not any("[InfoLevelFunction][Exit]" in msg for msg in log_messages), "Exit log should not be visible at INFO"
assert not any("[InfoLevelFunction][Coherence:OK]" in msg for msg in log_messages), "Coherence log should not be visible at INFO"
# [/DEF:test_belief_scope_not_visible_at_info:Function]
# [DEF:test_task_log_level_default:Function]
# @PURPOSE: Test that default task log level is INFO.
# @PRE: None.
# @POST: Default level is INFO.
def test_task_log_level_default():
"""Test that default task log level is INFO."""
level = get_task_log_level()
assert level == "INFO"
# [/DEF:test_task_log_level_default:Function]
# [DEF:test_should_log_task_level:Function]
# @PURPOSE: Test that should_log_task_level correctly filters log levels.
# @PRE: None.
# @POST: Filtering works correctly for all level combinations.
def test_should_log_task_level():
"""Test that should_log_task_level correctly filters log levels."""
# Default level is INFO
assert should_log_task_level("ERROR") is True, "ERROR should be logged at INFO threshold"
assert should_log_task_level("WARNING") is True, "WARNING should be logged at INFO threshold"
assert should_log_task_level("INFO") is True, "INFO should be logged at INFO threshold"
assert should_log_task_level("DEBUG") is False, "DEBUG should NOT be logged at INFO threshold"
# [/DEF:test_should_log_task_level:Function]
# [DEF:test_configure_logger_task_log_level:Function]
# @PURPOSE: Test that configure_logger updates task_log_level.
# @PRE: LoggingConfig is available.
# @POST: task_log_level is updated correctly.
def test_configure_logger_task_log_level():
"""Test that configure_logger updates task_log_level."""
config = LoggingConfig(
level="DEBUG",
task_log_level="DEBUG",
enable_belief_state=True
)
configure_logger(config)
assert get_task_log_level() == "DEBUG", "task_log_level should be DEBUG"
assert should_log_task_level("DEBUG") is True, "DEBUG should be logged at DEBUG threshold"
# Reset to INFO
config = LoggingConfig(
level="INFO",
task_log_level="INFO",
enable_belief_state=True
)
configure_logger(config)
assert get_task_log_level() == "INFO", "task_log_level should be reset to INFO"
# [/DEF:test_configure_logger_task_log_level:Function]
# [DEF:test_enable_belief_state_flag:Function]
# @PURPOSE: Test that enable_belief_state flag controls belief_scope logging.
# @PRE: LoggingConfig is available. caplog fixture is used.
# @POST: belief_scope logs are controlled by the flag.
def test_enable_belief_state_flag(caplog):
"""Test that enable_belief_state flag controls belief_scope logging."""
# Disable belief state
config = LoggingConfig(
level="DEBUG",
task_log_level="DEBUG",
enable_belief_state=False
)
configure_logger(config)
caplog.set_level("DEBUG")
with belief_scope("DisabledFunction"):
logger.info("Doing something")
log_messages = [record.message for record in caplog.records]
# Entry and Exit should NOT be logged when disabled
assert not any("[DisabledFunction][Entry]" in msg for msg in log_messages), "Entry should not be logged when disabled"
assert not any("[DisabledFunction][Exit]" in msg for msg in log_messages), "Exit should not be logged when disabled"
# Coherence:OK should still be logged (internal tracking)
assert any("[DisabledFunction][Coherence:OK]" in msg for msg in log_messages), "Coherence should still be logged"
# Re-enable for other tests
config = LoggingConfig(
level="DEBUG",
task_log_level="DEBUG",
enable_belief_state=True
)
configure_logger(config)
# [/DEF:test_enable_belief_state_flag:Function]
# [/DEF:test_logger:Module]

View File

@@ -11,12 +11,10 @@
import zipfile
import yaml
import os
import shutil
import tempfile
from pathlib import Path
from typing import Dict
from .logger import logger, belief_scope
import yaml
# [/SECTION]
# [DEF:MigrationEngine:Class]

View File

@@ -1,12 +1,12 @@
import importlib.util
import os
import sys # Added this line
from typing import Dict, Type, List, Optional
from typing import Dict, List, Optional
from .plugin_base import PluginBase, PluginConfig
from jsonschema import validate
from .logger import belief_scope
# [DEF:PluginLoader:Class]
# @TIER: STANDARD
# @SEMANTICS: plugin, loader, dynamic, import
# @PURPOSE: Scans a specified directory for Python modules, dynamically loads them, and registers any classes that are valid implementations of the PluginBase interface.
# @LAYER: Core

View File

@@ -1,4 +1,5 @@
# [DEF:SchedulerModule:Module]
# @TIER: STANDARD
# @SEMANTICS: scheduler, apscheduler, cron, backup
# @PURPOSE: Manages scheduled tasks using APScheduler.
# @LAYER: Core
@@ -9,11 +10,11 @@ from apscheduler.schedulers.background import BackgroundScheduler
from apscheduler.triggers.cron import CronTrigger
from .logger import logger, belief_scope
from .config_manager import ConfigManager
from typing import Optional
import asyncio
# [/SECTION]
# [DEF:SchedulerService:Class]
# @TIER: STANDARD
# @SEMANTICS: scheduler, service, apscheduler
# @PURPOSE: Provides a service to manage scheduled backup tasks.
class SchedulerService:

View File

@@ -13,10 +13,10 @@
import json
import zipfile
from pathlib import Path
from typing import Any, Dict, List, Optional, Tuple, Union, cast
from typing import Dict, List, Optional, Tuple, Union, cast
from requests import Response
from .logger import logger as app_logger, belief_scope
from .utils.network import APIClient, SupersetAPIError, AuthenticationError, DashboardNotFoundError, NetworkError
from .utils.network import APIClient, SupersetAPIError
from .utils.fileio import get_filename_from_headers
from .config_models import Environment
# [/SECTION]
@@ -87,11 +87,11 @@ class SupersetClient:
if 'columns' not in validated_query:
validated_query['columns'] = ["slug", "id", "changed_on_utc", "dashboard_title", "published"]
total_count = self._fetch_total_object_count(endpoint="/dashboard/")
paginated_data = self._fetch_all_pages(
endpoint="/dashboard/",
pagination_options={"base_query": validated_query, "total_count": total_count, "results_field": "result"},
pagination_options={"base_query": validated_query, "results_field": "result"},
)
total_count = len(paginated_data)
app_logger.info("[get_dashboards][Exit] Found %d dashboards.", total_count)
return total_count, paginated_data
# [/DEF:get_dashboards:Function]
@@ -203,15 +203,121 @@ class SupersetClient:
app_logger.info("[get_datasets][Enter] Fetching datasets.")
validated_query = self._validate_query_params(query)
total_count = self._fetch_total_object_count(endpoint="/dataset/")
paginated_data = self._fetch_all_pages(
endpoint="/dataset/",
pagination_options={"base_query": validated_query, "total_count": total_count, "results_field": "result"},
pagination_options={"base_query": validated_query, "results_field": "result"},
)
total_count = len(paginated_data)
app_logger.info("[get_datasets][Exit] Found %d datasets.", total_count)
return total_count, paginated_data
# [/DEF:get_datasets:Function]
# [DEF:get_datasets_summary:Function]
# @PURPOSE: Fetches dataset metadata optimized for the Dataset Hub grid.
# @PRE: Client is authenticated.
# @POST: Returns a list of dataset metadata summaries.
# @RETURN: List[Dict]
def get_datasets_summary(self) -> List[Dict]:
with belief_scope("SupersetClient.get_datasets_summary"):
query = {
"columns": ["id", "table_name", "schema", "database"]
}
_, datasets = self.get_datasets(query=query)
# Map fields to match the contracts
result = []
for ds in datasets:
result.append({
"id": ds.get("id"),
"table_name": ds.get("table_name"),
"schema": ds.get("schema"),
"database": ds.get("database", {}).get("database_name", "Unknown")
})
return result
# [/DEF:get_datasets_summary:Function]
# [DEF:get_dataset_detail:Function]
# @PURPOSE: Fetches detailed dataset information including columns and linked dashboards
# @PRE: Client is authenticated and dataset_id exists.
# @POST: Returns detailed dataset info with columns and linked dashboards.
# @PARAM: dataset_id (int) - The dataset ID to fetch details for.
# @RETURN: Dict - Dataset details with columns and linked_dashboards.
# @RELATION: CALLS -> self.get_dataset
# @RELATION: CALLS -> self.network.request (for related_objects)
def get_dataset_detail(self, dataset_id: int) -> Dict:
with belief_scope("SupersetClient.get_dataset_detail", f"id={dataset_id}"):
# Get base dataset info
response = self.get_dataset(dataset_id)
# If the response is a dict and has a 'result' key, use that (standard Superset API)
if isinstance(response, dict) and 'result' in response:
dataset = response['result']
else:
dataset = response
# Extract columns information
columns = dataset.get("columns", [])
column_info = []
for col in columns:
column_info.append({
"id": col.get("id"),
"name": col.get("column_name"),
"type": col.get("type"),
"is_dttm": col.get("is_dttm", False),
"is_active": col.get("is_active", True),
"description": col.get("description", "")
})
# Get linked dashboards using related_objects endpoint
linked_dashboards = []
try:
related_objects = self.network.request(
method="GET",
endpoint=f"/dataset/{dataset_id}/related_objects"
)
# Handle different response formats
if isinstance(related_objects, dict):
if "dashboards" in related_objects:
dashboards_data = related_objects["dashboards"]
elif "result" in related_objects and isinstance(related_objects["result"], dict):
dashboards_data = related_objects["result"].get("dashboards", [])
else:
dashboards_data = []
for dash in dashboards_data:
linked_dashboards.append({
"id": dash.get("id"),
"title": dash.get("dashboard_title") or dash.get("title", "Unknown"),
"slug": dash.get("slug")
})
except Exception as e:
app_logger.warning(f"[get_dataset_detail][Warning] Failed to fetch related dashboards: {e}")
linked_dashboards = []
# Extract SQL table information
sql = dataset.get("sql", "")
result = {
"id": dataset.get("id"),
"table_name": dataset.get("table_name"),
"schema": dataset.get("schema"),
"database": dataset.get("database", {}).get("database_name", "Unknown"),
"description": dataset.get("description", ""),
"columns": column_info,
"column_count": len(column_info),
"sql": sql,
"linked_dashboards": linked_dashboards,
"linked_dashboard_count": len(linked_dashboards),
"is_sqllab_view": dataset.get("is_sqllab_view", False),
"created_on": dataset.get("created_on"),
"changed_on": dataset.get("changed_on")
}
app_logger.info(f"[get_dataset_detail][Exit] Got dataset {dataset_id} with {len(column_info)} columns and {len(linked_dashboards)} linked dashboards")
return result
# [/DEF:get_dataset_detail:Function]
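A hedged usage sketch for the two new dataset methods; the client construction below is illustrative only, since authentication and the `environment` object are outside this diff:

```python
# Illustrative only: exercising the Dataset Hub endpoints against an authenticated client.
client = SupersetClient(environment)          # constructor details assumed, not shown here

summaries = client.get_datasets_summary()     # [{"id", "table_name", "schema", "database"}, ...]
detail = client.get_dataset_detail(summaries[0]["id"])
print(detail["table_name"], detail["column_count"], detail["linked_dashboard_count"])
```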
# [DEF:get_dataset:Function]
# @PURPOSE: Fetches information about a specific dataset by its ID.
# @PARAM: dataset_id (int) - The dataset ID.
@@ -264,11 +370,12 @@ class SupersetClient:
validated_query = self._validate_query_params(query or {})
if 'columns' not in validated_query:
validated_query['columns'] = []
total_count = self._fetch_total_object_count(endpoint="/database/")
paginated_data = self._fetch_all_pages(
endpoint="/database/",
pagination_options={"base_query": validated_query, "total_count": total_count, "results_field": "result"},
pagination_options={"base_query": validated_query, "results_field": "result"},
)
total_count = len(paginated_data)
app_logger.info("[get_databases][Exit] Found %d databases.", total_count)
return total_count, paginated_data
# [/DEF:get_databases:Function]

View File

@@ -1,4 +1,5 @@
# [DEF:TaskManagerPackage:Module]
# @TIER: TRIVIAL
# @SEMANTICS: task, manager, package, exports
# @PURPOSE: Exports the public API of the task manager package.
# @LAYER: Core

View File

@@ -1,47 +1,75 @@
# [DEF:TaskCleanupModule:Module]
# @SEMANTICS: task, cleanup, retention
# @PURPOSE: Implements task cleanup and retention policies.
# @TIER: STANDARD
# @SEMANTICS: task, cleanup, retention, logs
# @PURPOSE: Implements task cleanup and retention policies, including associated logs.
# @LAYER: Core
# @RELATION: Uses TaskPersistenceService to delete old tasks.
# @RELATION: Uses TaskPersistenceService and TaskLogPersistenceService to delete old tasks and logs.
from datetime import datetime, timedelta
from .persistence import TaskPersistenceService
from typing import List
from .persistence import TaskPersistenceService, TaskLogPersistenceService
from ..logger import logger, belief_scope
from ..config_manager import ConfigManager
# [DEF:TaskCleanupService:Class]
# @PURPOSE: Provides methods to clean up old task records.
# @PURPOSE: Provides methods to clean up old task records and their associated logs.
# @TIER: STANDARD
class TaskCleanupService:
# [DEF:__init__:Function]
# @PURPOSE: Initializes the cleanup service with dependencies.
# @PRE: persistence_service and config_manager are valid.
# @POST: Cleanup service is ready.
def __init__(self, persistence_service: TaskPersistenceService, config_manager: ConfigManager):
def __init__(
self,
persistence_service: TaskPersistenceService,
log_persistence_service: TaskLogPersistenceService,
config_manager: ConfigManager
):
self.persistence_service = persistence_service
self.log_persistence_service = log_persistence_service
self.config_manager = config_manager
# [/DEF:__init__:Function]
# [DEF:run_cleanup:Function]
# @PURPOSE: Deletes tasks older than the configured retention period.
# @PURPOSE: Deletes tasks older than the configured retention period and their logs.
# @PRE: Config manager has valid settings.
# @POST: Old tasks are deleted from persistence.
# @POST: Old tasks and their logs are deleted from persistence.
def run_cleanup(self):
with belief_scope("TaskCleanupService.run_cleanup"):
settings = self.config_manager.get_config().settings
retention_days = settings.task_retention_days
# This is a simplified implementation.
# In a real scenario, we would query IDs of tasks older than retention_days.
# For now, we'll log the action.
logger.info(f"Cleaning up tasks older than {retention_days} days.")
# Re-loading tasks to check for limit
# Load tasks to check for limit
tasks = self.persistence_service.load_tasks(limit=1000)
if len(tasks) > settings.task_retention_limit:
to_delete = [t.id for t in tasks[settings.task_retention_limit:]]
to_delete: List[str] = [t.id for t in tasks[settings.task_retention_limit:]]
# Delete logs first (before task records)
self.log_persistence_service.delete_logs_for_tasks(to_delete)
# Then delete task records
self.persistence_service.delete_tasks(to_delete)
logger.info(f"Deleted {len(to_delete)} tasks exceeding limit of {settings.task_retention_limit}")
logger.info(f"Deleted {len(to_delete)} tasks and their logs exceeding limit of {settings.task_retention_limit}")
# [/DEF:run_cleanup:Function]
# [DEF:delete_task_with_logs:Function]
# @PURPOSE: Delete a single task and all its associated logs.
# @PRE: task_id is a valid task ID.
# @POST: Task and all its logs are deleted.
# @PARAM: task_id (str) - The task ID to delete.
def delete_task_with_logs(self, task_id: str) -> None:
"""Delete a single task and all its associated logs."""
with belief_scope("TaskCleanupService.delete_task_with_logs", f"task_id={task_id}"):
# Delete logs first
self.log_persistence_service.delete_logs_for_task(task_id)
# Then delete task record
self.persistence_service.delete_tasks([task_id])
logger.info(f"Deleted task {task_id} and its associated logs")
# [/DEF:delete_task_with_logs:Function]
# [/DEF:TaskCleanupService:Class]
# [/DEF:TaskCleanupModule:Module]
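Wiring the cleanup service now requires the log persistence service as well; a minimal composition sketch (the `config_manager` instance is assumed to exist elsewhere in the application):

```python
# Hypothetical composition; the application wires these dependencies elsewhere.
cleanup = TaskCleanupService(
    persistence_service=TaskPersistenceService(),
    log_persistence_service=TaskLogPersistenceService(),
    config_manager=config_manager,       # existing ConfigManager instance (assumed)
)
cleanup.run_cleanup()                     # trims tasks over task_retention_limit plus their logs
cleanup.delete_task_with_logs("task-42")  # removes one task and all of its log rows
```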

View File

@@ -0,0 +1,115 @@
# [DEF:TaskContextModule:Module]
# @SEMANTICS: task, context, plugin, execution, logger
# @PURPOSE: Provides execution context passed to plugins during task execution.
# @LAYER: Core
# @RELATION: DEPENDS_ON -> TaskLogger, USED_BY -> plugins
# @TIER: CRITICAL
# @INVARIANT: Each TaskContext is bound to a single task execution.
# [SECTION: IMPORTS]
from typing import Dict, Any, Callable
from .task_logger import TaskLogger
# [/SECTION]
# [DEF:TaskContext:Class]
# @SEMANTICS: context, task, execution, plugin
# @PURPOSE: A container passed to plugin.execute() providing the logger and other task-specific utilities.
# @TIER: CRITICAL
# @INVARIANT: logger is always a valid TaskLogger instance.
# @UX_STATE: Idle -> Active -> Complete
class TaskContext:
"""
Execution context provided to plugins during task execution.
Usage:
def execute(params: dict, context: TaskContext = None):
if context:
context.logger.info("Starting process")
context.logger.progress("Processing items", percent=50)
# ... plugin logic
"""
# [DEF:__init__:Function]
# @PURPOSE: Initialize the TaskContext with task-specific resources.
# @PRE: task_id is a valid task identifier, add_log_fn is callable.
# @POST: TaskContext is ready to be passed to plugin.execute().
# @PARAM: task_id (str) - The ID of the task.
# @PARAM: add_log_fn (Callable) - Function to add log to TaskManager.
# @PARAM: params (Dict) - Task parameters.
# @PARAM: default_source (str) - Default source for logs (default: "plugin").
def __init__(
self,
task_id: str,
add_log_fn: Callable,
params: Dict[str, Any],
default_source: str = "plugin"
):
self._task_id = task_id
self._params = params
self._logger = TaskLogger(
task_id=task_id,
add_log_fn=add_log_fn,
source=default_source
)
# [/DEF:__init__:Function]
# [DEF:task_id:Function]
# @PURPOSE: Get the task ID.
# @PRE: TaskContext must be initialized.
# @POST: Returns the task ID string.
# @RETURN: str - The task ID.
@property
def task_id(self) -> str:
return self._task_id
# [/DEF:task_id:Function]
# [DEF:logger:Function]
# @PURPOSE: Get the TaskLogger instance for this context.
# @PRE: TaskContext must be initialized.
# @POST: Returns the TaskLogger instance.
# @RETURN: TaskLogger - The logger instance.
@property
def logger(self) -> TaskLogger:
return self._logger
# [/DEF:logger:Function]
# [DEF:params:Function]
# @PURPOSE: Get the task parameters.
# @PRE: TaskContext must be initialized.
# @POST: Returns the parameters dictionary.
# @RETURN: Dict[str, Any] - The task parameters.
@property
def params(self) -> Dict[str, Any]:
return self._params
# [/DEF:params:Function]
# [DEF:get_param:Function]
# @PURPOSE: Get a specific parameter value with optional default.
# @PRE: TaskContext must be initialized.
# @POST: Returns parameter value or default.
# @PARAM: key (str) - Parameter key.
# @PARAM: default (Any) - Default value if key not found.
# @RETURN: Any - Parameter value or default.
def get_param(self, key: str, default: Any = None) -> Any:
return self._params.get(key, default)
# [/DEF:get_param:Function]
# [DEF:create_sub_context:Function]
# @PURPOSE: Create a sub-context with a different default source.
# @PRE: source is a non-empty string.
# @POST: Returns new TaskContext with different logger source.
# @PARAM: source (str) - New default source for logging.
# @RETURN: TaskContext - New context with different source.
def create_sub_context(self, source: str) -> "TaskContext":
"""Create a sub-context with a different default source for logging."""
return TaskContext(
task_id=self._task_id,
add_log_fn=self._logger._add_log,
params=self._params,
default_source=source
)
# [/DEF:create_sub_context:Function]
# [/DEF:TaskContext:Class]
# [/DEF:TaskContextModule:Module]
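A sketch of a context-aware plugin showing per-phase source attribution via `create_sub_context`; the plugin body and parameter names are invented for the example:

```python
# Illustrative plugin entry point; TaskManager injects `context` when the signature accepts it.
def execute(params: dict, context: TaskContext = None):
    if context is None:
        return {"status": "ok"}                       # old-style call, no context available
    context.logger.info("Starting export",
                        metadata={"dashboard_id": context.get_param("dashboard_id")})
    api = context.create_sub_context("superset_api")  # logs from this phase tagged superset_api
    api.logger.info("Fetching dashboard metadata")
    context.logger.progress("Export complete", percent=100)
    return {"status": "ok"}
```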

View File

@@ -8,23 +8,33 @@
# [SECTION: IMPORTS]
import asyncio
import threading
import inspect
from concurrent.futures import ThreadPoolExecutor
from datetime import datetime
from typing import Dict, Any, List, Optional
from concurrent.futures import ThreadPoolExecutor
from .models import Task, TaskStatus, LogEntry
from .persistence import TaskPersistenceService
from ..logger import logger, belief_scope
from .models import Task, TaskStatus, LogEntry, LogFilter, LogStats
from .persistence import TaskPersistenceService, TaskLogPersistenceService
from .context import TaskContext
from ..logger import logger, belief_scope, should_log_task_level
# [/SECTION]
# [DEF:TaskManager:Class]
# @SEMANTICS: task, manager, lifecycle, execution, state
# @PURPOSE: Manages the lifecycle of tasks, including their creation, execution, and state tracking.
# @TIER: CRITICAL
# @INVARIANT: Task IDs are unique within the registry.
# @INVARIANT: Each task has exactly one status at any time.
# @INVARIANT: Log entries are never deleted after being added to a task.
class TaskManager:
"""
Manages the lifecycle of tasks, including their creation, execution, and state tracking.
"""
# Log flush interval in seconds
LOG_FLUSH_INTERVAL = 2.0
# [DEF:__init__:Function]
# @PURPOSE: Initialize the TaskManager with dependencies.
# @PRE: plugin_loader is initialized.
@@ -35,8 +45,18 @@ class TaskManager:
self.plugin_loader = plugin_loader
self.tasks: Dict[str, Task] = {}
self.subscribers: Dict[str, List[asyncio.Queue]] = {}
self.executor = ThreadPoolExecutor(max_workers=5) # For CPU-bound plugin execution
self.executor = ThreadPoolExecutor(max_workers=5) # For CPU-bound plugin execution
self.persistence_service = TaskPersistenceService()
self.log_persistence_service = TaskLogPersistenceService()
# Log buffer: task_id -> List[LogEntry]
self._log_buffer: Dict[str, List[LogEntry]] = {}
self._log_buffer_lock = threading.Lock()
# Flusher thread for batch writing logs
self._flusher_stop_event = threading.Event()
self._flusher_thread = threading.Thread(target=self._flusher_loop, daemon=True)
self._flusher_thread.start()
try:
self.loop = asyncio.get_running_loop()
@@ -47,6 +67,59 @@ class TaskManager:
# Load persisted tasks on startup
self.load_persisted_tasks()
# [/DEF:__init__:Function]
# [DEF:_flusher_loop:Function]
# @PURPOSE: Background thread that periodically flushes log buffer to database.
# @PRE: TaskManager is initialized.
# @POST: Logs are batch-written to database every LOG_FLUSH_INTERVAL seconds.
def _flusher_loop(self):
"""Background thread that flushes log buffer to database."""
while not self._flusher_stop_event.is_set():
self._flush_logs()
self._flusher_stop_event.wait(self.LOG_FLUSH_INTERVAL)
# [/DEF:_flusher_loop:Function]
# [DEF:_flush_logs:Function]
# @PURPOSE: Flush all buffered logs to the database.
# @PRE: None.
# @POST: All buffered logs are written to task_logs table.
def _flush_logs(self):
"""Flush all buffered logs to the database."""
with self._log_buffer_lock:
task_ids = list(self._log_buffer.keys())
for task_id in task_ids:
with self._log_buffer_lock:
logs = self._log_buffer.pop(task_id, [])
if logs:
try:
self.log_persistence_service.add_logs(task_id, logs)
except Exception as e:
logger.error(f"Failed to flush logs for task {task_id}: {e}")
# Re-add logs to buffer on failure
with self._log_buffer_lock:
if task_id not in self._log_buffer:
self._log_buffer[task_id] = []
self._log_buffer[task_id].extend(logs)
# [/DEF:_flush_logs:Function]
# [DEF:_flush_task_logs:Function]
# @PURPOSE: Flush logs for a specific task immediately.
# @PRE: task_id exists.
# @POST: Task's buffered logs are written to database.
# @PARAM: task_id (str) - The task ID.
def _flush_task_logs(self, task_id: str):
"""Flush logs for a specific task immediately."""
with self._log_buffer_lock:
logs = self._log_buffer.pop(task_id, [])
if logs:
try:
self.log_persistence_service.add_logs(task_id, logs)
except Exception as e:
logger.error(f"Failed to flush logs for task {task_id}: {e}")
# [/DEF:_flush_task_logs:Function]
# [DEF:create_task:Function]
# @PURPOSE: Creates and queues a new task for execution.
@@ -63,7 +136,7 @@ class TaskManager:
logger.error(f"Plugin with ID '{plugin_id}' not found.")
raise ValueError(f"Plugin with ID '{plugin_id}' not found.")
plugin = self.plugin_loader.get_plugin(plugin_id)
self.plugin_loader.get_plugin(plugin_id)
if not isinstance(params, dict):
logger.error("Task parameters must be a dictionary.")
@@ -78,7 +151,7 @@ class TaskManager:
# [/DEF:create_task:Function]
# [DEF:_run_task:Function]
# @PURPOSE: Internal method to execute a task.
# @PURPOSE: Internal method to execute a task with TaskContext support.
# @PRE: Task exists in registry.
# @POST: Task is executed, status updated to SUCCESS or FAILED.
# @PARAM: task_id (str) - The ID of the task to run.
@@ -91,30 +164,54 @@ class TaskManager:
task.status = TaskStatus.RUNNING
task.started_at = datetime.utcnow()
self.persistence_service.persist_task(task)
self._add_log(task_id, "INFO", f"Task started for plugin '{plugin.name}'")
self._add_log(task_id, "INFO", f"Task started for plugin '{plugin.name}'", source="system")
try:
# Execute plugin
# Prepare params and check if plugin supports new TaskContext
params = {**task.params, "_task_id": task_id}
if asyncio.iscoroutinefunction(plugin.execute):
task.result = await plugin.execute(params)
else:
task.result = await self.loop.run_in_executor(
self.executor,
plugin.execute,
params
# Check if plugin's execute method accepts 'context' parameter
sig = inspect.signature(plugin.execute)
accepts_context = 'context' in sig.parameters
if accepts_context:
# Create TaskContext for new-style plugins
context = TaskContext(
task_id=task_id,
add_log_fn=self._add_log,
params=params,
default_source="plugin"
)
if asyncio.iscoroutinefunction(plugin.execute):
task.result = await plugin.execute(params, context=context)
else:
task.result = await self.loop.run_in_executor(
self.executor,
lambda: plugin.execute(params, context=context)
)
else:
# Backward compatibility: old-style plugins without context
if asyncio.iscoroutinefunction(plugin.execute):
task.result = await plugin.execute(params)
else:
task.result = await self.loop.run_in_executor(
self.executor,
plugin.execute,
params
)
logger.info(f"Task {task_id} completed successfully")
task.status = TaskStatus.SUCCESS
self._add_log(task_id, "INFO", f"Task completed successfully for plugin '{plugin.name}'")
self._add_log(task_id, "INFO", f"Task completed successfully for plugin '{plugin.name}'", source="system")
except Exception as e:
logger.error(f"Task {task_id} failed: {e}")
task.status = TaskStatus.FAILED
self._add_log(task_id, "ERROR", f"Task failed: {e}", {"error_type": type(e).__name__})
self._add_log(task_id, "ERROR", f"Task failed: {e}", source="system", metadata={"error_type": type(e).__name__})
finally:
task.finished_at = datetime.utcnow()
# Flush any remaining buffered logs before persisting task
self._flush_task_logs(task_id)
self.persistence_service.persist_task(task)
logger.info(f"Task {task_id} execution finished with status: {task.status}")
# [/DEF:_run_task:Function]
@@ -151,7 +248,8 @@ class TaskManager:
async def wait_for_resolution(self, task_id: str):
with belief_scope("TaskManager.wait_for_resolution", f"task_id={task_id}"):
task = self.tasks.get(task_id)
if not task: return
if not task:
return
task.status = TaskStatus.AWAITING_MAPPING
self.persistence_service.persist_task(task)
@@ -172,7 +270,8 @@ class TaskManager:
async def wait_for_input(self, task_id: str):
with belief_scope("TaskManager.wait_for_input", f"task_id={task_id}"):
task = self.tasks.get(task_id)
if not task: return
if not task:
return
# Status is already set to AWAITING_INPUT by await_input()
self.task_futures[task_id] = self.loop.create_future()
@@ -224,36 +323,106 @@ class TaskManager:
# [/DEF:get_tasks:Function]
# [DEF:get_task_logs:Function]
# @PURPOSE: Retrieves logs for a specific task.
# @PURPOSE: Retrieves logs for a specific task (from memory for running, persistence for completed).
# @PRE: task_id is a string.
# @POST: Returns list of LogEntry objects.
# @POST: Returns list of LogEntry or TaskLog objects.
# @PARAM: task_id (str) - ID of the task.
# @PARAM: log_filter (Optional[LogFilter]) - Filter parameters.
# @RETURN: List[LogEntry] - List of log entries.
def get_task_logs(self, task_id: str) -> List[LogEntry]:
def get_task_logs(self, task_id: str, log_filter: Optional[LogFilter] = None) -> List[LogEntry]:
with belief_scope("TaskManager.get_task_logs", f"task_id={task_id}"):
task = self.tasks.get(task_id)
# For completed tasks, fetch from persistence
if task and task.status in [TaskStatus.SUCCESS, TaskStatus.FAILED]:
if log_filter is None:
log_filter = LogFilter()
task_logs = self.log_persistence_service.get_logs(task_id, log_filter)
# Convert TaskLog to LogEntry for backward compatibility
return [
LogEntry(
timestamp=log.timestamp,
level=log.level,
message=log.message,
source=log.source,
metadata=log.metadata
)
for log in task_logs
]
# For running/pending tasks, return from memory
return task.logs if task else []
# [/DEF:get_task_logs:Function]
# [DEF:get_task_log_stats:Function]
# @PURPOSE: Get statistics about logs for a task.
# @PRE: task_id is a valid task ID.
# @POST: Returns LogStats with counts by level and source.
# @PARAM: task_id (str) - The task ID.
# @RETURN: LogStats - Statistics about task logs.
def get_task_log_stats(self, task_id: str) -> LogStats:
with belief_scope("TaskManager.get_task_log_stats", f"task_id={task_id}"):
return self.log_persistence_service.get_log_stats(task_id)
# [/DEF:get_task_log_stats:Function]
# [DEF:get_task_log_sources:Function]
# @PURPOSE: Get unique sources for a task's logs.
# @PRE: task_id is a valid task ID.
# @POST: Returns list of unique source strings.
# @PARAM: task_id (str) - The task ID.
# @RETURN: List[str] - Unique source names.
def get_task_log_sources(self, task_id: str) -> List[str]:
with belief_scope("TaskManager.get_task_log_sources", f"task_id={task_id}"):
return self.log_persistence_service.get_sources(task_id)
# [/DEF:get_task_log_sources:Function]
# [DEF:_add_log:Function]
# @PURPOSE: Adds a log entry to a task and notifies subscribers.
# @PURPOSE: Adds a log entry to a task buffer and notifies subscribers.
# @PRE: Task exists.
# @POST: Log added to task and pushed to queues.
# @POST: Log added to buffer and pushed to queues (if level meets task_log_level filter).
# @PARAM: task_id (str) - ID of the task.
# @PARAM: level (str) - Log level.
# @PARAM: message (str) - Log message.
# @PARAM: context (Optional[Dict]) - Log context.
def _add_log(self, task_id: str, level: str, message: str, context: Optional[Dict[str, Any]] = None):
# @PARAM: source (str) - Source component (default: "system").
# @PARAM: metadata (Optional[Dict]) - Additional structured data.
# @PARAM: context (Optional[Dict]) - Legacy context (for backward compatibility).
def _add_log(
self,
task_id: str,
level: str,
message: str,
source: str = "system",
metadata: Optional[Dict[str, Any]] = None,
context: Optional[Dict[str, Any]] = None
):
with belief_scope("TaskManager._add_log", f"task_id={task_id}"):
task = self.tasks.get(task_id)
if not task:
return
log_entry = LogEntry(level=level, message=message, context=context)
task.logs.append(log_entry)
self.persistence_service.persist_task(task)
# Filter logs based on task_log_level configuration
if not should_log_task_level(level):
return
# Notify subscribers
# Create log entry with new fields
log_entry = LogEntry(
level=level,
message=message,
source=source,
metadata=metadata,
context=context # Keep for backward compatibility
)
# Add to in-memory logs (for backward compatibility with legacy JSON field)
task.logs.append(log_entry)
# Add to buffer for batch persistence
with self._log_buffer_lock:
if task_id not in self._log_buffer:
self._log_buffer[task_id] = []
self._log_buffer[task_id].append(log_entry)
# Notify subscribers (for real-time WebSocket updates)
if task_id in self.subscribers:
for queue in self.subscribers[task_id]:
self.loop.call_soon_threadsafe(queue.put_nowait, log_entry)
@@ -353,7 +522,7 @@ class TaskManager:
# [/DEF:resume_task_with_password:Function]
# [DEF:clear_tasks:Function]
# @PURPOSE: Clears tasks based on status filter.
# @PURPOSE: Clears tasks based on status filter (also deletes associated logs).
# @PRE: status is Optional[TaskStatus].
# @POST: Tasks matching filter (or all non-active) cleared from registry and database.
# @PARAM: status (Optional[TaskStatus]) - Filter by task status.
@@ -387,9 +556,13 @@ class TaskManager:
del self.tasks[tid]
# Remove from persistence
# Remove from persistence (task_records and task_logs via CASCADE)
self.persistence_service.delete_tasks(tasks_to_remove)
# Also explicitly delete logs (in case CASCADE is not set up)
if tasks_to_remove:
self.log_persistence_service.delete_logs_for_tasks(tasks_to_remove)
logger.info(f"Cleared {len(tasks_to_remove)} tasks.")
return len(tasks_to_remove)
# [/DEF:clear_tasks:Function]

View File

@@ -1,4 +1,5 @@
# [DEF:TaskManagerModels:Module]
# @TIER: STANDARD
# @SEMANTICS: task, models, pydantic, enum, state
# @PURPOSE: Defines the data models and enumerations used by the Task Manager.
# @LAYER: Core
@@ -16,6 +17,7 @@ from pydantic import BaseModel, Field
# [/SECTION]
# [DEF:TaskStatus:Enum]
# @TIER: TRIVIAL
# @SEMANTICS: task, status, state, enum
# @PURPOSE: Defines the possible states a task can be in during its lifecycle.
class TaskStatus(str, Enum):
@@ -27,17 +29,73 @@ class TaskStatus(str, Enum):
AWAITING_INPUT = "AWAITING_INPUT"
# [/DEF:TaskStatus:Enum]
# [DEF:LogLevel:Enum]
# @SEMANTICS: log, level, severity, enum
# @PURPOSE: Defines the possible log levels for task logging.
# @TIER: STANDARD
class LogLevel(str, Enum):
DEBUG = "DEBUG"
INFO = "INFO"
WARNING = "WARNING"
ERROR = "ERROR"
# [/DEF:LogLevel:Enum]
# [DEF:LogEntry:Class]
# @SEMANTICS: log, entry, record, pydantic
# @PURPOSE: A Pydantic model representing a single, structured log entry associated with a task.
# @TIER: CRITICAL
# @INVARIANT: Each log entry has a unique timestamp and source.
class LogEntry(BaseModel):
timestamp: datetime = Field(default_factory=datetime.utcnow)
level: str
level: str = Field(default="INFO")
message: str
context: Optional[Dict[str, Any]] = None
source: str = Field(default="system") # Component attribution: plugin, superset_api, git, etc.
context: Optional[Dict[str, Any]] = None # Legacy field, kept for backward compatibility
metadata: Optional[Dict[str, Any]] = None # Structured metadata (e.g., dashboard_id, progress)
# [/DEF:LogEntry:Class]
# [DEF:TaskLog:Class]
# @SEMANTICS: task, log, persistent, pydantic
# @PURPOSE: A Pydantic model representing a persisted log entry from the database.
# @TIER: STANDARD
# @RELATION: MAPS_TO -> TaskLogRecord
class TaskLog(BaseModel):
id: int
task_id: str
timestamp: datetime
level: str
source: str
message: str
metadata: Optional[Dict[str, Any]] = None
class Config:
from_attributes = True
# [/DEF:TaskLog:Class]
# [DEF:LogFilter:Class]
# @SEMANTICS: log, filter, query, pydantic
# @PURPOSE: Filter parameters for querying task logs.
# @TIER: STANDARD
class LogFilter(BaseModel):
level: Optional[str] = None # Filter by log level
source: Optional[str] = None # Filter by source component
search: Optional[str] = None # Text search in message
offset: int = Field(default=0, ge=0)
limit: int = Field(default=100, ge=1, le=1000)
# [/DEF:LogFilter:Class]
# [DEF:LogStats:Class]
# @SEMANTICS: log, stats, aggregation, pydantic
# @PURPOSE: Statistics about log entries for a task.
# @TIER: STANDARD
class LogStats(BaseModel):
total_count: int
by_level: Dict[str, int] # {"INFO": 10, "ERROR": 2}
by_source: Dict[str, int] # {"plugin": 5, "superset_api": 7}
# [/DEF:LogStats:Class]
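Typical filter objects for the new log query API might look like this (the values are illustrative):

```python
# Illustrative filters passed to TaskManager.get_task_logs / TaskLogPersistenceService.get_logs.
errors_only = LogFilter(level="ERROR", limit=50)
api_timeouts = LogFilter(source="superset_api", search="timeout", offset=0, limit=100)
```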
# [DEF:Task:Class]
# @TIER: STANDARD
# @SEMANTICS: task, job, execution, state, pydantic
# @PURPOSE: A Pydantic model representing a single execution instance of a plugin, including its status, parameters, and logs.
class Task(BaseModel):

View File

@@ -7,13 +7,13 @@
# [SECTION: IMPORTS]
from datetime import datetime
from typing import List, Optional, Dict, Any
from typing import List, Optional
import json
from sqlalchemy.orm import Session
from ...models.task import TaskRecord
from ...models.task import TaskRecord, TaskLogRecord
from ..database import TasksSessionLocal
from .models import Task, TaskStatus, LogEntry
from .models import Task, TaskStatus, LogEntry, TaskLog, LogFilter, LogStats
from ..logger import logger, belief_scope
# [/SECTION]
@@ -170,4 +170,215 @@ class TaskPersistenceService:
# [/DEF:delete_tasks:Function]
# [/DEF:TaskPersistenceService:Class]
# [DEF:TaskLogPersistenceService:Class]
# @SEMANTICS: persistence, service, database, log, sqlalchemy
# @PURPOSE: Provides methods to save and query task logs from the task_logs table.
# @TIER: CRITICAL
# @RELATION: DEPENDS_ON -> TaskLogRecord
# @INVARIANT: Log entries are batch-inserted for performance.
class TaskLogPersistenceService:
"""
Service for persisting and querying task logs.
Supports batch inserts, filtering, and statistics.
"""
# [DEF:__init__:Function]
# @PURPOSE: Initialize the log persistence service.
# @POST: Service is ready.
def __init__(self):
pass
# [/DEF:__init__:Function]
# [DEF:add_logs:Function]
# @PURPOSE: Batch insert log entries for a task.
# @PRE: logs is a list of LogEntry objects.
# @POST: All logs inserted into task_logs table.
# @PARAM: task_id (str) - The task ID.
# @PARAM: logs (List[LogEntry]) - Log entries to insert.
# @SIDE_EFFECT: Writes to task_logs table.
def add_logs(self, task_id: str, logs: List[LogEntry]) -> None:
if not logs:
return
with belief_scope("TaskLogPersistenceService.add_logs", f"task_id={task_id}"):
session: Session = TasksSessionLocal()
try:
for log in logs:
record = TaskLogRecord(
task_id=task_id,
timestamp=log.timestamp,
level=log.level,
source=log.source or "system",
message=log.message,
metadata_json=json.dumps(log.metadata) if log.metadata else None
)
session.add(record)
session.commit()
except Exception as e:
session.rollback()
logger.error(f"Failed to add logs for task {task_id}: {e}")
finally:
session.close()
# [/DEF:add_logs:Function]
# [DEF:get_logs:Function]
# @PURPOSE: Query logs for a task with filtering and pagination.
# @PRE: task_id is a valid task ID.
# @POST: Returns list of TaskLog objects matching filters.
# @PARAM: task_id (str) - The task ID.
# @PARAM: log_filter (LogFilter) - Filter parameters.
# @RETURN: List[TaskLog] - Filtered log entries.
def get_logs(self, task_id: str, log_filter: LogFilter) -> List[TaskLog]:
with belief_scope("TaskLogPersistenceService.get_logs", f"task_id={task_id}"):
session: Session = TasksSessionLocal()
try:
query = session.query(TaskLogRecord).filter(TaskLogRecord.task_id == task_id)
# Apply filters
if log_filter.level:
query = query.filter(TaskLogRecord.level == log_filter.level.upper())
if log_filter.source:
query = query.filter(TaskLogRecord.source == log_filter.source)
if log_filter.search:
search_pattern = f"%{log_filter.search}%"
query = query.filter(TaskLogRecord.message.ilike(search_pattern))
# Order by timestamp ascending (oldest first)
query = query.order_by(TaskLogRecord.timestamp.asc())
# Apply pagination
records = query.offset(log_filter.offset).limit(log_filter.limit).all()
logs = []
for record in records:
metadata = None
if record.metadata_json:
try:
metadata = json.loads(record.metadata_json)
except json.JSONDecodeError:
metadata = None
logs.append(TaskLog(
id=record.id,
task_id=record.task_id,
timestamp=record.timestamp,
level=record.level,
source=record.source,
message=record.message,
metadata=metadata
))
return logs
finally:
session.close()
# [/DEF:get_logs:Function]
# [DEF:get_log_stats:Function]
# @PURPOSE: Get statistics about logs for a task.
# @PRE: task_id is a valid task ID.
# @POST: Returns LogStats with counts by level and source.
# @PARAM: task_id (str) - The task ID.
# @RETURN: LogStats - Statistics about task logs.
def get_log_stats(self, task_id: str) -> LogStats:
with belief_scope("TaskLogPersistenceService.get_log_stats", f"task_id={task_id}"):
session: Session = TasksSessionLocal()
try:
# Get total count
total_count = session.query(TaskLogRecord).filter(
TaskLogRecord.task_id == task_id
).count()
# Get counts by level
from sqlalchemy import func
level_counts = session.query(
TaskLogRecord.level,
func.count(TaskLogRecord.id)
).filter(
TaskLogRecord.task_id == task_id
).group_by(TaskLogRecord.level).all()
by_level = {level: count for level, count in level_counts}
# Get counts by source
source_counts = session.query(
TaskLogRecord.source,
func.count(TaskLogRecord.id)
).filter(
TaskLogRecord.task_id == task_id
).group_by(TaskLogRecord.source).all()
by_source = {source: count for source, count in source_counts}
return LogStats(
total_count=total_count,
by_level=by_level,
by_source=by_source
)
finally:
session.close()
# [/DEF:get_log_stats:Function]
# [DEF:get_sources:Function]
# @PURPOSE: Get unique sources for a task's logs.
# @PRE: task_id is a valid task ID.
# @POST: Returns list of unique source strings.
# @PARAM: task_id (str) - The task ID.
# @RETURN: List[str] - Unique source names.
def get_sources(self, task_id: str) -> List[str]:
with belief_scope("TaskLogPersistenceService.get_sources", f"task_id={task_id}"):
session: Session = TasksSessionLocal()
try:
from sqlalchemy import distinct
sources = session.query(distinct(TaskLogRecord.source)).filter(
TaskLogRecord.task_id == task_id
).all()
return [s[0] for s in sources]
finally:
session.close()
# [/DEF:get_sources:Function]
# [DEF:delete_logs_for_task:Function]
# @PURPOSE: Delete all logs for a specific task.
# @PRE: task_id is a valid task ID.
# @POST: All logs for the task are deleted.
# @PARAM: task_id (str) - The task ID.
# @SIDE_EFFECT: Deletes from task_logs table.
def delete_logs_for_task(self, task_id: str) -> None:
with belief_scope("TaskLogPersistenceService.delete_logs_for_task", f"task_id={task_id}"):
session: Session = TasksSessionLocal()
try:
session.query(TaskLogRecord).filter(
TaskLogRecord.task_id == task_id
).delete(synchronize_session=False)
session.commit()
except Exception as e:
session.rollback()
logger.error(f"Failed to delete logs for task {task_id}: {e}")
finally:
session.close()
# [/DEF:delete_logs_for_task:Function]
# [DEF:delete_logs_for_tasks:Function]
# @PURPOSE: Delete all logs for multiple tasks.
# @PRE: task_ids is a list of task IDs.
# @POST: All logs for the tasks are deleted.
# @PARAM: task_ids (List[str]) - List of task IDs.
def delete_logs_for_tasks(self, task_ids: List[str]) -> None:
if not task_ids:
return
with belief_scope("TaskLogPersistenceService.delete_logs_for_tasks"):
session: Session = TasksSessionLocal()
try:
session.query(TaskLogRecord).filter(
TaskLogRecord.task_id.in_(task_ids)
).delete(synchronize_session=False)
session.commit()
except Exception as e:
session.rollback()
logger.error(f"Failed to delete logs for tasks: {e}")
finally:
session.close()
# [/DEF:delete_logs_for_tasks:Function]
# [/DEF:TaskLogPersistenceService:Class]
# [/DEF:TaskPersistenceModule:Module]
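A minimal round-trip through the new persistence service, assuming the `task_logs` table exists; in practice only `TaskManager` talks to this service directly:

```python
# Sketch: direct use of TaskLogPersistenceService for a single task.
svc = TaskLogPersistenceService()
svc.add_logs("task-123", [LogEntry(level="INFO", message="step 1", source="plugin")])
logs = svc.get_logs("task-123", LogFilter(level="INFO"))
stats = svc.get_log_stats("task-123")   # e.g. LogStats(total_count=1, by_level={"INFO": 1}, by_source={"plugin": 1})
sources = svc.get_sources("task-123")   # ["plugin"]
```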

View File

@@ -0,0 +1,167 @@
# [DEF:TaskLoggerModule:Module]
# @SEMANTICS: task, logger, context, plugin, attribution
# @PURPOSE: Provides a dedicated logger for tasks with automatic source attribution.
# @LAYER: Core
# @RELATION: DEPENDS_ON -> TaskManager, CALLS -> TaskManager._add_log
# @TIER: CRITICAL
# @INVARIANT: Each TaskLogger instance is bound to a specific task_id and default source.
# [SECTION: IMPORTS]
from typing import Dict, Any, Optional, Callable
# [/SECTION]
# [DEF:TaskLogger:Class]
# @SEMANTICS: logger, task, source, attribution
# @PURPOSE: A wrapper around TaskManager._add_log that carries task_id and source context.
# @TIER: CRITICAL
# @INVARIANT: All log calls include the task_id and source.
# @UX_STATE: Idle -> Logging -> (system records log)
class TaskLogger:
"""
A dedicated logger for tasks that automatically tags logs with source attribution.
Usage:
logger = TaskLogger(task_id="abc123", add_log_fn=task_manager._add_log, source="plugin")
logger.info("Starting backup process")
logger.error("Failed to connect", metadata={"error_code": 500})
# Create sub-logger with different source
api_logger = logger.with_source("superset_api")
api_logger.info("Fetching dashboards")
"""
# [DEF:__init__:Function]
# @PURPOSE: Initialize the TaskLogger with task context.
# @PRE: add_log_fn is a callable that accepts (task_id, level, message, context, source, metadata).
# @POST: TaskLogger is ready to log messages.
# @PARAM: task_id (str) - The ID of the task.
# @PARAM: add_log_fn (Callable) - Function to add log to TaskManager.
# @PARAM: source (str) - Default source for logs (default: "plugin").
def __init__(
self,
task_id: str,
add_log_fn: Callable,
source: str = "plugin"
):
self._task_id = task_id
self._add_log = add_log_fn
self._default_source = source
# [/DEF:__init__:Function]
# [DEF:with_source:Function]
# @PURPOSE: Create a sub-logger with a different default source.
# @PRE: source is a non-empty string.
# @POST: Returns new TaskLogger with the same task_id but different source.
# @PARAM: source (str) - New default source.
# @RETURN: TaskLogger - New logger instance.
def with_source(self, source: str) -> "TaskLogger":
"""Create a sub-logger with a different source context."""
return TaskLogger(
task_id=self._task_id,
add_log_fn=self._add_log,
source=source
)
# [/DEF:with_source:Function]
# [DEF:_log:Function]
# @PURPOSE: Internal method to log a message at a given level.
# @PRE: level is a valid log level string.
# @POST: Log entry added via add_log_fn.
# @PARAM: level (str) - Log level (DEBUG, INFO, WARNING, ERROR).
# @PARAM: message (str) - Log message.
# @PARAM: source (Optional[str]) - Override source for this log entry.
# @PARAM: metadata (Optional[Dict]) - Additional structured data.
def _log(
self,
level: str,
message: str,
source: Optional[str] = None,
metadata: Optional[Dict[str, Any]] = None
) -> None:
"""Internal logging method."""
self._add_log(
task_id=self._task_id,
level=level,
message=message,
source=source or self._default_source,
metadata=metadata
)
# [/DEF:_log:Function]
# [DEF:debug:Function]
# @PURPOSE: Log a DEBUG level message.
# @PARAM: message (str) - Log message.
# @PARAM: source (Optional[str]) - Override source.
# @PARAM: metadata (Optional[Dict]) - Additional data.
def debug(
self,
message: str,
source: Optional[str] = None,
metadata: Optional[Dict[str, Any]] = None
) -> None:
self._log("DEBUG", message, source, metadata)
# [/DEF:debug:Function]
# [DEF:info:Function]
# @PURPOSE: Log an INFO level message.
# @PARAM: message (str) - Log message.
# @PARAM: source (Optional[str]) - Override source.
# @PARAM: metadata (Optional[Dict]) - Additional data.
def info(
self,
message: str,
source: Optional[str] = None,
metadata: Optional[Dict[str, Any]] = None
) -> None:
self._log("INFO", message, source, metadata)
# [/DEF:info:Function]
# [DEF:warning:Function]
# @PURPOSE: Log a WARNING level message.
# @PARAM: message (str) - Log message.
# @PARAM: source (Optional[str]) - Override source.
# @PARAM: metadata (Optional[Dict]) - Additional data.
def warning(
self,
message: str,
source: Optional[str] = None,
metadata: Optional[Dict[str, Any]] = None
) -> None:
self._log("WARNING", message, source, metadata)
# [/DEF:warning:Function]
# [DEF:error:Function]
# @PURPOSE: Log an ERROR level message.
# @PARAM: message (str) - Log message.
# @PARAM: source (Optional[str]) - Override source.
# @PARAM: metadata (Optional[Dict]) - Additional data.
def error(
self,
message: str,
source: Optional[str] = None,
metadata: Optional[Dict[str, Any]] = None
) -> None:
self._log("ERROR", message, source, metadata)
# [/DEF:error:Function]
# [DEF:progress:Function]
# @PURPOSE: Log a progress update with percentage.
# @PRE: percent is between 0 and 100.
# @POST: Log entry with progress metadata added.
# @PARAM: message (str) - Progress message.
# @PARAM: percent (float) - Progress percentage (0-100).
# @PARAM: source (Optional[str]) - Override source.
def progress(
self,
message: str,
percent: float,
source: Optional[str] = None
) -> None:
"""Log a progress update with percentage."""
metadata = {"progress": min(100, max(0, percent))}
self._log("INFO", message, source, metadata)
# [/DEF:progress:Function]
# [/DEF:TaskLogger:Class]
# [/DEF:TaskLoggerModule:Module]
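The new `progress` helper clamps the percentage and attaches it as metadata; an illustrative fragment (the `task_logger` instance is assumed to come from a TaskContext):

```python
# Illustrative loop reporting progress; percent is clamped to [0, 100] by progress().
items = ["sales", "finance", "ops"]
for i, name in enumerate(items, start=1):
    task_logger.progress(f"Exported dashboard '{name}'", percent=100 * i / len(items))
```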

View File

@@ -11,7 +11,7 @@
# [SECTION: IMPORTS]
import pandas as pd # type: ignore
import psycopg2 # type: ignore
from typing import Dict, List, Optional, Any
from typing import Dict, Optional, Any
from ..logger import logger as app_logger, belief_scope
# [/SECTION]

View File

@@ -19,7 +19,6 @@ from datetime import date, datetime
import shutil
import zlib
from dataclasses import dataclass
import yaml
from ..logger import logger as app_logger, belief_scope
# [/SECTION]

View File

@@ -42,6 +42,8 @@ def suggest_mappings(source_databases: List[Dict], target_databases: List[Dict],
name, score, index = match
if score >= threshold:
suggestions.append({
"source_db": s_db['database_name'],
"target_db": target_databases[index]['database_name'],
"source_db_uuid": s_db['uuid'],
"target_db_uuid": target_databases[index]['uuid'],
"confidence": score / 100.0

View File

@@ -118,14 +118,41 @@ class APIClient:
def _init_session(self) -> requests.Session:
with belief_scope("_init_session"):
session = requests.Session()
# Create a custom adapter that handles TLS issues
class TLSAdapter(HTTPAdapter):
def init_poolmanager(self, connections, maxsize, block=False):
from urllib3.poolmanager import PoolManager
import ssl
# Create an SSL context that ignores TLSv1 unrecognized name errors
ctx = ssl.create_default_context()
ctx.set_ciphers('HIGH:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!SRP:!CAMELLIA')
# Ignore TLSV1_UNRECOGNIZED_NAME errors by disabling hostname verification
# This is safe when verify_ssl is false (we're already not verifying the certificate)
ctx.check_hostname = False
self.poolmanager = PoolManager(
num_pools=connections,
maxsize=maxsize,
block=block,
ssl_context=ctx
)
retries = Retry(total=3, backoff_factor=0.5, status_forcelist=[500, 502, 503, 504])
adapter = HTTPAdapter(max_retries=retries)
adapter = TLSAdapter(max_retries=retries)
session.mount('http://', adapter)
session.mount('https://', adapter)
if not self.request_settings["verify_ssl"]:
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
app_logger.warning("[_init_session][State] SSL verification disabled.")
session.verify = self.request_settings["verify_ssl"]
# When verify_ssl is false, we should also disable hostname verification
session.verify = False
else:
session.verify = True
return session
# [/DEF:_init_session:Function]
@@ -177,7 +204,8 @@ class APIClient:
# @POST: Returns headers including auth tokens.
def headers(self) -> Dict[str, str]:
with belief_scope("headers"):
if not self._authenticated: self.authenticate()
if not self._authenticated:
self.authenticate()
return {
"Authorization": f"Bearer {self._tokens['access_token']}",
"X-CSRFToken": self._tokens.get("csrf_token", ""),
@@ -200,7 +228,8 @@ class APIClient:
with belief_scope("request"):
full_url = f"{self.base_url}{endpoint}"
_headers = self.headers.copy()
if headers: _headers.update(headers)
if headers:
_headers.update(headers)
try:
response = self.session.request(method, full_url, headers=_headers, **kwargs)
@@ -223,9 +252,12 @@ class APIClient:
status_code = e.response.status_code
if status_code == 502 or status_code == 503 or status_code == 504:
raise NetworkError(f"Environment unavailable (Status {status_code})", status_code=status_code) from e
if status_code == 404: raise DashboardNotFoundError(endpoint) from e
if status_code == 403: raise PermissionDeniedError() from e
if status_code == 401: raise AuthenticationError() from e
if status_code == 404:
raise DashboardNotFoundError(endpoint) from e
if status_code == 403:
raise PermissionDeniedError() from e
if status_code == 401:
raise AuthenticationError() from e
raise SupersetAPIError(f"API Error {status_code}: {e.response.text}") from e
# [/DEF:_handle_http_error:Function]
@@ -237,9 +269,12 @@ class APIClient:
# @POST: Raises a NetworkError.
def _handle_network_error(self, e: requests.exceptions.RequestException, url: str):
with belief_scope("_handle_network_error"):
if isinstance(e, requests.exceptions.Timeout): msg = "Request timeout"
elif isinstance(e, requests.exceptions.ConnectionError): msg = "Connection error"
else: msg = f"Unknown network error: {e}"
if isinstance(e, requests.exceptions.Timeout):
msg = "Request timeout"
elif isinstance(e, requests.exceptions.ConnectionError):
msg = "Connection error"
else:
msg = f"Unknown network error: {e}"
raise NetworkError(msg, url=url) from e
# [/DEF:_handle_network_error:Function]
@@ -256,7 +291,9 @@ class APIClient:
def upload_file(self, endpoint: str, file_info: Dict[str, Any], extra_data: Optional[Dict] = None, timeout: Optional[int] = None) -> Dict:
with belief_scope("upload_file"):
full_url = f"{self.base_url}{endpoint}"
_headers = self.headers.copy(); _headers.pop('Content-Type', None)
_headers = self.headers.copy()
_headers.pop('Content-Type', None)
file_obj, file_name, form_field = file_info.get("file_obj"), file_info.get("file_name"), file_info.get("form_field", "file")
@@ -318,20 +355,40 @@ class APIClient:
    # @PURPOSE: Automatically collects data from all pages of a paginated endpoint.
    # @PARAM: endpoint (str) - The endpoint.
    # @PARAM: pagination_options (Dict[str, Any]) - Pagination options.
# @PRE: pagination_options must contain 'base_query', 'total_count', 'results_field'.
# @PRE: pagination_options must contain 'base_query', 'results_field'. 'total_count' is optional.
# @POST: Returns all items across all pages.
    # @RETURN: List[Any] - The list of data.
def fetch_paginated_data(self, endpoint: str, pagination_options: Dict[str, Any]) -> List[Any]:
with belief_scope("fetch_paginated_data"):
base_query, total_count = pagination_options["base_query"], pagination_options["total_count"]
results_field, page_size = pagination_options["results_field"], base_query.get('page_size')
assert page_size and page_size > 0, "'page_size' must be a positive number."
base_query = pagination_options["base_query"]
total_count = pagination_options.get("total_count")
results_field = pagination_options["results_field"]
count_field = pagination_options.get("count_field", "count")
page_size = base_query.get('page_size', 1000)
assert page_size > 0, "'page_size' must be a positive number."
results = []
for page in range((total_count + page_size - 1) // page_size):
page = 0
# Fetch first page to get data and total count if not provided
query = {**base_query, 'page': page}
response_json = cast(Dict[str, Any], self.request("GET", endpoint, params={"q": json.dumps(query)}))
first_page_results = response_json.get(results_field, [])
results.extend(first_page_results)
if total_count is None:
total_count = response_json.get(count_field, len(first_page_results))
app_logger.debug(f"[fetch_paginated_data][State] Total count resolved from first page: {total_count}")
# Fetch remaining pages
total_pages = (total_count + page_size - 1) // page_size
for page in range(1, total_pages):
query = {**base_query, 'page': page}
response_json = cast(Dict[str, Any], self.request("GET", endpoint, params={"q": json.dumps(query)}))
results.extend(response_json.get(results_field, []))
return results
# [/DEF:fetch_paginated_data:Function]
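With the relaxed precondition, 'total_count' may be omitted and is resolved from the first page's count field. A hedged call sketch, assuming an authenticated APIClient instance (the endpoint and query values are illustrative):

```python
dashboards = client.fetch_paginated_data(
    "/dashboard/",  # hypothetical endpoint
    {
        "base_query": {"page_size": 100},  # page_size defaults to 1000 when omitted
        "results_field": "result",
        # "total_count" is optional; when absent it is read from the response's
        # count field ("count" by default, override via "count_field").
    },
)
```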

View File

@@ -1,11 +1,10 @@
# [DEF:Dependencies:Module]
# @SEMANTICS: dependency, injection, singleton, factory, auth, jwt
# @PURPOSE: Manages the creation and provision of shared application dependencies, such as the PluginLoader and TaskManager, to avoid circular imports.
# @PURPOSE: Manages creation and provision of shared application dependencies, such as PluginLoader and TaskManager, to avoid circular imports.
# @LAYER: Core
# @RELATION: Used by the main app and API routers to get access to shared instances.
# @RELATION: Used by main app and API routers to get access to shared instances.
from pathlib import Path
from typing import Optional
from fastapi import Depends, HTTPException, status
from fastapi.security import OAuth2PasswordBearer
from jose import JWTError
@@ -13,8 +12,10 @@ from .core.plugin_loader import PluginLoader
from .core.task_manager import TaskManager
from .core.config_manager import ConfigManager
from .core.scheduler import SchedulerService
from .services.resource_service import ResourceService
from .services.mapping_service import MappingService
from .core.database import init_db, get_auth_db
from .core.logger import logger, belief_scope
from .core.logger import logger
from .core.auth.jwt import decode_token
from .core.auth.repository import AuthRepository
from .models.auth import User
@@ -29,12 +30,12 @@ config_manager = ConfigManager(config_path=str(config_path))
init_db()
# [DEF:get_config_manager:Function]
# @PURPOSE: Dependency injector for the ConfigManager.
# @PURPOSE: Dependency injector for ConfigManager.
# @PRE: Global config_manager must be initialized.
# @POST: Returns shared ConfigManager instance.
# @RETURN: ConfigManager - The shared config manager instance.
def get_config_manager() -> ConfigManager:
"""Dependency injector for the ConfigManager."""
"""Dependency injector for ConfigManager."""
return config_manager
# [/DEF:get_config_manager:Function]
@@ -50,45 +51,68 @@ logger.info("TaskManager initialized")
scheduler_service = SchedulerService(task_manager, config_manager)
logger.info("SchedulerService initialized")
resource_service = ResourceService()
logger.info("ResourceService initialized")
# [DEF:get_plugin_loader:Function]
# @PURPOSE: Dependency injector for the PluginLoader.
# @PURPOSE: Dependency injector for PluginLoader.
# @PRE: Global plugin_loader must be initialized.
# @POST: Returns shared PluginLoader instance.
# @RETURN: PluginLoader - The shared plugin loader instance.
def get_plugin_loader() -> PluginLoader:
"""Dependency injector for the PluginLoader."""
"""Dependency injector for PluginLoader."""
return plugin_loader
# [/DEF:get_plugin_loader:Function]
# [DEF:get_task_manager:Function]
# @PURPOSE: Dependency injector for the TaskManager.
# @PURPOSE: Dependency injector for TaskManager.
# @PRE: Global task_manager must be initialized.
# @POST: Returns shared TaskManager instance.
# @RETURN: TaskManager - The shared task manager instance.
def get_task_manager() -> TaskManager:
"""Dependency injector for the TaskManager."""
"""Dependency injector for TaskManager."""
return task_manager
# [/DEF:get_task_manager:Function]
# [DEF:get_scheduler_service:Function]
# @PURPOSE: Dependency injector for the SchedulerService.
# @PURPOSE: Dependency injector for SchedulerService.
# @PRE: Global scheduler_service must be initialized.
# @POST: Returns shared SchedulerService instance.
# @RETURN: SchedulerService - The shared scheduler service instance.
def get_scheduler_service() -> SchedulerService:
"""Dependency injector for the SchedulerService."""
"""Dependency injector for SchedulerService."""
return scheduler_service
# [/DEF:get_scheduler_service:Function]
# [DEF:get_resource_service:Function]
# @PURPOSE: Dependency injector for ResourceService.
# @PRE: Global resource_service must be initialized.
# @POST: Returns shared ResourceService instance.
# @RETURN: ResourceService - The shared resource service instance.
def get_resource_service() -> ResourceService:
"""Dependency injector for ResourceService."""
return resource_service
# [/DEF:get_resource_service:Function]
# [DEF:get_mapping_service:Function]
# @PURPOSE: Dependency injector for MappingService.
# @PRE: Global config_manager must be initialized.
# @POST: Returns new MappingService instance.
# @RETURN: MappingService - A new mapping service instance.
def get_mapping_service() -> MappingService:
"""Dependency injector for MappingService."""
return MappingService(config_manager)
# [/DEF:get_mapping_service:Function]
# [DEF:oauth2_scheme:Variable]
# @PURPOSE: OAuth2 password bearer scheme for token extraction.
oauth2_scheme = OAuth2PasswordBearer(tokenUrl="/api/auth/login")
# [/DEF:oauth2_scheme:Variable]
# [DEF:get_current_user:Function]
# @PURPOSE: Dependency for retrieving the currently authenticated user from a JWT.
# @PURPOSE: Dependency for retrieving currently authenticated user from a JWT.
# @PRE: JWT token provided in Authorization header.
# @POST: Returns the User object if token is valid.
# @POST: Returns User object if token is valid.
# @THROW: HTTPException 401 if token is invalid or user not found.
# @PARAM: token (str) - Extracted JWT token.
# @PARAM: db (Session) - Auth database session.
@@ -144,4 +168,4 @@ def has_permission(resource: str, action: str):
return permission_checker
# [/DEF:has_permission:Function]
# [/DEF:Dependencies:Module]
# [/DEF:Dependencies:Module]
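The two new injectors plug into routes exactly like the existing ones. A hedged sketch, assuming the import paths used elsewhere in the backend (the route path and handler body are illustrative):

```python
from fastapi import APIRouter, Depends
from src.dependencies import get_resource_service, get_mapping_service
from src.services.resource_service import ResourceService
from src.services.mapping_service import MappingService

router = APIRouter()

@router.get("/api/resources")  # hypothetical route
async def list_resources(
    resources: ResourceService = Depends(get_resource_service),
    mappings: MappingService = Depends(get_mapping_service),
):
    # ResourceService is a shared singleton; MappingService is created per request.
    return {"ok": True}
```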

View File

@@ -0,0 +1,36 @@
# [DEF:test_models:Module]
# @TIER: TRIVIAL
# @PURPOSE: Unit tests for data models
# @LAYER: Domain
# @RELATION: VERIFIES -> src.models
import sys
from pathlib import Path
# Add src to path
sys.path.append(str(Path(__file__).parent.parent.parent.parent / "src"))
from src.core.config_models import Environment
from src.core.logger import belief_scope
# [DEF:test_environment_model:Function]
# @PURPOSE: Tests that Environment model correctly stores values.
# @PRE: Environment class is available.
# @POST: Values are verified.
def test_environment_model():
with belief_scope("test_environment_model"):
env = Environment(
id="test-id",
name="test-env",
url="http://localhost:8088/api/v1",
username="admin",
password="password"
)
assert env.id == "test-id"
assert env.name == "test-env"
assert env.url == "http://localhost:8088/api/v1"
# [/DEF:test_environment_model:Function]
# [/DEF:test_models:Module]

View File

@@ -11,7 +11,7 @@
# [SECTION: IMPORTS]
import uuid
from datetime import datetime
from sqlalchemy import Column, String, Boolean, DateTime, ForeignKey, Table, Enum
from sqlalchemy import Column, String, Boolean, DateTime, ForeignKey, Table
from sqlalchemy.orm import relationship
from .mapping import Base
# [/SECTION]

View File

@@ -1,5 +1,6 @@
# [DEF:backend.src.models.connection:Module]
#
# @TIER: TRIVIAL
# @SEMANTICS: database, connection, configuration, sqlalchemy, sqlite
# @PURPOSE: Defines the database schema for external database connection configurations.
# @LAYER: Domain
@@ -15,6 +16,7 @@ import uuid
# [/SECTION]
# [DEF:ConnectionConfig:Class]
# @TIER: TRIVIAL
# @PURPOSE: Stores credentials for external databases used for column mapping.
class ConnectionConfig(Base):
__tablename__ = "connection_configs"

View File

@@ -1,4 +1,5 @@
# [DEF:GitModels:Module]
# @TIER: TRIVIAL
# @SEMANTICS: git, models, sqlalchemy, database, schema
# @PURPOSE: Git-specific SQLAlchemy models for configuration and repository tracking.
# @LAYER: Model
@@ -7,7 +8,6 @@
import enum
from datetime import datetime
from sqlalchemy import Column, String, Integer, DateTime, Enum, ForeignKey, Boolean
from sqlalchemy.dialects.postgresql import UUID
import uuid
from src.core.database import Base
@@ -26,11 +26,10 @@ class SyncStatus(str, enum.Enum):
DIRTY = "DIRTY"
CONFLICT = "CONFLICT"
# [DEF:GitServerConfig:Class]
# @TIER: TRIVIAL
# @PURPOSE: Configuration for a Git server connection.
class GitServerConfig(Base):
"""
[DEF:GitServerConfig:Class]
Configuration for a Git server connection.
"""
__tablename__ = "git_server_configs"
id = Column(String(36), primary_key=True, default=lambda: str(uuid.uuid4()))
@@ -41,12 +40,12 @@ class GitServerConfig(Base):
default_repository = Column(String(255), nullable=True)
status = Column(Enum(GitStatus), default=GitStatus.UNKNOWN)
last_validated = Column(DateTime, default=datetime.utcnow)
# [/DEF:GitServerConfig:Class]
# [DEF:GitRepository:Class]
# @TIER: TRIVIAL
# @PURPOSE: Tracking for a local Git repository linked to a dashboard.
class GitRepository(Base):
"""
[DEF:GitRepository:Class]
Tracking for a local Git repository linked to a dashboard.
"""
__tablename__ = "git_repositories"
id = Column(String(36), primary_key=True, default=lambda: str(uuid.uuid4()))
@@ -56,12 +55,12 @@ class GitRepository(Base):
local_path = Column(String(255), nullable=False)
current_branch = Column(String(255), default="main")
sync_status = Column(Enum(SyncStatus), default=SyncStatus.CLEAN)
# [/DEF:GitRepository:Class]
# [DEF:DeploymentEnvironment:Class]
# @TIER: TRIVIAL
# @PURPOSE: Target Superset environments for dashboard deployment.
class DeploymentEnvironment(Base):
"""
[DEF:DeploymentEnvironment:Class]
Target Superset environments for dashboard deployment.
"""
__tablename__ = "deployment_environments"
id = Column(String(36), primary_key=True, default=lambda: str(uuid.uuid4()))
@@ -69,5 +68,6 @@ class DeploymentEnvironment(Base):
superset_url = Column(String(255), nullable=False)
superset_token = Column(String(255), nullable=False)
is_active = Column(Boolean, default=True)
# [/DEF:DeploymentEnvironment:Class]
# [/DEF:GitModels:Module]
# [/DEF:GitModels:Module]

View File

@@ -5,7 +5,7 @@
# @LAYER: Domain
# @RELATION: INHERITS_FROM -> backend.src.models.mapping.Base
from sqlalchemy import Column, String, Boolean, DateTime, JSON, Enum, Text
from sqlalchemy import Column, String, Boolean, DateTime, JSON, Text
from datetime import datetime
import uuid
from .mapping import Base

View File

@@ -59,6 +59,7 @@ class DatabaseMapping(Base):
# [/DEF:DatabaseMapping:Class]
# [DEF:MigrationJob:Class]
# @TIER: TRIVIAL
# @PURPOSE: Represents a single migration execution job.
class MigrationJob(Base):
__tablename__ = "migration_jobs"

View File

@@ -1,9 +1,16 @@
# [DEF:backend.src.models.storage:Module]
# @TIER: TRIVIAL
# @SEMANTICS: storage, file, model, pydantic
# @PURPOSE: Data models for the storage system.
# @LAYER: Domain
from datetime import datetime
from enum import Enum
from typing import Optional
from pydantic import BaseModel, Field
# [DEF:FileCategory:Class]
# @TIER: TRIVIAL
# @PURPOSE: Enumeration of supported file categories in the storage system.
class FileCategory(str, Enum):
BACKUP = "backups"
@@ -11,6 +18,7 @@ class FileCategory(str, Enum):
# [/DEF:FileCategory:Class]
# [DEF:StorageConfig:Class]
# @TIER: TRIVIAL
# @PURPOSE: Configuration model for the storage system, defining paths and naming patterns.
class StorageConfig(BaseModel):
root_path: str = Field(default="backups", description="Absolute path to the storage root directory.")
@@ -20,6 +28,7 @@ class StorageConfig(BaseModel):
# [/DEF:StorageConfig:Class]
# [DEF:StoredFile:Class]
# @TIER: TRIVIAL
# @PURPOSE: Data model representing metadata for a file stored in the system.
class StoredFile(BaseModel):
name: str = Field(..., description="Name of the file (including extension).")
@@ -28,4 +37,6 @@ class StoredFile(BaseModel):
created_at: datetime = Field(..., description="Creation timestamp.")
category: FileCategory = Field(..., description="Category of the file.")
mime_type: Optional[str] = Field(None, description="MIME type of the file.")
# [/DEF:StoredFile:Class]
# [/DEF:StoredFile:Class]
# [/DEF:backend.src.models.storage:Module]

View File

@@ -1,5 +1,6 @@
# [DEF:backend.src.models.task:Module]
#
# @TIER: TRIVIAL
# @SEMANTICS: database, task, record, sqlalchemy, sqlite
# @PURPOSE: Defines the database schema for task execution records.
# @LAYER: Domain
@@ -8,13 +9,14 @@
# @INVARIANT: All primary keys are UUID strings.
# [SECTION: IMPORTS]
from sqlalchemy import Column, String, DateTime, JSON, ForeignKey
from sqlalchemy import Column, String, DateTime, JSON, ForeignKey, Text, Integer, Index
from sqlalchemy.sql import func
from .mapping import Base
import uuid
# [/SECTION]
# [DEF:TaskRecord:Class]
# @TIER: TRIVIAL
# @PURPOSE: Represents a persistent record of a task execution.
class TaskRecord(Base):
__tablename__ = "task_records"
@@ -25,11 +27,35 @@ class TaskRecord(Base):
environment_id = Column(String, ForeignKey("environments.id"), nullable=True)
started_at = Column(DateTime(timezone=True), nullable=True)
finished_at = Column(DateTime(timezone=True), nullable=True)
logs = Column(JSON, nullable=True) # Store structured logs as JSON
logs = Column(JSON, nullable=True) # Store structured logs as JSON (legacy, kept for backward compatibility)
error = Column(String, nullable=True)
result = Column(JSON, nullable=True)
created_at = Column(DateTime(timezone=True), server_default=func.now())
params = Column(JSON, nullable=True)
# [/DEF:TaskRecord:Class]
# [DEF:TaskLogRecord:Class]
# @PURPOSE: Represents a single persistent log entry for a task.
# @TIER: CRITICAL
# @RELATION: DEPENDS_ON -> TaskRecord
# @INVARIANT: Each log entry belongs to exactly one task.
class TaskLogRecord(Base):
__tablename__ = "task_logs"
id = Column(Integer, primary_key=True, autoincrement=True)
task_id = Column(String, ForeignKey("task_records.id", ondelete="CASCADE"), nullable=False, index=True)
timestamp = Column(DateTime(timezone=True), nullable=False, index=True)
level = Column(String(16), nullable=False) # INFO, WARNING, ERROR, DEBUG
source = Column(String(64), nullable=False, default="system") # plugin, superset_api, git, etc.
message = Column(Text, nullable=False)
metadata_json = Column(Text, nullable=True) # JSON string for additional metadata
# Composite indexes for efficient filtering
__table_args__ = (
Index('ix_task_logs_task_timestamp', 'task_id', 'timestamp'),
Index('ix_task_logs_task_level', 'task_id', 'level'),
Index('ix_task_logs_task_source', 'task_id', 'source'),
)
# [/DEF:TaskLogRecord:Class]
# [/DEF:backend.src.models.task:Module]
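The composite indexes exist to keep per-task filtering cheap. A hedged query sketch, following the SessionLocal pattern used elsewhere in the codebase (the helper name and import paths are assumptions):

```python
from src.core.database import SessionLocal
from src.models.task import TaskLogRecord

def get_task_errors(task_id: str, limit: int = 100):
    # Filtering by (task_id, level) can use the ix_task_logs_task_level index.
    db = SessionLocal()
    try:
        return (
            db.query(TaskLogRecord)
            .filter(TaskLogRecord.task_id == task_id, TaskLogRecord.level == "ERROR")
            .order_by(TaskLogRecord.timestamp.desc())
            .limit(limit)
            .all()
        )
    finally:
        db.close()
```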

View File

@@ -5,13 +5,14 @@
# @RELATION: IMPLEMENTS -> PluginBase
# @RELATION: DEPENDS_ON -> superset_tool.client
# @RELATION: DEPENDS_ON -> superset_tool.utils
# @RELATION: USES -> TaskContext
from typing import Dict, Any
from typing import Dict, Any, Optional
from pathlib import Path
from requests.exceptions import RequestException
from ..core.plugin_base import PluginBase
from ..core.logger import belief_scope
from ..core.logger import belief_scope, logger as app_logger
from ..core.superset_client import SupersetClient
from ..core.utils.network import SupersetAPIError
from ..core.utils.fileio import (
@@ -23,6 +24,7 @@ from ..core.utils.fileio import (
RetentionPolicy
)
from ..dependencies import get_config_manager
from ..core.task_manager.context import TaskContext
# [DEF:BackupPlugin:Class]
# @PURPOSE: Implementation of the backup plugin logic.
@@ -93,7 +95,7 @@ class BackupPlugin(PluginBase):
with belief_scope("get_schema"):
config_manager = get_config_manager()
envs = [e.name for e in config_manager.get_environments()]
default_path = config_manager.get_config().settings.storage.root_path
config_manager.get_config().settings.storage.root_path
return {
"type": "object",
@@ -110,14 +112,22 @@ class BackupPlugin(PluginBase):
# [/DEF:get_schema:Function]
# [DEF:execute:Function]
# @PURPOSE: Executes the dashboard backup logic.
# @PARAM: params (Dict[str, Any]) - Backup parameters (env, backup_path).
# @PURPOSE: Executes the dashboard backup logic with TaskContext support.
# @PARAM: params (Dict[str, Any]) - Backup parameters (env, backup_path, dashboard_ids).
# @PARAM: context (Optional[TaskContext]) - Task context for logging with source attribution.
# @PRE: Target environment must be configured. params must be a dictionary.
# @POST: All dashboards are exported and archived.
async def execute(self, params: Dict[str, Any]):
async def execute(self, params: Dict[str, Any], context: Optional[TaskContext] = None):
with belief_scope("execute"):
config_manager = get_config_manager()
env_id = params.get("environment_id")
# Support both parameter names: environment_id (for task creation) and env (for direct calls)
env_id = params.get("environment_id") or params.get("env")
dashboard_ids = params.get("dashboard_ids") or params.get("dashboards")
# Log the incoming parameters for debugging
log = context.logger if context else app_logger
log.info(f"Backup parameters received: env_id={env_id}, dashboard_ids={dashboard_ids}")
# Resolve environment name if environment_id is provided
if env_id:
@@ -128,13 +138,21 @@ class BackupPlugin(PluginBase):
env = params.get("env")
if not env:
raise KeyError("env")
log.info(f"Backup started for environment: {env}, selected dashboards: {dashboard_ids}")
storage_settings = config_manager.get_config().settings.storage
# Use 'backups' subfolder within the storage root
backup_path = Path(storage_settings.root_path) / "backups"
from ..core.logger import logger as app_logger
app_logger.info(f"[BackupPlugin][Entry] Starting backup for {env}.")
# Use TaskContext logger if available, otherwise fall back to app_logger
log = context.logger if context else app_logger
# Create sub-loggers for different components
superset_log = log.with_source("superset_api") if context else log
storage_log = log.with_source("storage") if context else log
log.info(f"Starting backup for environment: {env}")
try:
config_manager = get_config_manager()
@@ -147,25 +165,43 @@ class BackupPlugin(PluginBase):
client = SupersetClient(env_config)
dashboard_count, dashboard_meta = client.get_dashboards()
app_logger.info(f"[BackupPlugin][Progress] Found {dashboard_count} dashboards to export in {env}.")
# Get all dashboards
all_dashboard_count, all_dashboard_meta = client.get_dashboards()
superset_log.info(f"Found {all_dashboard_count} total dashboards in environment")
# Filter dashboards if specific IDs are provided
if dashboard_ids:
dashboard_ids_int = [int(did) for did in dashboard_ids]
dashboard_meta = [db for db in all_dashboard_meta if db.get('id') in dashboard_ids_int]
dashboard_count = len(dashboard_meta)
superset_log.info(f"Filtered to {dashboard_count} selected dashboards: {dashboard_ids_int}")
else:
dashboard_count = all_dashboard_count
superset_log.info("No dashboard filter applied - backing up all dashboards")
dashboard_meta = all_dashboard_meta
if dashboard_count == 0:
app_logger.info("[BackupPlugin][Exit] No dashboards to back up.")
log.info("No dashboards to back up")
return
for db in dashboard_meta:
total = len(dashboard_meta)
for idx, db in enumerate(dashboard_meta, 1):
dashboard_id = db.get('id')
dashboard_title = db.get('dashboard_title', 'Unknown Dashboard')
if not dashboard_id:
continue
# Report progress
progress_pct = (idx / total) * 100
log.progress(f"Backing up dashboard: {dashboard_title}", percent=progress_pct)
try:
dashboard_base_dir_name = sanitize_filename(f"{dashboard_title}")
dashboard_dir = backup_path / env.upper() / dashboard_base_dir_name
dashboard_dir.mkdir(parents=True, exist_ok=True)
zip_content, filename = client.export_dashboard(dashboard_id)
superset_log.debug(f"Exported dashboard: {dashboard_title}")
save_and_unpack_dashboard(
zip_content=zip_content,
@@ -175,18 +211,19 @@ class BackupPlugin(PluginBase):
)
archive_exports(str(dashboard_dir), policy=RetentionPolicy())
storage_log.debug(f"Archived dashboard: {dashboard_title}")
except (SupersetAPIError, RequestException, IOError, OSError) as db_error:
app_logger.error(f"[BackupPlugin][Failure] Failed to export dashboard {dashboard_title} (ID: {dashboard_id}): {db_error}", exc_info=True)
log.error(f"Failed to export dashboard {dashboard_title} (ID: {dashboard_id}): {db_error}")
continue
consolidate_archive_folders(backup_path / env.upper())
remove_empty_directories(str(backup_path / env.upper()))
app_logger.info(f"[BackupPlugin][CoherenceCheck:Passed] Backup logic completed for {env}.")
log.info(f"Backup completed successfully for {env}")
except (RequestException, IOError, KeyError) as e:
app_logger.critical(f"[BackupPlugin][Failure] Fatal error during backup for {env}: {e}", exc_info=True)
log.error(f"Fatal error during backup for {env}: {e}")
raise e
# [/DEF:execute:Function]
# [/DEF:BackupPlugin:Class]
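Since execute() now accepts either naming convention, a hedged example of the parameters a backup task might be created with (environment name and dashboard IDs are illustrative):

```python
params = {
    "env": "PROD",               # environment name; "environment_id" is also accepted
    "dashboard_ids": [12, 34],   # omit (or pass "dashboards") to back up every dashboard
}
# Invocation sketch only; plugin and context construction is handled by the TaskManager:
# result = await backup_plugin.execute(params, context=task_context)
```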

View File

@@ -3,6 +3,7 @@
# @PURPOSE: Implements a plugin for system diagnostics and debugging Superset API responses.
# @LAYER: Plugins
# @RELATION: Inherits from PluginBase. Uses SupersetClient from core.
# @RELATION: USES -> TaskContext
# @CONSTRAINT: Must use belief_scope for logging.
# [SECTION: IMPORTS]
@@ -10,6 +11,7 @@ from typing import Dict, Any, Optional
from ..core.plugin_base import PluginBase
from ..core.superset_client import SupersetClient
from ..core.logger import logger, belief_scope
from ..core.task_manager.context import TaskContext
# [/SECTION]
# [DEF:DebugPlugin:Class]
@@ -114,20 +116,29 @@ class DebugPlugin(PluginBase):
# [/DEF:get_schema:Function]
# [DEF:execute:Function]
# @PURPOSE: Executes the debug logic.
# @PURPOSE: Executes the debug logic with TaskContext support.
# @PARAM: params (Dict[str, Any]) - Debug parameters.
# @PARAM: context (Optional[TaskContext]) - Task context for logging with source attribution.
# @PRE: action must be provided in params.
# @POST: Debug action is executed and results returned.
# @RETURN: Dict[str, Any] - Execution results.
async def execute(self, params: Dict[str, Any]) -> Dict[str, Any]:
async def execute(self, params: Dict[str, Any], context: Optional[TaskContext] = None) -> Dict[str, Any]:
with belief_scope("execute"):
action = params.get("action")
# Use TaskContext logger if available, otherwise fall back to app logger
log = context.logger if context else logger
debug_log = log.with_source("debug") if context else log
superset_log = log.with_source("superset_api") if context else log
debug_log.info(f"Executing debug action: {action}")
if action == "test-db-api":
return await self._test_db_api(params)
return await self._test_db_api(params, superset_log)
elif action == "get-dataset-structure":
return await self._get_dataset_structure(params)
return await self._get_dataset_structure(params, superset_log)
else:
debug_log.error(f"Unknown action: {action}")
raise ValueError(f"Unknown action: {action}")
# [/DEF:execute:Function]
@@ -136,33 +147,37 @@ class DebugPlugin(PluginBase):
# @PRE: source_env and target_env params exist in params.
# @POST: Returns DB counts for both envs.
# @PARAM: params (Dict) - Plugin parameters.
# @PARAM: log - Logger instance for superset_api source.
# @RETURN: Dict - Comparison results.
async def _test_db_api(self, params: Dict[str, Any]) -> Dict[str, Any]:
async def _test_db_api(self, params: Dict[str, Any], log) -> Dict[str, Any]:
with belief_scope("_test_db_api"):
source_env_name = params.get("source_env")
target_env_name = params.get("target_env")
target_env_name = params.get("target_env")
if not source_env_name or not target_env_name:
raise ValueError("source_env and target_env are required for test-db-api")
if not source_env_name or not target_env_name:
raise ValueError("source_env and target_env are required for test-db-api")
from ..dependencies import get_config_manager
config_manager = get_config_manager()
from ..dependencies import get_config_manager
config_manager = get_config_manager()
results = {}
for name in [source_env_name, target_env_name]:
env_config = config_manager.get_environment(name)
if not env_config:
raise ValueError(f"Environment '{name}' not found.")
results = {}
for name in [source_env_name, target_env_name]:
log.info(f"Testing database API for environment: {name}")
env_config = config_manager.get_environment(name)
if not env_config:
log.error(f"Environment '{name}' not found.")
raise ValueError(f"Environment '{name}' not found.")
client = SupersetClient(env_config)
client.authenticate()
count, dbs = client.get_databases()
results[name] = {
"count": count,
"databases": dbs
}
client = SupersetClient(env_config)
client.authenticate()
count, dbs = client.get_databases()
log.debug(f"Found {count} databases in {name}")
results[name] = {
"count": count,
"databases": dbs
}
return results
return results
# [/DEF:_test_db_api:Function]
# [DEF:_get_dataset_structure:Function]
@@ -170,26 +185,31 @@ class DebugPlugin(PluginBase):
# @PRE: env and dataset_id params exist in params.
# @POST: Returns dataset JSON structure.
# @PARAM: params (Dict) - Plugin parameters.
# @PARAM: log - Logger instance for superset_api source.
# @RETURN: Dict - Dataset structure.
async def _get_dataset_structure(self, params: Dict[str, Any]) -> Dict[str, Any]:
async def _get_dataset_structure(self, params: Dict[str, Any], log) -> Dict[str, Any]:
with belief_scope("_get_dataset_structure"):
env_name = params.get("env")
dataset_id = params.get("dataset_id")
dataset_id = params.get("dataset_id")
if not env_name or dataset_id is None:
raise ValueError("env and dataset_id are required for get-dataset-structure")
if not env_name or dataset_id is None:
raise ValueError("env and dataset_id are required for get-dataset-structure")
from ..dependencies import get_config_manager
config_manager = get_config_manager()
env_config = config_manager.get_environment(env_name)
if not env_config:
raise ValueError(f"Environment '{env_name}' not found.")
log.info(f"Fetching structure for dataset {dataset_id} in {env_name}")
client = SupersetClient(env_config)
client.authenticate()
from ..dependencies import get_config_manager
config_manager = get_config_manager()
env_config = config_manager.get_environment(env_name)
if not env_config:
log.error(f"Environment '{env_name}' not found.")
raise ValueError(f"Environment '{env_name}' not found.")
client = SupersetClient(env_config)
client.authenticate()
dataset_response = client.get_dataset(dataset_id)
return dataset_response.get('result') or {}
dataset_response = client.get_dataset(dataset_id)
log.debug(f"Retrieved dataset structure for {dataset_id}")
return dataset_response.get('result') or {}
# [/DEF:_get_dataset_structure:Function]
# [/DEF:DebugPlugin:Class]
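For reference, a hedged sketch of the two parameter shapes the debug plugin routes on (environment names and the dataset ID are illustrative):

```python
# action "test-db-api": compares database counts between two environments.
test_db_params = {
    "action": "test-db-api",
    "source_env": "DEV",
    "target_env": "PROD",
}

# action "get-dataset-structure": returns a single dataset's JSON structure.
dataset_params = {
    "action": "get-dataset-structure",
    "env": "DEV",
    "dataset_id": 42,
}
```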

View File

@@ -5,10 +5,9 @@
# @LAYER: Domain
# @RELATION: DEPENDS_ON -> backend.src.plugins.llm_analysis.service.LLMClient
from typing import List, Optional
from typing import List
from tenacity import retry, stop_after_attempt, wait_exponential
from ..llm_analysis.service import LLMClient
from ..llm_analysis.models import LLMProviderType
from ...core.logger import belief_scope, logger
# [DEF:GitLLMExtension:Class]
@@ -61,6 +60,7 @@ class GitLLMExtension:
return "Update dashboard configurations (LLM generation failed)"
return response.choices[0].message.content.strip()
# [/DEF:suggest_commit_message:Function]
# [/DEF:GitLLMExtension:Class]
# [/DEF:backend/src/plugins/git/llm_extension:Module]

View File

@@ -7,6 +7,7 @@
# @RELATION: USES -> src.services.git_service.GitService
# @RELATION: USES -> src.core.superset_client.SupersetClient
# @RELATION: USES -> src.core.config_manager.ConfigManager
# @RELATION: USES -> TaskContext
#
# @INVARIANT: All Git operations must go through GitService.
# @CONSTRAINT: The plugin works only with unpacked Superset YAML exports.
@@ -20,9 +21,10 @@ from pathlib import Path
from typing import Dict, Any, Optional
from src.core.plugin_base import PluginBase
from src.services.git_service import GitService
from src.core.logger import logger, belief_scope
from src.core.logger import logger as app_logger, belief_scope
from src.core.config_manager import ConfigManager
from src.core.superset_client import SupersetClient
from src.core.task_manager.context import TaskContext
# [/SECTION]
# [DEF:GitPlugin:Class]
@@ -35,7 +37,7 @@ class GitPlugin(PluginBase):
    # @POST: git_service and config_manager are initialized.
def __init__(self):
with belief_scope("GitPlugin.__init__"):
logger.info("[GitPlugin.__init__][Entry] Initializing GitPlugin.")
app_logger.info("Initializing GitPlugin.")
self.git_service = GitService()
# Robust config path resolution:
@@ -50,13 +52,13 @@ class GitPlugin(PluginBase):
try:
from src.dependencies import config_manager
self.config_manager = config_manager
logger.info("[GitPlugin.__init__][Exit] GitPlugin initialized using shared config_manager.")
app_logger.info("GitPlugin initialized using shared config_manager.")
return
except:
except Exception:
config_path = "config.json"
self.config_manager = ConfigManager(config_path)
logger.info(f"[GitPlugin.__init__][Exit] GitPlugin initialized with {config_path}")
app_logger.info(f"GitPlugin initialized with {config_path}")
# [/DEF:__init__:Function]
@property
@@ -133,36 +135,44 @@ class GitPlugin(PluginBase):
    # @POST: The plugin is ready to execute tasks.
async def initialize(self):
with belief_scope("GitPlugin.initialize"):
logger.info("[GitPlugin.initialize][Action] Initializing Git Integration Plugin logic.")
app_logger.info("[GitPlugin.initialize][Action] Initializing Git Integration Plugin logic.")
# [DEF:execute:Function]
    # @PURPOSE: Main entry point for executing plugin tasks.
    # @PURPOSE: Main entry point for executing plugin tasks, with TaskContext support.
    # @PRE: task_data contains 'operation' and 'dashboard_id'.
    # @POST: Returns the result of the operation.
    # @PARAM: task_data (Dict[str, Any]) - Task data.
    # @PARAM: context (Optional[TaskContext]) - Task context for logging with source attribution.
    # @RETURN: Dict[str, Any] - Status and message.
# @RELATION: CALLS -> self._handle_sync
# @RELATION: CALLS -> self._handle_deploy
async def execute(self, task_data: Dict[str, Any]) -> Dict[str, Any]:
async def execute(self, task_data: Dict[str, Any], context: Optional[TaskContext] = None) -> Dict[str, Any]:
with belief_scope("GitPlugin.execute"):
operation = task_data.get("operation")
dashboard_id = task_data.get("dashboard_id")
logger.info(f"[GitPlugin.execute][Entry] Executing operation: {operation} for dashboard {dashboard_id}")
# Use TaskContext logger if available, otherwise fall back to app_logger
log = context.logger if context else app_logger
# Create sub-loggers for different components
git_log = log.with_source("git") if context else log
superset_log = log.with_source("superset_api") if context else log
log.info(f"Executing operation: {operation} for dashboard {dashboard_id}")
if operation == "sync":
source_env_id = task_data.get("source_env_id")
result = await self._handle_sync(dashboard_id, source_env_id)
result = await self._handle_sync(dashboard_id, source_env_id, log, git_log, superset_log)
elif operation == "deploy":
env_id = task_data.get("environment_id")
result = await self._handle_deploy(dashboard_id, env_id)
result = await self._handle_deploy(dashboard_id, env_id, log, git_log, superset_log)
elif operation == "history":
result = {"status": "success", "message": "History available via API"}
else:
logger.error(f"[GitPlugin.execute][Coherence:Failed] Unknown operation: {operation}")
log.error(f"Unknown operation: {operation}")
raise ValueError(f"Unknown operation: {operation}")
logger.info(f"[GitPlugin.execute][Exit] Operation {operation} completed.")
log.info(f"Operation {operation} completed.")
return result
# [/DEF:execute:Function]
@@ -176,13 +186,13 @@ class GitPlugin(PluginBase):
    # @SIDE_EFFECT: Modifies files in the repository's local working directory.
# @RELATION: CALLS -> src.services.git_service.GitService.get_repo
# @RELATION: CALLS -> src.core.superset_client.SupersetClient.export_dashboard
async def _handle_sync(self, dashboard_id: int, source_env_id: Optional[str] = None) -> Dict[str, str]:
async def _handle_sync(self, dashboard_id: int, source_env_id: Optional[str] = None, log=None, git_log=None, superset_log=None) -> Dict[str, str]:
with belief_scope("GitPlugin._handle_sync"):
try:
                # 1. Fetch the repository
repo = self.git_service.get_repo(dashboard_id)
repo_path = Path(repo.working_dir)
logger.info(f"[_handle_sync][Action] Target repo path: {repo_path}")
git_log.info(f"Target repo path: {repo_path}")
                # 2. Configure the Superset client
env = self._get_env(source_env_id)
@@ -190,11 +200,11 @@ class GitPlugin(PluginBase):
client.authenticate()
                # 3. Export the dashboard
logger.info(f"[_handle_sync][Action] Exporting dashboard {dashboard_id} from {env.name}")
superset_log.info(f"Exporting dashboard {dashboard_id} from {env.name}")
zip_bytes, _ = client.export_dashboard(dashboard_id)
                # 4. Unpack with structure flattening
logger.info(f"[_handle_sync][Action] Unpacking export to {repo_path}")
git_log.info(f"Unpacking export to {repo_path}")
                # List of folders/files we expect from Superset
managed_dirs = ["dashboards", "charts", "datasets", "databases"]
@@ -218,7 +228,7 @@ class GitPlugin(PluginBase):
raise ValueError("Export ZIP is empty")
root_folder = namelist[0].split('/')[0]
logger.info(f"[_handle_sync][Action] Detected root folder in ZIP: {root_folder}")
git_log.info(f"Detected root folder in ZIP: {root_folder}")
for member in zf.infolist():
if member.filename.startswith(root_folder + "/") and len(member.filename) > len(root_folder) + 1:
@@ -236,15 +246,15 @@ class GitPlugin(PluginBase):
                # 5. Automatically stage the changes (no commit, so the user can review the diff)
try:
repo.git.add(A=True)
logger.info(f"[_handle_sync][Action] Changes staged in git")
app_logger.info("[_handle_sync][Action] Changes staged in git")
except Exception as ge:
logger.warning(f"[_handle_sync][Action] Failed to stage changes: {ge}")
app_logger.warning(f"[_handle_sync][Action] Failed to stage changes: {ge}")
logger.info(f"[_handle_sync][Coherence:OK] Dashboard {dashboard_id} synced successfully.")
app_logger.info(f"[_handle_sync][Coherence:OK] Dashboard {dashboard_id} synced successfully.")
return {"status": "success", "message": "Dashboard synced and flattened in local repository"}
except Exception as e:
logger.error(f"[_handle_sync][Coherence:Failed] Sync failed: {e}")
app_logger.error(f"[_handle_sync][Coherence:Failed] Sync failed: {e}")
raise
# [/DEF:_handle_sync:Function]
@@ -254,10 +264,13 @@ class GitPlugin(PluginBase):
    # @POST: The dashboard has been imported into the target Superset.
    # @PARAM: dashboard_id (int) - Dashboard ID.
    # @PARAM: env_id (str) - Target environment ID.
# @PARAM: log - Main logger instance.
# @PARAM: git_log - Git-specific logger instance.
# @PARAM: superset_log - Superset API-specific logger instance.
    # @RETURN: Dict[str, Any] - Deployment result.
    # @SIDE_EFFECT: Creates and deletes a temporary ZIP file.
# @RELATION: CALLS -> src.core.superset_client.SupersetClient.import_dashboard
async def _handle_deploy(self, dashboard_id: int, env_id: str) -> Dict[str, Any]:
async def _handle_deploy(self, dashboard_id: int, env_id: str, log=None, git_log=None, superset_log=None) -> Dict[str, Any]:
with belief_scope("GitPlugin._handle_deploy"):
try:
if not env_id:
@@ -268,7 +281,7 @@ class GitPlugin(PluginBase):
repo_path = Path(repo.working_dir)
                # 2. Pack into a ZIP
logger.info(f"[_handle_deploy][Action] Packing repository {repo_path} for deployment.")
git_log.info(f"Packing repository {repo_path} for deployment.")
zip_buffer = io.BytesIO()
# Superset expects a root directory in the ZIP (e.g., dashboard_export_20240101T000000/)
@@ -279,7 +292,8 @@ class GitPlugin(PluginBase):
if ".git" in dirs:
dirs.remove(".git")
for file in files:
if file == ".git" or file.endswith(".zip"): continue
if file == ".git" or file.endswith(".zip"):
continue
file_path = Path(root) / file
# Prepend the root directory name to the archive path
arcname = Path(root_dir_name) / file_path.relative_to(repo_path)
@@ -297,21 +311,21 @@ class GitPlugin(PluginBase):
                # 4. Import
temp_zip_path = repo_path / f"deploy_{dashboard_id}.zip"
logger.info(f"[_handle_deploy][Action] Saving temporary zip to {temp_zip_path}")
git_log.info(f"Saving temporary zip to {temp_zip_path}")
with open(temp_zip_path, "wb") as f:
f.write(zip_buffer.getvalue())
try:
logger.info(f"[_handle_deploy][Action] Importing dashboard to {env.name}")
app_logger.info(f"[_handle_deploy][Action] Importing dashboard to {env.name}")
result = client.import_dashboard(temp_zip_path)
logger.info(f"[_handle_deploy][Coherence:OK] Deployment successful for dashboard {dashboard_id}.")
app_logger.info(f"[_handle_deploy][Coherence:OK] Deployment successful for dashboard {dashboard_id}.")
return {"status": "success", "message": f"Dashboard deployed to {env.name}", "details": result}
finally:
if temp_zip_path.exists():
os.remove(temp_zip_path)
except Exception as e:
logger.error(f"[_handle_deploy][Coherence:Failed] Deployment failed: {e}")
app_logger.error(f"[_handle_deploy][Coherence:Failed] Deployment failed: {e}")
raise
# [/DEF:_handle_deploy:Function]
@@ -323,13 +337,13 @@ class GitPlugin(PluginBase):
    # @RETURN: Environment - The environment configuration object.
def _get_env(self, env_id: Optional[str] = None):
with belief_scope("GitPlugin._get_env"):
logger.info(f"[_get_env][Entry] Fetching environment for ID: {env_id}")
app_logger.info(f"[_get_env][Entry] Fetching environment for ID: {env_id}")
# Priority 1: ConfigManager (config.json)
if env_id:
env = self.config_manager.get_environment(env_id)
if env:
logger.info(f"[_get_env][Exit] Found environment by ID in ConfigManager: {env.name}")
app_logger.info(f"[_get_env][Exit] Found environment by ID in ConfigManager: {env.name}")
return env
# Priority 2: Database (DeploymentEnvironment)
@@ -342,12 +356,12 @@ class GitPlugin(PluginBase):
db_env = db.query(DeploymentEnvironment).filter(DeploymentEnvironment.id == env_id).first()
else:
# If no ID, try to find active or any environment in DB
db_env = db.query(DeploymentEnvironment).filter(DeploymentEnvironment.is_active == True).first()
db_env = db.query(DeploymentEnvironment).filter(DeploymentEnvironment.is_active).first()
if not db_env:
db_env = db.query(DeploymentEnvironment).first()
if db_env:
logger.info(f"[_get_env][Exit] Found environment in DB: {db_env.name}")
app_logger.info(f"[_get_env][Exit] Found environment in DB: {db_env.name}")
from src.core.config_models import Environment
# Use token as password for SupersetClient
return Environment(
@@ -369,14 +383,14 @@ class GitPlugin(PluginBase):
# but we have other envs, maybe it's one of them?
env = next((e for e in envs if e.id == env_id), None)
if env:
logger.info(f"[_get_env][Exit] Found environment {env_id} in ConfigManager list")
app_logger.info(f"[_get_env][Exit] Found environment {env_id} in ConfigManager list")
return env
if not env_id:
logger.info(f"[_get_env][Exit] Using first environment from ConfigManager: {envs[0].name}")
app_logger.info(f"[_get_env][Exit] Using first environment from ConfigManager: {envs[0].name}")
return envs[0]
logger.error(f"[_get_env][Coherence:Failed] No environments configured (searched config.json and DB). env_id={env_id}")
app_logger.error(f"[_get_env][Coherence:Failed] No environments configured (searched config.json and DB). env_id={env_id}")
raise ValueError("No environments configured. Please add a Superset Environment in Settings.")
# [/DEF:_get_env:Function]

View File

@@ -9,4 +9,6 @@ LLM Analysis Plugin for automated dashboard validation and dataset documentation
from .plugin import DashboardValidationPlugin, DocumentationPlugin
__all__ = ['DashboardValidationPlugin', 'DocumentationPlugin']
# [/DEF:backend/src/plugins/llm_analysis/__init__.py:Module]

View File

@@ -7,22 +7,22 @@
# @RELATION: CALLS -> backend.src.plugins.llm_analysis.service.ScreenshotService
# @RELATION: CALLS -> backend.src.plugins.llm_analysis.service.LLMClient
# @RELATION: CALLS -> backend.src.services.llm_provider.LLMProviderService
# @RELATION: USES -> TaskContext
# @INVARIANT: All LLM interactions must be executed as asynchronous tasks.
from typing import Dict, Any, Optional, List
from typing import Dict, Any, Optional
import os
import json
import logging
from datetime import datetime, timedelta
from ...core.plugin_base import PluginBase
from ...core.logger import belief_scope, logger
from ...core.database import SessionLocal
from ...core.config_manager import ConfigManager
from ...services.llm_provider import LLMProviderService
from ...core.superset_client import SupersetClient
from .service import ScreenshotService, LLMClient
from .models import LLMProviderType, ValidationStatus, ValidationResult, DetectedIssue
from ...models.llm import ValidationRecord
from ...core.task_manager.context import TaskContext
# [DEF:DashboardValidationPlugin:Class]
# @PURPOSE: Plugin for automated dashboard health analysis using LLMs.
@@ -56,28 +56,27 @@ class DashboardValidationPlugin(PluginBase):
}
# [DEF:DashboardValidationPlugin.execute:Function]
# @PURPOSE: Executes the dashboard validation task.
# @PURPOSE: Executes the dashboard validation task with TaskContext support.
# @PARAM: params (Dict[str, Any]) - Validation parameters.
# @PARAM: context (Optional[TaskContext]) - Task context for logging with source attribution.
# @PRE: params contains dashboard_id, environment_id, and provider_id.
# @POST: Returns a dictionary with validation results and persists them to the database.
# @SIDE_EFFECT: Captures a screenshot, calls LLM API, and writes to the database.
async def execute(self, params: Dict[str, Any]):
async def execute(self, params: Dict[str, Any], context: Optional[TaskContext] = None):
with belief_scope("execute", f"plugin_id={self.id}"):
logger.info(f"Executing {self.name} with params: {params}")
# Use TaskContext logger if available, otherwise fall back to app logger
log = context.logger if context else logger
# Create sub-loggers for different components
llm_log = log.with_source("llm") if context else log
screenshot_log = log.with_source("screenshot") if context else log
superset_log = log.with_source("superset_api") if context else log
log.info(f"Executing {self.name} with params: {params}")
dashboard_id = params.get("dashboard_id")
env_id = params.get("environment_id")
provider_id = params.get("provider_id")
task_id = params.get("_task_id")
# Helper to log to both app logger and task manager logs
def task_log(level: str, message: str, context: Optional[Dict] = None):
logger.log(getattr(logging, level.upper()), message)
if task_id:
from ...dependencies import get_task_manager
try:
tm = get_task_manager()
tm._add_log(task_id, level.upper(), message, context)
except: pass
db = SessionLocal()
try:
@@ -86,25 +85,26 @@ class DashboardValidationPlugin(PluginBase):
config_mgr = get_config_manager()
env = config_mgr.get_environment(env_id)
if not env:
log.error(f"Environment {env_id} not found")
raise ValueError(f"Environment {env_id} not found")
# 2. Get LLM Provider
llm_service = LLMProviderService(db)
db_provider = llm_service.get_provider(provider_id)
if not db_provider:
log.error(f"LLM Provider {provider_id} not found")
raise ValueError(f"LLM Provider {provider_id} not found")
logger.info(f"[DashboardValidationPlugin.execute] Retrieved provider config:")
logger.info(f"[DashboardValidationPlugin.execute] Provider ID: {db_provider.id}")
logger.info(f"[DashboardValidationPlugin.execute] Provider Name: {db_provider.name}")
logger.info(f"[DashboardValidationPlugin.execute] Provider Type: {db_provider.provider_type}")
logger.info(f"[DashboardValidationPlugin.execute] Base URL: {db_provider.base_url}")
logger.info(f"[DashboardValidationPlugin.execute] Default Model: {db_provider.default_model}")
logger.info(f"[DashboardValidationPlugin.execute] Is Active: {db_provider.is_active}")
llm_log.debug("Retrieved provider config:")
llm_log.debug(f" Provider ID: {db_provider.id}")
llm_log.debug(f" Provider Name: {db_provider.name}")
llm_log.debug(f" Provider Type: {db_provider.provider_type}")
llm_log.debug(f" Base URL: {db_provider.base_url}")
llm_log.debug(f" Default Model: {db_provider.default_model}")
llm_log.debug(f" Is Active: {db_provider.is_active}")
api_key = llm_service.get_decrypted_api_key(provider_id)
logger.info(f"[DashboardValidationPlugin.execute] API Key decrypted (first 8 chars): {api_key[:8] if api_key and len(api_key) > 8 else 'EMPTY_OR_NONE'}...")
logger.info(f"[DashboardValidationPlugin.execute] API Key Length: {len(api_key) if api_key else 0}")
llm_log.debug(f"API Key decrypted (first 8 chars): {api_key[:8] if api_key and len(api_key) > 8 else 'EMPTY_OR_NONE'}...")
# Check if API key was successfully decrypted
if not api_key:
@@ -124,7 +124,9 @@ class DashboardValidationPlugin(PluginBase):
filename = f"{dashboard_id}_{datetime.now().strftime('%Y%m%d_%H%M%S')}.png"
screenshot_path = os.path.join(screenshots_dir, filename)
screenshot_log.info(f"Capturing screenshot for dashboard {dashboard_id}")
await screenshot_service.capture_dashboard(dashboard_id, screenshot_path)
screenshot_log.debug(f"Screenshot saved to: {screenshot_path}")
# 4. Fetch Logs (from Environment /api/v1/log/)
logs = []
@@ -147,6 +149,7 @@ class DashboardValidationPlugin(PluginBase):
"page_size": 100
}
superset_log.debug(f"Fetching logs for dashboard {dashboard_id}")
response = client.network.request(
method="GET",
endpoint="/log/",
@@ -162,9 +165,10 @@ class DashboardValidationPlugin(PluginBase):
if not logs:
logs = ["No recent logs found for this dashboard."]
superset_log.debug("No recent logs found for this dashboard")
except Exception as e:
logger.warning(f"Failed to fetch logs from environment: {e}")
superset_log.warning(f"Failed to fetch logs from environment: {e}")
logs = [f"Error fetching remote logs: {str(e)}"]
# 5. Analyze with LLM
@@ -175,14 +179,15 @@ class DashboardValidationPlugin(PluginBase):
default_model=db_provider.default_model
)
llm_log.info(f"Analyzing dashboard {dashboard_id} with LLM")
analysis = await llm_client.analyze_dashboard(screenshot_path, logs)
# Log analysis summary to task logs for better visibility
task_log("INFO", f"[ANALYSIS_SUMMARY] Status: {analysis['status']}")
task_log("INFO", f"[ANALYSIS_SUMMARY] Summary: {analysis['summary']}")
llm_log.info(f"[ANALYSIS_SUMMARY] Status: {analysis['status']}")
llm_log.info(f"[ANALYSIS_SUMMARY] Summary: {analysis['summary']}")
if analysis.get("issues"):
for i, issue in enumerate(analysis["issues"]):
task_log("INFO", f"[ANALYSIS_ISSUE][{i+1}] {issue.get('severity')}: {issue.get('message')} (Location: {issue.get('location', 'N/A')})")
llm_log.info(f"[ANALYSIS_ISSUE][{i+1}] {issue.get('severity')}: {issue.get('message')} (Location: {issue.get('location', 'N/A')})")
# 6. Persist Result
validation_result = ValidationResult(
@@ -207,13 +212,13 @@ class DashboardValidationPlugin(PluginBase):
# 7. Notification on failure (US1 / FR-015)
if validation_result.status == ValidationStatus.FAIL:
task_log("WARNING", f"Dashboard {dashboard_id} validation FAILED. Summary: {validation_result.summary}")
log.warning(f"Dashboard {dashboard_id} validation FAILED. Summary: {validation_result.summary}")
# Placeholder for Email/Pulse notification dispatch
# In a real implementation, we would call a NotificationService here
# with a payload containing the summary and a link to the report.
# Final log to ensure all analysis is visible in task logs
task_log("INFO", f"Validation completed for dashboard {dashboard_id}. Status: {validation_result.status.value}")
log.info(f"Validation completed for dashboard {dashboard_id}. Status: {validation_result.status.value}")
return validation_result.dict()
@@ -254,13 +259,22 @@ class DocumentationPlugin(PluginBase):
}
# [DEF:DocumentationPlugin.execute:Function]
# @PURPOSE: Executes the dataset documentation task.
# @PURPOSE: Executes the dataset documentation task with TaskContext support.
# @PARAM: params (Dict[str, Any]) - Documentation parameters.
# @PARAM: context (Optional[TaskContext]) - Task context for logging with source attribution.
# @PRE: params contains dataset_id, environment_id, and provider_id.
# @POST: Returns generated documentation and updates the dataset in Superset.
# @SIDE_EFFECT: Calls LLM API and updates dataset metadata in Superset.
async def execute(self, params: Dict[str, Any]):
async def execute(self, params: Dict[str, Any], context: Optional[TaskContext] = None):
with belief_scope("execute", f"plugin_id={self.id}"):
logger.info(f"Executing {self.name} with params: {params}")
# Use TaskContext logger if available, otherwise fall back to app logger
log = context.logger if context else logger
# Create sub-loggers for different components
llm_log = log.with_source("llm") if context else log
superset_log = log.with_source("superset_api") if context else log
log.info(f"Executing {self.name} with params: {params}")
dataset_id = params.get("dataset_id")
env_id = params.get("environment_id")
@@ -273,25 +287,25 @@ class DocumentationPlugin(PluginBase):
config_mgr = get_config_manager()
env = config_mgr.get_environment(env_id)
if not env:
log.error(f"Environment {env_id} not found")
raise ValueError(f"Environment {env_id} not found")
# 2. Get LLM Provider
llm_service = LLMProviderService(db)
db_provider = llm_service.get_provider(provider_id)
if not db_provider:
log.error(f"LLM Provider {provider_id} not found")
raise ValueError(f"LLM Provider {provider_id} not found")
logger.info(f"[DocumentationPlugin.execute] Retrieved provider config:")
logger.info(f"[DocumentationPlugin.execute] Provider ID: {db_provider.id}")
logger.info(f"[DocumentationPlugin.execute] Provider Name: {db_provider.name}")
logger.info(f"[DocumentationPlugin.execute] Provider Type: {db_provider.provider_type}")
logger.info(f"[DocumentationPlugin.execute] Base URL: {db_provider.base_url}")
logger.info(f"[DocumentationPlugin.execute] Default Model: {db_provider.default_model}")
logger.info(f"[DocumentationPlugin.execute] Is Active: {db_provider.is_active}")
llm_log.debug("Retrieved provider config:")
llm_log.debug(f" Provider ID: {db_provider.id}")
llm_log.debug(f" Provider Name: {db_provider.name}")
llm_log.debug(f" Provider Type: {db_provider.provider_type}")
llm_log.debug(f" Base URL: {db_provider.base_url}")
llm_log.debug(f" Default Model: {db_provider.default_model}")
api_key = llm_service.get_decrypted_api_key(provider_id)
logger.info(f"[DocumentationPlugin.execute] API Key decrypted (first 8 chars): {api_key[:8] if api_key and len(api_key) > 8 else 'EMPTY_OR_NONE'}...")
logger.info(f"[DocumentationPlugin.execute] API Key Length: {len(api_key) if api_key else 0}")
llm_log.debug(f"API Key decrypted (first 8 chars): {api_key[:8] if api_key and len(api_key) > 8 else 'EMPTY_OR_NONE'}...")
# Check if API key was successfully decrypted
if not api_key:
@@ -305,10 +319,8 @@ class DocumentationPlugin(PluginBase):
from ...core.superset_client import SupersetClient
client = SupersetClient(env)
# Optimistic locking check (T045)
superset_log.debug(f"Fetching dataset {dataset_id}")
dataset = client.get_dataset(int(dataset_id))
# dataset structure might vary, ensure we get the right field
original_changed_on = dataset.get("changed_on_utc") or dataset.get("result", {}).get("changed_on_utc")
# Extract columns and existing descriptions
columns_data = []
@@ -318,6 +330,7 @@ class DocumentationPlugin(PluginBase):
"type": col.get("type"),
"description": col.get("description")
})
superset_log.debug(f"Extracted {len(columns_data)} columns from dataset")
# 4. Construct Prompt & Analyze (US2 / T025)
llm_client = LLMClient(
@@ -345,12 +358,10 @@ class DocumentationPlugin(PluginBase):
"""
# Using a generic chat completion for text-only US2
# We use the shared get_json_completion method from LLMClient
llm_log.info(f"Generating documentation for dataset {dataset_id}")
doc_result = await llm_client.get_json_completion([{"role": "user", "content": prompt}])
# 5. Update Metadata (US2 / T026)
# This part normally goes to mapping_service, but we implement the logic here for the plugin flow
# We'll update the dataset in Superset
update_payload = {
"description": doc_result["dataset_description"],
"columns": []
@@ -365,8 +376,11 @@ class DocumentationPlugin(PluginBase):
"description": col_doc["description"]
})
superset_log.info(f"Updating dataset {dataset_id} with generated documentation")
client.update_dataset(int(dataset_id), update_payload)
log.info(f"Documentation completed for dataset {dataset_id}")
return doc_result
finally:

View File

@@ -39,6 +39,7 @@ def schedule_dashboard_validation(dashboard_id: str, cron_expression: str, param
**_parse_cron(cron_expression)
)
logger.info(f"Scheduled validation for dashboard {dashboard_id} with cron {cron_expression}")
# [/DEF:schedule_dashboard_validation:Function]
# [DEF:_parse_cron:Function]
# @PURPOSE: Basic cron parser placeholder.
@@ -56,5 +57,6 @@ def _parse_cron(cron: str) -> Dict[str, str]:
"month": parts[3],
"day_of_week": parts[4]
}
# [/DEF:_parse_cron:Function]
# [/DEF:backend/src/plugins/llm_analysis/scheduler.py:Module]

View File

@@ -12,12 +12,12 @@ import asyncio
import base64
import json
import io
from typing import List, Optional, Dict, Any
from typing import List, Dict, Any
from PIL import Image
from playwright.async_api import async_playwright
from openai import AsyncOpenAI, RateLimitError, AuthenticationError as OpenAIAuthenticationError
from tenacity import retry, stop_after_attempt, wait_exponential, retry_if_exception
from .models import LLMProviderType, ValidationResult, ValidationStatus, DetectedIssue
from .models import LLMProviderType
from ...core.logger import belief_scope, logger
from ...core.config_models import Environment
@@ -96,7 +96,7 @@ class ScreenshotService:
"password": ['input[name="password"]', 'input#password', 'input[placeholder*="Password"]', 'input[type="password"]'],
"submit": ['button[type="submit"]', 'button#submit', '.btn-primary', 'input[type="submit"]']
}
logger.info(f"[DEBUG] Attempting to find login form elements...")
logger.info("[DEBUG] Attempting to find login form elements...")
try:
# Find and fill username
@@ -190,27 +190,27 @@ class ScreenshotService:
try:
# Wait for the dashboard grid to be present
await page.wait_for_selector('.dashboard-component, .dashboard-header, [data-test="dashboard-grid"]', timeout=30000)
logger.info(f"[DEBUG] Dashboard container loaded")
logger.info("[DEBUG] Dashboard container loaded")
# Wait for charts to finish loading (Superset uses loading spinners/skeletons)
# We wait until loading indicators disappear or a timeout occurs
try:
# Wait for loading indicators to disappear
await page.wait_for_selector('.loading, .ant-skeleton, .spinner', state="hidden", timeout=60000)
logger.info(f"[DEBUG] Loading indicators hidden")
except:
logger.warning(f"[DEBUG] Timeout waiting for loading indicators to hide")
logger.info("[DEBUG] Loading indicators hidden")
except Exception:
logger.warning("[DEBUG] Timeout waiting for loading indicators to hide")
# Wait for charts to actually render their content (e.g., ECharts, NVD3)
# We look for common chart containers that should have content
try:
await page.wait_for_selector('.chart-container canvas, .slice_container svg, .superset-chart-canvas, .grid-content .chart-container', timeout=60000)
logger.info(f"[DEBUG] Chart content detected")
except:
logger.warning(f"[DEBUG] Timeout waiting for chart content")
logger.info("[DEBUG] Chart content detected")
except Exception:
logger.warning("[DEBUG] Timeout waiting for chart content")
# Additional check: wait for all chart containers to have non-empty content
logger.info(f"[DEBUG] Waiting for all charts to have rendered content...")
logger.info("[DEBUG] Waiting for all charts to have rendered content...")
await page.wait_for_function("""() => {
const charts = document.querySelectorAll('.chart-container, .slice_container');
if (charts.length === 0) return true; // No charts to wait for
@@ -223,10 +223,10 @@ class ScreenshotService:
return hasCanvas || hasSvg || hasContent;
});
}""", timeout=60000)
logger.info(f"[DEBUG] All charts have rendered content")
logger.info("[DEBUG] All charts have rendered content")
# Scroll to bottom and back to top to trigger lazy loading of all charts
logger.info(f"[DEBUG] Scrolling to trigger lazy loading...")
logger.info("[DEBUG] Scrolling to trigger lazy loading...")
await page.evaluate("""async () => {
const delay = ms => new Promise(resolve => setTimeout(resolve, ms));
for (let i = 0; i < document.body.scrollHeight; i += 500) {
@@ -241,7 +241,7 @@ class ScreenshotService:
logger.warning(f"[DEBUG] Dashboard content wait failed: {e}, proceeding anyway after delay")
# Final stabilization delay - increased for complex dashboards
logger.info(f"[DEBUG] Final stabilization delay...")
logger.info("[DEBUG] Final stabilization delay...")
await asyncio.sleep(15)
# Logic to handle tabs and full-page capture
@@ -251,7 +251,8 @@ class ScreenshotService:
processed_tabs = set()
async def switch_tabs(depth=0):
if depth > 3: return # Limit recursion depth
if depth > 3:
return # Limit recursion depth
tab_selectors = [
'.ant-tabs-nav-list .ant-tabs-tab',
@@ -262,7 +263,8 @@ class ScreenshotService:
found_tabs = []
for selector in tab_selectors:
found_tabs = await page.locator(selector).all()
if found_tabs: break
if found_tabs:
break
if found_tabs:
logger.info(f"[DEBUG][TabSwitching] Found {len(found_tabs)} tabs at depth {depth}")
@@ -292,7 +294,8 @@ class ScreenshotService:
if "ant-tabs-tab-active" not in (await first_tab.get_attribute("class") or ""):
await first_tab.click()
await asyncio.sleep(1)
except: pass
except Exception:
pass
await switch_tabs()
@@ -423,7 +426,7 @@ class LLMClient:
self.default_model = default_model
# DEBUG: Log initialization parameters (without exposing full API key)
logger.info(f"[LLMClient.__init__] Initializing LLM client:")
logger.info("[LLMClient.__init__] Initializing LLM client:")
logger.info(f"[LLMClient.__init__] Provider Type: {provider_type}")
logger.info(f"[LLMClient.__init__] Base URL: {base_url}")
logger.info(f"[LLMClient.__init__] Default Model: {default_model}")

View File

@@ -3,6 +3,7 @@
# @PURPOSE: Implements a plugin for mapping dataset columns using external database connections or Excel files.
# @LAYER: Plugins
# @RELATION: Inherits from PluginBase. Uses DatasetMapper from superset_tool.
# @RELATION: USES -> TaskContext
# @CONSTRAINT: Must use belief_scope for logging.
# [SECTION: IMPORTS]
@@ -13,6 +14,7 @@ from ..core.logger import logger, belief_scope
from ..core.database import SessionLocal
from ..models.connection import ConnectionConfig
from ..core.utils.dataset_mapper import DatasetMapper
from ..core.task_manager.context import TaskContext
# [/SECTION]
# [DEF:MapperPlugin:Class]
@@ -128,19 +130,27 @@ class MapperPlugin(PluginBase):
# [/DEF:get_schema:Function]
# [DEF:execute:Function]
# @PURPOSE: Executes the dataset mapping logic.
# @PURPOSE: Executes the dataset mapping logic with TaskContext support.
# @PARAM: params (Dict[str, Any]) - Mapping parameters.
# @PARAM: context (Optional[TaskContext]) - Task context for logging with source attribution.
# @PRE: Params contain valid 'env', 'dataset_id', and 'source'. params must be a dictionary.
# @POST: Updates the dataset in Superset.
# @RETURN: Dict[str, Any] - Execution status.
async def execute(self, params: Dict[str, Any]) -> Dict[str, Any]:
async def execute(self, params: Dict[str, Any], context: Optional[TaskContext] = None) -> Dict[str, Any]:
with belief_scope("execute"):
env_name = params.get("env")
dataset_id = params.get("dataset_id")
source = params.get("source")
# Use TaskContext logger if available, otherwise fall back to app logger
log = context.logger if context else logger
# Create sub-loggers for different components
superset_log = log.with_source("superset_api") if context else log
db_log = log.with_source("postgres") if context else log
if not env_name or dataset_id is None or not source:
logger.error("[MapperPlugin.execute][State] Missing required parameters.")
log.error("Missing required parameters: env, dataset_id, source")
raise ValueError("Missing required parameters: env, dataset_id, source")
# Get config and initialize client
@@ -148,7 +158,7 @@ class MapperPlugin(PluginBase):
config_manager = get_config_manager()
env_config = config_manager.get_environment(env_name)
if not env_config:
logger.error(f"[MapperPlugin.execute][State] Environment '{env_name}' not found.")
log.error(f"Environment '{env_name}' not found in configuration.")
raise ValueError(f"Environment '{env_name}' not found in configuration.")
client = SupersetClient(env_config)
@@ -158,7 +168,7 @@ class MapperPlugin(PluginBase):
if source == "postgres":
connection_id = params.get("connection_id")
if not connection_id:
logger.error("[MapperPlugin.execute][State] connection_id is required for postgres source.")
log.error("connection_id is required for postgres source.")
raise ValueError("connection_id is required for postgres source.")
# Load connection from DB
@@ -166,7 +176,7 @@ class MapperPlugin(PluginBase):
try:
conn_config = db.query(ConnectionConfig).filter(ConnectionConfig.id == connection_id).first()
if not conn_config:
logger.error(f"[MapperPlugin.execute][State] Connection {connection_id} not found.")
db_log.error(f"Connection {connection_id} not found.")
raise ValueError(f"Connection {connection_id} not found.")
postgres_config = {
@@ -176,10 +186,11 @@ class MapperPlugin(PluginBase):
'host': conn_config.host,
'port': str(conn_config.port) if conn_config.port else '5432'
}
db_log.debug(f"Loaded connection config for {conn_config.host}:{conn_config.port}/{conn_config.database}")
finally:
db.close()
logger.info(f"[MapperPlugin.execute][Action] Starting mapping for dataset {dataset_id} in {env_name}")
log.info(f"Starting mapping for dataset {dataset_id} in {env_name}")
mapper = DatasetMapper()
@@ -193,10 +204,10 @@ class MapperPlugin(PluginBase):
table_name=params.get("table_name"),
table_schema=params.get("table_schema") or "public"
)
logger.info(f"[MapperPlugin.execute][Success] Mapping completed for dataset {dataset_id}")
superset_log.info(f"Mapping completed for dataset {dataset_id}")
return {"status": "success", "dataset_id": dataset_id}
except Exception as e:
logger.error(f"[MapperPlugin.execute][Failure] Mapping failed: {e}")
log.error(f"Mapping failed: {e}")
raise
# [/DEF:execute:Function]
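The `log = context.logger if context else logger` pattern above (and in the plugins below) assumes a context logger that can spawn source-tagged children via `with_source`; a minimal sketch of the interface these call sites rely on (names and internals are illustrative, the real implementation lives in core.task_manager.context):

# Hypothetical shape of the logger exposed by TaskContext, inferred from the call sites.
class TaskLogger:
    def __init__(self, task_id: str, source: str = "plugin", sink=print):
        self.task_id, self.source, self._sink = task_id, source, sink

    def with_source(self, source: str) -> "TaskLogger":
        # Child logger attributing entries to a specific component (e.g. "superset_api").
        return TaskLogger(self.task_id, source, self._sink)

    def _emit(self, level: str, msg: str):
        self._sink(f"[{self.task_id}][{self.source}][{level}] {msg}")

    def debug(self, msg): self._emit("DEBUG", msg)
    def info(self, msg): self._emit("INFO", msg)
    def warning(self, msg): self._emit("WARNING", msg)
    def error(self, msg): self._emit("ERROR", msg)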

View File

@@ -5,20 +5,20 @@
# @RELATION: IMPLEMENTS -> PluginBase
# @RELATION: DEPENDS_ON -> superset_tool.client
# @RELATION: DEPENDS_ON -> superset_tool.utils
# @RELATION: USES -> TaskContext
from typing import Dict, Any, List
from pathlib import Path
import zipfile
from typing import Dict, Any, Optional
import re
from ..core.plugin_base import PluginBase
from ..core.logger import belief_scope
from ..core.logger import belief_scope, logger as app_logger
from ..core.superset_client import SupersetClient
from ..core.utils.fileio import create_temp_file, update_yamls, create_dashboard_export
from ..core.utils.fileio import create_temp_file
from ..dependencies import get_config_manager
from ..core.migration_engine import MigrationEngine
from ..core.database import SessionLocal
from ..models.mapping import DatabaseMapping, Environment
from ..core.task_manager.context import TaskContext
# [DEF:MigrationPlugin:Class]
# @PURPOSE: Implementation of the migration plugin logic.
@@ -132,11 +132,12 @@ class MigrationPlugin(PluginBase):
# [/DEF:get_schema:Function]
# [DEF:execute:Function]
# @PURPOSE: Executes the dashboard migration logic.
# @PURPOSE: Executes the dashboard migration logic with TaskContext support.
# @PARAM: params (Dict[str, Any]) - Migration parameters.
# @PARAM: context (Optional[TaskContext]) - Task context for logging with source attribution.
# @PRE: Source and target environments must be configured.
# @POST: Selected dashboards are migrated.
async def execute(self, params: Dict[str, Any]):
async def execute(self, params: Dict[str, Any], context: Optional[TaskContext] = None):
with belief_scope("MigrationPlugin.execute"):
source_env_id = params.get("source_env_id")
target_env_id = params.get("target_env_id")
@@ -148,8 +149,8 @@ class MigrationPlugin(PluginBase):
dashboard_regex = params.get("dashboard_regex")
replace_db_config = params.get("replace_db_config", False)
from_db_id = params.get("from_db_id")
to_db_id = params.get("to_db_id")
params.get("from_db_id")
params.get("to_db_id")
# [DEF:MigrationPlugin.execute:Action]
# @PURPOSE: Execute the migration logic with proper task logging.
@@ -157,74 +158,15 @@ class MigrationPlugin(PluginBase):
from ..dependencies import get_task_manager
tm = get_task_manager()
class TaskLoggerProxy:
# [DEF:__init__:Function]
# @PURPOSE: Initializes the proxy logger.
# @PRE: None.
# @POST: Instance is initialized.
def __init__(self):
with belief_scope("__init__"):
# Initialize parent with dummy values since we override methods
pass
# [/DEF:__init__:Function]
# [DEF:debug:Function]
# @PURPOSE: Logs a debug message to the task manager.
# @PRE: msg is a string.
# @POST: Log is added to task manager if task_id exists.
def debug(self, msg, *args, extra=None, **kwargs):
with belief_scope("debug"):
if task_id: tm._add_log(task_id, "DEBUG", msg, extra or {})
# [/DEF:debug:Function]
# [DEF:info:Function]
# @PURPOSE: Logs an info message to the task manager.
# @PRE: msg is a string.
# @POST: Log is added to task manager if task_id exists.
def info(self, msg, *args, extra=None, **kwargs):
with belief_scope("info"):
if task_id: tm._add_log(task_id, "INFO", msg, extra or {})
# [/DEF:info:Function]
# [DEF:warning:Function]
# @PURPOSE: Logs a warning message to the task manager.
# @PRE: msg is a string.
# @POST: Log is added to task manager if task_id exists.
def warning(self, msg, *args, extra=None, **kwargs):
with belief_scope("warning"):
if task_id: tm._add_log(task_id, "WARNING", msg, extra or {})
# [/DEF:warning:Function]
# [DEF:error:Function]
# @PURPOSE: Logs an error message to the task manager.
# @PRE: msg is a string.
# @POST: Log is added to task manager if task_id exists.
def error(self, msg, *args, extra=None, **kwargs):
with belief_scope("error"):
if task_id: tm._add_log(task_id, "ERROR", msg, extra or {})
# [/DEF:error:Function]
# [DEF:critical:Function]
# @PURPOSE: Logs a critical message to the task manager.
# @PRE: msg is a string.
# @POST: Log is added to task manager if task_id exists.
def critical(self, msg, *args, extra=None, **kwargs):
with belief_scope("critical"):
if task_id: tm._add_log(task_id, "ERROR", msg, extra or {})
# [/DEF:critical:Function]
# [DEF:exception:Function]
# @PURPOSE: Logs an exception message to the task manager.
# @PRE: msg is a string.
# @POST: Log is added to task manager if task_id exists.
def exception(self, msg, *args, **kwargs):
with belief_scope("exception"):
if task_id: tm._add_log(task_id, "ERROR", msg, {"exception": True})
# [/DEF:exception:Function]
logger = TaskLoggerProxy()
logger.info(f"[MigrationPlugin][Entry] Starting migration task.")
logger.info(f"[MigrationPlugin][Action] Params: {params}")
# Use TaskContext logger if available, otherwise fall back to app_logger
log = context.logger if context else app_logger
# Create sub-loggers for different components
superset_log = log.with_source("superset_api") if context else log
migration_log = log.with_source("migration") if context else log
log.info("Starting migration task.")
log.debug(f"Params: {params}")
try:
with belief_scope("execute"):
@@ -251,7 +193,7 @@ class MigrationPlugin(PluginBase):
from_env_name = src_env.name
to_env_name = tgt_env.name
logger.info(f"[MigrationPlugin][State] Resolved environments: {from_env_name} -> {to_env_name}")
log.info(f"Resolved environments: {from_env_name} -> {to_env_name}")
from_c = SupersetClient(src_env)
to_c = SupersetClient(tgt_env)
@@ -270,29 +212,36 @@ class MigrationPlugin(PluginBase):
d for d in all_dashboards if re.search(regex_str, d["dashboard_title"], re.IGNORECASE)
]
else:
logger.warning("[MigrationPlugin][State] No selection criteria provided (selected_ids or dashboard_regex).")
log.warning("No selection criteria provided (selected_ids or dashboard_regex).")
return
if not dashboards_to_migrate:
logger.warning("[MigrationPlugin][State] No dashboards found matching criteria.")
log.warning("No dashboards found matching criteria.")
return
# Fetch mappings from database
db_mapping = {}
# Get mappings from params
db_mapping = params.get("db_mappings", {})
if not isinstance(db_mapping, dict):
db_mapping = {}
# Fetch additional mappings from database if requested
if replace_db_config:
db = SessionLocal()
try:
# Find environment IDs by name
src_env = db.query(Environment).filter(Environment.name == from_env_name).first()
tgt_env = db.query(Environment).filter(Environment.name == to_env_name).first()
src_env_db = db.query(Environment).filter(Environment.name == from_env_name).first()
tgt_env_db = db.query(Environment).filter(Environment.name == to_env_name).first()
if src_env and tgt_env:
mappings = db.query(DatabaseMapping).filter(
DatabaseMapping.source_env_id == src_env.id,
DatabaseMapping.target_env_id == tgt_env.id
if src_env_db and tgt_env_db:
stored_mappings = db.query(DatabaseMapping).filter(
DatabaseMapping.source_env_id == src_env_db.id,
DatabaseMapping.target_env_id == tgt_env_db.id
).all()
db_mapping = {m.source_db_uuid: m.target_db_uuid for m in mappings}
logger.info(f"[MigrationPlugin][State] Loaded {len(db_mapping)} database mappings.")
# Provided mappings override stored ones
stored_map_dict = {m.source_db_uuid: m.target_db_uuid for m in stored_mappings}
stored_map_dict.update(db_mapping)
db_mapping = stored_map_dict
log.info(f"Loaded {len(stored_mappings)} database mappings from database.")
finally:
db.close()
@@ -311,7 +260,7 @@ class MigrationPlugin(PluginBase):
if not success and replace_db_config:
# Signal missing mapping and wait (only if we care about mappings)
if task_id:
logger.info(f"[MigrationPlugin][Action] Pausing for missing mapping in task {task_id}")
log.info(f"Pausing for missing mapping in task {task_id}")
# In a real scenario, we'd pass the missing DB info to the frontend
# For this task, we'll just simulate the wait
await tm.wait_for_resolution(task_id)
@@ -333,9 +282,9 @@ class MigrationPlugin(PluginBase):
if success:
to_c.import_dashboard(file_name=tmp_new_zip, dash_id=dash_id, dash_slug=dash_slug)
else:
logger.error(f"[MigrationPlugin][Failure] Failed to transform ZIP for dashboard {title}")
migration_log.error(f"Failed to transform ZIP for dashboard {title}")
logger.info(f"[MigrationPlugin][Success] Dashboard {title} imported.")
superset_log.info(f"Dashboard {title} imported.")
except Exception as exc:
# Check for password error
error_msg = str(exc)
@@ -357,7 +306,7 @@ class MigrationPlugin(PluginBase):
if match_alt:
db_name = match_alt.group(1)
logger.warning(f"[MigrationPlugin][Action] Detected missing password for database: {db_name}")
app_logger.warning(f"[MigrationPlugin][Action] Detected missing password for database: {db_name}")
if task_id:
input_request = {
@@ -376,19 +325,19 @@ class MigrationPlugin(PluginBase):
# Retry import with password
if passwords:
logger.info(f"[MigrationPlugin][Action] Retrying import for {title} with provided passwords.")
app_logger.info(f"[MigrationPlugin][Action] Retrying import for {title} with provided passwords.")
to_c.import_dashboard(file_name=tmp_new_zip, dash_id=dash_id, dash_slug=dash_slug, passwords=passwords)
logger.info(f"[MigrationPlugin][Success] Dashboard {title} imported after password injection.")
app_logger.info(f"[MigrationPlugin][Success] Dashboard {title} imported after password injection.")
# Clear passwords from params after use for security
if "passwords" in task.params:
del task.params["passwords"]
continue
logger.error(f"[MigrationPlugin][Failure] Failed to migrate dashboard {title}: {exc}", exc_info=True)
app_logger.error(f"[MigrationPlugin][Failure] Failed to migrate dashboard {title}: {exc}", exc_info=True)
logger.info("[MigrationPlugin][Exit] Migration finished.")
app_logger.info("[MigrationPlugin][Exit] Migration finished.")
except Exception as e:
logger.critical(f"[MigrationPlugin][Failure] Fatal error during migration: {e}", exc_info=True)
app_logger.critical(f"[MigrationPlugin][Failure] Fatal error during migration: {e}", exc_info=True)
raise e
# [/DEF:MigrationPlugin.execute:Action]
# [/DEF:execute:Function]
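The mapping merge above gives request-supplied `db_mappings` precedence over rows stored in the database; in plain terms (UUIDs are made up for illustration):

stored = {"uuid-src-a": "uuid-tgt-a", "uuid-src-b": "uuid-tgt-b"}   # from DatabaseMapping rows
provided = {"uuid-src-b": "uuid-tgt-override"}                      # from params["db_mappings"]

merged = dict(stored)
merged.update(provided)             # provided mappings win on conflict
assert merged["uuid-src-b"] == "uuid-tgt-override"
assert merged["uuid-src-a"] == "uuid-tgt-a"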

View File

@@ -3,14 +3,16 @@
# @PURPOSE: Implements a plugin for searching text patterns across all datasets in a specific Superset environment.
# @LAYER: Plugins
# @RELATION: Inherits from PluginBase. Uses SupersetClient from core.
# @RELATION: USES -> TaskContext
# @CONSTRAINT: Must use belief_scope for logging.
# [SECTION: IMPORTS]
import re
from typing import Dict, Any, List, Optional
from typing import Dict, Any, Optional
from ..core.plugin_base import PluginBase
from ..core.superset_client import SupersetClient
from ..core.logger import logger, belief_scope
from ..core.task_manager.context import TaskContext
# [/SECTION]
# [DEF:SearchPlugin:Class]
@@ -99,18 +101,26 @@ class SearchPlugin(PluginBase):
# [/DEF:get_schema:Function]
# [DEF:execute:Function]
# @PURPOSE: Executes the dataset search logic.
# @PURPOSE: Executes the dataset search logic with TaskContext support.
# @PARAM: params (Dict[str, Any]) - Search parameters.
# @PARAM: context (Optional[TaskContext]) - Task context for logging with source attribution.
# @PRE: Params contain valid 'env' and 'query'.
# @POST: Returns a dictionary with count and results list.
# @RETURN: Dict[str, Any] - Search results.
async def execute(self, params: Dict[str, Any]) -> Dict[str, Any]:
async def execute(self, params: Dict[str, Any], context: Optional[TaskContext] = None) -> Dict[str, Any]:
with belief_scope("SearchPlugin.execute", f"params={params}"):
env_name = params.get("env")
search_query = params.get("query")
# Use TaskContext logger if available, otherwise fall back to app logger
log = context.logger if context else logger
# Create sub-loggers for different components
log.with_source("superset_api") if context else log
search_log = log.with_source("search") if context else log
if not env_name or not search_query:
logger.error("[SearchPlugin.execute][State] Missing required parameters.")
log.error("Missing required parameters: env, query")
raise ValueError("Missing required parameters: env, query")
# Get config and initialize client
@@ -118,20 +128,20 @@ class SearchPlugin(PluginBase):
config_manager = get_config_manager()
env_config = config_manager.get_environment(env_name)
if not env_config:
logger.error(f"[SearchPlugin.execute][State] Environment '{env_name}' not found.")
log.error(f"Environment '{env_name}' not found in configuration.")
raise ValueError(f"Environment '{env_name}' not found in configuration.")
client = SupersetClient(env_config)
client.authenticate()
logger.info(f"[SearchPlugin.execute][Action] Searching for pattern: '{search_query}' in environment: {env_name}")
log.info(f"Searching for pattern: '{search_query}' in environment: {env_name}")
try:
# Ported logic from search_script.py
_, datasets = client.get_datasets(query={"columns": ["id", "table_name", "sql", "database", "columns"]})
if not datasets:
logger.warning("[SearchPlugin.execute][State] No datasets found.")
search_log.warning("No datasets found.")
return {"count": 0, "results": []}
pattern = re.compile(search_query, re.IGNORECASE)
@@ -155,17 +165,17 @@ class SearchPlugin(PluginBase):
"full_value": value_str
})
logger.info(f"[SearchPlugin.execute][Success] Found matches in {len(results)} locations.")
search_log.info(f"Found matches in {len(results)} locations.")
return {
"count": len(results),
"results": results
}
except re.error as e:
logger.error(f"[SearchPlugin.execute][Failure] Invalid regex pattern: {e}")
search_log.error(f"Invalid regex pattern: {e}")
raise ValueError(f"Invalid regex pattern: {e}")
except Exception as e:
logger.error(f"[SearchPlugin.execute][Failure] Error during search: {e}")
log.error(f"Error during search: {e}")
raise
# [/DEF:execute:Function]
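For context, the search compiles the query case-insensitively and scans the string value of each requested dataset field; a compact illustration of the matching step (field names come from the columns requested above, the result shape is illustrative apart from full_value):

import re

pattern = re.compile("revenue", re.IGNORECASE)
dataset = {"id": 42, "table_name": "fact_sales", "sql": "SELECT Revenue, region FROM sales"}

results = []
for field in ("table_name", "sql"):
    value_str = str(dataset.get(field) or "")
    if pattern.search(value_str):
        results.append({"dataset_id": dataset["id"], "field": field, "full_value": value_str})
# -> one hit on the "sql" field despite the different capitalisation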

View File

@@ -5,6 +5,7 @@
# @LAYER: App
# @RELATION: IMPLEMENTS -> PluginBase
# @RELATION: DEPENDS_ON -> backend.src.models.storage
# @RELATION: USES -> TaskContext
#
# @INVARIANT: All file operations must be restricted to the configured storage root.
@@ -18,8 +19,9 @@ from fastapi import UploadFile
from ...core.plugin_base import PluginBase
from ...core.logger import belief_scope, logger
from ...models.storage import StoredFile, FileCategory, StorageConfig
from ...models.storage import StoredFile, FileCategory
from ...dependencies import get_config_manager
from ...core.task_manager.context import TaskContext
# [/SECTION]
# [DEF:StoragePlugin:Class]
@@ -112,12 +114,21 @@ class StoragePlugin(PluginBase):
# [/DEF:get_schema:Function]
# [DEF:execute:Function]
# @PURPOSE: Executes storage-related tasks (placeholder for PluginBase compliance).
# @PURPOSE: Executes storage-related tasks with TaskContext support.
# @PARAM: params (Dict[str, Any]) - Storage parameters.
# @PARAM: context (Optional[TaskContext]) - Task context for logging with source attribution.
# @PRE: params must match the plugin schema.
# @POST: Task is executed and logged.
async def execute(self, params: Dict[str, Any]):
async def execute(self, params: Dict[str, Any], context: Optional[TaskContext] = None):
with belief_scope("StoragePlugin:execute"):
logger.info(f"[StoragePlugin][Action] Executing with params: {params}")
# Use TaskContext logger if available, otherwise fall back to app logger
log = context.logger if context else logger
# Create sub-loggers for different components
storage_log = log.with_source("storage") if context else log
log.with_source("filesystem") if context else log
storage_log.info(f"Executing with params: {params}")
# [/DEF:execute:Function]
# [DEF:get_storage_root:Function]

View File

@@ -10,7 +10,7 @@
# [SECTION: IMPORTS]
from typing import List, Optional
from pydantic import BaseModel, EmailStr, Field
from pydantic import BaseModel, EmailStr
from datetime import datetime
# [/SECTION]

View File

@@ -1,5 +1,6 @@
# [DEF:backend.src.scripts.create_admin:Module]
#
# @TIER: STANDARD
# @SEMANTICS: admin, setup, user, auth, cli
# @PURPOSE: CLI tool for creating the initial admin user.
# @LAYER: Scripts
@@ -19,7 +20,7 @@ sys.path.append(str(Path(__file__).parent.parent.parent))
from src.core.database import AuthSessionLocal, init_db
from src.core.auth.security import get_password_hash
from src.models.auth import User, Role, Permission
from src.models.auth import User, Role
from src.core.logger import logger, belief_scope
# [/SECTION]

View File

@@ -9,13 +9,12 @@
# [SECTION: IMPORTS]
import sys
import os
from pathlib import Path
# Add src to path
sys.path.append(str(Path(__file__).parent.parent.parent))
from src.core.database import init_db, auth_engine
from src.core.database import init_db
from src.core.logger import logger, belief_scope
from src.scripts.seed_permissions import seed_permissions
# [/SECTION]

View File

@@ -0,0 +1,163 @@
#!/usr/bin/env python3
"""
Script to test dataset-to-dashboard relationships from Superset API.
Usage:
cd backend && .venv/bin/python3 src/scripts/test_dataset_dashboard_relations.py
"""
import json
import sys
from pathlib import Path
# Add src to path (parent of scripts directory)
sys.path.append(str(Path(__file__).parent.parent.parent))
from src.core.superset_client import SupersetClient
from src.core.config_manager import ConfigManager
from src.core.logger import logger
def test_dashboard_dataset_relations():
"""Test fetching dataset-to-dashboard relationships."""
# Load environment from existing config
config_manager = ConfigManager()
environments = config_manager.get_environments()
if not environments:
logger.error("No environments configured!")
return
# Use first available environment
env = environments[0]
logger.info(f"Using environment: {env.name} ({env.url})")
client = SupersetClient(env)
try:
# Authenticate
logger.info("Authenticating to Superset...")
client.authenticate()
logger.info("Authentication successful!")
# Test dashboard ID 13
dashboard_id = 13
logger.info(f"\n=== Fetching Dashboard {dashboard_id} ===")
dashboard = client.network.request(method="GET", endpoint=f"/dashboard/{dashboard_id}")
print("\nDashboard structure:")
print(f" ID: {dashboard.get('id')}")
print(f" Title: {dashboard.get('dashboard_title')}")
print(f" Published: {dashboard.get('published')}")
# Check for slices/charts
if 'slices' in dashboard:
logger.info(f"\n Found {len(dashboard['slices'])} slices/charts in dashboard")
for i, slice_data in enumerate(dashboard['slices'][:5]): # Show first 5
print(f" Slice {i+1}:")
print(f" ID: {slice_data.get('slice_id')}")
print(f" Name: {slice_data.get('slice_name')}")
# Check for datasource_id
if 'datasource_id' in slice_data:
print(f" Datasource ID: {slice_data['datasource_id']}")
if 'datasource_name' in slice_data:
print(f" Datasource Name: {slice_data['datasource_name']}")
if 'datasource_type' in slice_data:
print(f" Datasource Type: {slice_data['datasource_type']}")
else:
logger.warning(" No 'slices' field found in dashboard response")
logger.info(f" Available fields: {list(dashboard.keys())}")
# Test dataset ID 26
dataset_id = 26
logger.info(f"\n=== Fetching Dataset {dataset_id} ===")
dataset = client.get_dataset(dataset_id)
print("\nDataset structure:")
print(f" ID: {dataset.get('id')}")
print(f" Table Name: {dataset.get('table_name')}")
print(f" Schema: {dataset.get('schema')}")
print(f" Database: {dataset.get('database', {}).get('database_name', 'Unknown')}")
# Check for dashboards that use this dataset
logger.info(f"\n=== Finding Dashboards using Dataset {dataset_id} ===")
# Method: Use Superset's related_objects API
try:
logger.info(f" Using /api/v1/dataset/{dataset_id}/related_objects endpoint...")
related_objects = client.network.request(
method="GET",
endpoint=f"/dataset/{dataset_id}/related_objects"
)
logger.info(f" Related objects response type: {type(related_objects)}")
logger.info(f" Related objects keys: {list(related_objects.keys()) if isinstance(related_objects, dict) else 'N/A'}")
# Check for dashboards in related objects
if 'dashboards' in related_objects:
dashboards = related_objects['dashboards']
logger.info(f" Found {len(dashboards)} dashboards using this dataset:")
for dash in dashboards:
logger.info(f" - Dashboard ID {dash.get('id')}: {dash.get('dashboard_title', dash.get('title', 'Unknown'))}")
elif 'result' in related_objects:
# Some Superset versions use 'result' wrapper
result = related_objects['result']
if 'dashboards' in result:
dashboards = result['dashboards']
logger.info(f" Found {len(dashboards)} dashboards using this dataset:")
for dash in dashboards:
logger.info(f" - Dashboard ID {dash.get('id')}: {dash.get('dashboard_title', dash.get('title', 'Unknown'))}")
else:
logger.warning(f" No 'dashboards' key in result. Keys: {list(result.keys())}")
else:
logger.warning(f" No 'dashboards' key in response. Available keys: {list(related_objects.keys())}")
logger.info(f" Full related_objects response:")
print(json.dumps(related_objects, indent=2, default=str)[:1000])
except Exception as e:
logger.error(f" Error fetching related objects: {e}")
import traceback
traceback.print_exc()
# Method 2: Try to use the position_json from dashboard
logger.info(f"\n=== Analyzing Dashboard Position JSON ===")
if 'position_json' in dashboard:
position_data = json.loads(dashboard['position_json'])
logger.info(f" Position data type: {type(position_data)}")
# Look for datasource references
datasource_ids = set()
if isinstance(position_data, dict):
for key, value in position_data.items():
if 'datasource' in key.lower() or key == 'DASHBOARD_VERSION_KEY':
logger.debug(f" Key: {key}, Value type: {type(value)}")
elif isinstance(position_data, list):
logger.info(f" Position data has {len(position_data)} items")
for item in position_data[:3]: # Show first 3
logger.debug(f" Item: {type(item)}, keys: {list(item.keys()) if isinstance(item, dict) else 'N/A'}")
if isinstance(item, dict):
if 'datasource_id' in item:
datasource_ids.add(item['datasource_id'])
if datasource_ids:
logger.info(f" Found datasource IDs: {datasource_ids}")
# Save full response for analysis
output_file = Path(__file__).parent / "dataset_dashboard_analysis.json"
with open(output_file, 'w') as f:
json.dump({
'dashboard': dashboard,
'dataset': dataset
}, f, indent=2, default=str)
logger.info(f"\nFull response saved to: {output_file}")
except Exception as e:
logger.error(f"Error: {e}", exc_info=True)
raise
if __name__ == "__main__":
test_dashboard_dataset_relations()

View File

@@ -0,0 +1,20 @@
# [DEF:backend.src.services:Module]
# @TIER: STANDARD
# @SEMANTICS: services, package, init
# @PURPOSE: Package initialization for services module
# @LAYER: Core
# @RELATION: EXPORTS -> resource_service, mapping_service
# @NOTE: Only export services that don't cause circular imports
# @NOTE: GitService, AuthService, LLMProviderService have circular import issues - import directly when needed
# Lazy loading via module-level __getattr__ (PEP 562): importing this package stays cheap and avoids circular imports in tests
__all__ = ['MappingService', 'ResourceService']
def __getattr__(name):
if name == 'MappingService':
from .mapping_service import MappingService
return MappingService
if name == 'ResourceService':
from .resource_service import ResourceService
return ResourceService
raise AttributeError(f"module {__name__!r} has no attribute {name!r}")

View File

@@ -0,0 +1,212 @@
# [DEF:backend.src.services.__tests__.test_resource_service:Module]
# @TIER: STANDARD
# @PURPOSE: Unit tests for ResourceService
# @LAYER: Service
# @RELATION: TESTS -> backend.src.services.resource_service
# @RELATION: VERIFIES -> ResourceService
import pytest
from unittest.mock import MagicMock, patch, AsyncMock
from datetime import datetime
# [DEF:test_get_dashboards_with_status:Function]
# @TEST: get_dashboards_with_status returns dashboards with git and task status
# @PRE: SupersetClient returns dashboard list
# @POST: Each dashboard has git_status and last_task fields
@pytest.mark.asyncio
async def test_get_dashboards_with_status():
with patch("src.services.resource_service.SupersetClient") as mock_client, \
patch("src.services.resource_service.GitService"):
from src.services.resource_service import ResourceService
service = ResourceService()
# Mock Superset response
mock_client.return_value.get_dashboards_summary.return_value = [
{"id": 1, "title": "Dashboard 1", "slug": "dash-1"},
{"id": 2, "title": "Dashboard 2", "slug": "dash-2"}
]
# Mock tasks
mock_task = MagicMock()
mock_task.id = "task-123"
mock_task.status = "SUCCESS"
mock_task.params = {"resource_id": "dashboard-1"}
mock_task.created_at = datetime.now()
env = MagicMock()
env.id = "prod"
result = await service.get_dashboards_with_status(env, [mock_task])
assert len(result) == 2
assert result[0]["id"] == 1
assert "git_status" in result[0]
assert "last_task" in result[0]
assert result[0]["last_task"]["task_id"] == "task-123"
# [/DEF:test_get_dashboards_with_status:Function]
# [DEF:test_get_datasets_with_status:Function]
# @TEST: get_datasets_with_status returns datasets with task status
# @PRE: SupersetClient returns dataset list
# @POST: Each dataset has last_task field
@pytest.mark.asyncio
async def test_get_datasets_with_status():
with patch("src.services.resource_service.SupersetClient") as mock_client:
from src.services.resource_service import ResourceService
service = ResourceService()
# Mock Superset response
mock_client.return_value.get_datasets_summary.return_value = [
{"id": 1, "table_name": "users", "schema": "public", "database": "app"},
{"id": 2, "table_name": "orders", "schema": "public", "database": "app"}
]
# Mock tasks
mock_task = MagicMock()
mock_task.id = "task-456"
mock_task.status = "RUNNING"
mock_task.params = {"resource_id": "dataset-1"}
mock_task.created_at = datetime.now()
env = MagicMock()
env.id = "prod"
result = await service.get_datasets_with_status(env, [mock_task])
assert len(result) == 2
assert result[0]["table_name"] == "users"
assert "last_task" in result[0]
assert result[0]["last_task"]["task_id"] == "task-456"
assert result[0]["last_task"]["status"] == "RUNNING"
# [/DEF:test_get_datasets_with_status:Function]
# [DEF:test_get_activity_summary:Function]
# @TEST: get_activity_summary returns active count and recent tasks
# @PRE: tasks list provided
# @POST: Returns dict with active_count and recent_tasks
def test_get_activity_summary():
from src.services.resource_service import ResourceService
service = ResourceService()
# Create mock tasks
task1 = MagicMock()
task1.id = "task-1"
task1.status = "RUNNING"
task1.params = {"resource_name": "Dashboard 1", "resource_type": "dashboard"}
task1.created_at = datetime(2024, 1, 1, 10, 0, 0)
task2 = MagicMock()
task2.id = "task-2"
task2.status = "SUCCESS"
task2.params = {"resource_name": "Dataset 1", "resource_type": "dataset"}
task2.created_at = datetime(2024, 1, 1, 9, 0, 0)
task3 = MagicMock()
task3.id = "task-3"
task3.status = "WAITING_INPUT"
task3.params = {"resource_name": "Dashboard 2", "resource_type": "dashboard"}
task3.created_at = datetime(2024, 1, 1, 8, 0, 0)
result = service.get_activity_summary([task1, task2, task3])
assert result["active_count"] == 2 # RUNNING + WAITING_INPUT
assert len(result["recent_tasks"]) == 3
# [/DEF:test_get_activity_summary:Function]
# [DEF:test_get_git_status_for_dashboard_no_repo:Function]
# @TEST: _get_git_status_for_dashboard returns None when no repo exists
# @PRE: GitService returns None for repo
# @POST: Returns None
def test_get_git_status_for_dashboard_no_repo():
with patch("src.services.resource_service.GitService") as mock_git:
from src.services.resource_service import ResourceService
service = ResourceService()
mock_git.return_value.get_repo.return_value = None
result = service._get_git_status_for_dashboard(123)
assert result is None
# [/DEF:test_get_git_status_for_dashboard_no_repo:Function]
# [DEF:test_get_last_task_for_resource:Function]
# @TEST: _get_last_task_for_resource returns most recent task for resource
# @PRE: tasks list with matching resource_id
# @POST: Returns task summary with task_id and status
def test_get_last_task_for_resource():
from src.services.resource_service import ResourceService
service = ResourceService()
# Create mock tasks
task1 = MagicMock()
task1.id = "task-old"
task1.status = "SUCCESS"
task1.params = {"resource_id": "dashboard-1"}
task1.created_at = datetime(2024, 1, 1, 10, 0, 0)
task2 = MagicMock()
task2.id = "task-new"
task2.status = "RUNNING"
task2.params = {"resource_id": "dashboard-1"}
task2.created_at = datetime(2024, 1, 1, 12, 0, 0)
result = service._get_last_task_for_resource("dashboard-1", [task1, task2])
assert result is not None
assert result["task_id"] == "task-new" # Most recent
assert result["status"] == "RUNNING"
# [/DEF:test_get_last_task_for_resource:Function]
# [DEF:test_extract_resource_name_from_task:Function]
# @TEST: _extract_resource_name_from_task extracts name from params
# @PRE: task has resource_name in params
# @POST: Returns resource name or fallback
def test_extract_resource_name_from_task():
from src.services.resource_service import ResourceService
service = ResourceService()
# Task with resource_name
task = MagicMock()
task.id = "task-123"
task.params = {"resource_name": "My Dashboard"}
result = service._extract_resource_name_from_task(task)
assert result == "My Dashboard"
# Task without resource_name
task2 = MagicMock()
task2.id = "task-456"
task2.params = {}
result2 = service._extract_resource_name_from_task(task2)
assert "task-456" in result2
# [/DEF:test_extract_resource_name_from_task:Function]
# [/DEF:backend.src.services.__tests__.test_resource_service:Module]

View File

@@ -10,11 +10,11 @@
# @INVARIANT: Authentication must verify both credentials and account status.
# [SECTION: IMPORTS]
from typing import Optional, Dict, Any, List
from typing import Dict, Any
from sqlalchemy.orm import Session
from ..models.auth import User, Role
from ..core.auth.repository import AuthRepository
from ..core.auth.security import verify_password, get_password_hash
from ..core.auth.security import verify_password
from ..core.auth.jwt import create_access_token
from ..core.logger import belief_scope
# [/SECTION]

View File

@@ -10,11 +10,10 @@
# @INVARIANT: All Git operations must be performed on a valid local directory.
import os
import shutil
import httpx
from git import Repo, RemoteProgress
from git import Repo
from fastapi import HTTPException
from typing import List, Optional
from typing import List
from datetime import datetime
from src.core.logger import logger, belief_scope
from src.models.git import GitProvider
@@ -167,7 +166,7 @@ class GitService:
# Handle empty repository case (no commits)
if not repo.heads and not repo.remotes:
logger.warning(f"[create_branch][Action] Repository is empty. Creating initial commit to enable branching.")
logger.warning("[create_branch][Action] Repository is empty. Creating initial commit to enable branching.")
readme_path = os.path.join(repo.working_dir, "README.md")
if not os.path.exists(readme_path):
with open(readme_path, "w") as f:
@@ -178,7 +177,7 @@ class GitService:
# Verify source branch exists
try:
repo.commit(from_branch)
except:
except Exception:
logger.warning(f"[create_branch][Action] Source branch {from_branch} not found, using HEAD")
from_branch = repo.head

View File

@@ -9,49 +9,80 @@
from typing import List, Optional
from sqlalchemy.orm import Session
from ..models.llm import LLMProvider
from ..plugins.llm_analysis.models import LLMProviderConfig, LLMProviderType
from ..plugins.llm_analysis.models import LLMProviderConfig
from ..core.logger import belief_scope, logger
from cryptography.fernet import Fernet
import os
# [DEF:EncryptionManager:Class]
# @TIER: CRITICAL
# @PURPOSE: Handles encryption and decryption of sensitive data like API keys.
# @INVARIANT: Uses a secret key from environment or a default one (fallback only for dev).
class EncryptionManager:
# @INVARIANT: Uses a secret key from environment or a default one (fallback only for dev).
# [DEF:EncryptionManager.__init__:Function]
# @PURPOSE: Initialize the encryption manager with a Fernet key.
# @PRE: ENCRYPTION_KEY env var must be set or use default dev key.
# @POST: Fernet instance ready for encryption/decryption.
def __init__(self):
self.key = os.getenv("ENCRYPTION_KEY", "ZcytYzi0iHIl4Ttr-GdAEk117aGRogkGvN3wiTxrPpE=").encode()
self.fernet = Fernet(self.key)
# [/DEF:EncryptionManager.__init__:Function]
# [DEF:EncryptionManager.encrypt:Function]
# @PURPOSE: Encrypt a plaintext string.
# @PRE: data must be a non-empty string.
# @POST: Returns encrypted string.
def encrypt(self, data: str) -> str:
return self.fernet.encrypt(data.encode()).decode()
# [/DEF:EncryptionManager.encrypt:Function]
# [DEF:EncryptionManager.decrypt:Function]
# @PURPOSE: Decrypt an encrypted string.
# @PRE: encrypted_data must be a valid Fernet-encrypted string.
# @POST: Returns original plaintext string.
def decrypt(self, encrypted_data: str) -> str:
return self.fernet.decrypt(encrypted_data.encode()).decode()
# [/DEF:EncryptionManager.decrypt:Function]
# [/DEF:EncryptionManager:Class]
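A quick illustration of the Fernet round trip the class wraps; for anything beyond local development, ENCRYPTION_KEY should be generated per deployment rather than relying on the hard-coded fallback above:

from cryptography.fernet import Fernet

key = Fernet.generate_key()              # e.g. export ENCRYPTION_KEY=<this value>
fernet = Fernet(key)

token = fernet.encrypt(b"sk-my-llm-api-key").decode()   # what gets stored in LLMProvider.api_key
plain = fernet.decrypt(token.encode()).decode()
assert plain == "sk-my-llm-api-key"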
# [DEF:LLMProviderService:Class]
# @TIER: STANDARD
# @PURPOSE: Service to manage LLM provider lifecycle.
class LLMProviderService:
# [DEF:LLMProviderService.__init__:Function]
# @PURPOSE: Initialize the service with database session.
# @PRE: db must be a valid SQLAlchemy Session.
# @POST: Service ready for provider operations.
def __init__(self, db: Session):
self.db = db
self.encryption = EncryptionManager()
# [/DEF:LLMProviderService.__init__:Function]
# [DEF:get_all_providers:Function]
# @TIER: STANDARD
# @PURPOSE: Returns all configured LLM providers.
# @PRE: Database connection must be active.
# @POST: Returns list of all LLMProvider records.
def get_all_providers(self) -> List[LLMProvider]:
with belief_scope("get_all_providers"):
return self.db.query(LLMProvider).all()
# [/DEF:get_all_providers:Function]
# [DEF:get_provider:Function]
# @TIER: STANDARD
# @PURPOSE: Returns a single LLM provider by ID.
# @PRE: provider_id must be a valid string.
# @POST: Returns LLMProvider or None if not found.
def get_provider(self, provider_id: str) -> Optional[LLMProvider]:
with belief_scope("get_provider"):
return self.db.query(LLMProvider).filter(LLMProvider.id == provider_id).first()
# [/DEF:get_provider:Function]
# [DEF:create_provider:Function]
# @TIER: STANDARD
# @PURPOSE: Creates a new LLM provider with encrypted API key.
# @PRE: config must contain valid provider configuration.
# @POST: New provider created and persisted to database.
def create_provider(self, config: LLMProviderConfig) -> LLMProvider:
with belief_scope("create_provider"):
encrypted_key = self.encryption.encrypt(config.api_key)
@@ -70,7 +101,10 @@ class LLMProviderService:
# [/DEF:create_provider:Function]
# [DEF:update_provider:Function]
# @TIER: STANDARD
# @PURPOSE: Updates an existing LLM provider.
# @PRE: provider_id must exist, config must be valid.
# @POST: Provider updated and persisted to database.
def update_provider(self, provider_id: str, config: LLMProviderConfig) -> Optional[LLMProvider]:
with belief_scope("update_provider"):
db_provider = self.get_provider(provider_id)
@@ -92,7 +126,10 @@ class LLMProviderService:
# [/DEF:update_provider:Function]
# [DEF:delete_provider:Function]
# @TIER: STANDARD
# @PURPOSE: Deletes an LLM provider.
# @PRE: provider_id must exist.
# @POST: Provider removed from database.
def delete_provider(self, provider_id: str) -> bool:
with belief_scope("delete_provider"):
db_provider = self.get_provider(provider_id)
@@ -104,7 +141,10 @@ class LLMProviderService:
# [/DEF:delete_provider:Function]
# [DEF:get_decrypted_api_key:Function]
# @TIER: STANDARD
# @PURPOSE: Returns the decrypted API key for a provider.
# @PRE: provider_id must exist with valid encrypted key.
# @POST: Returns decrypted API key or None on failure.
def get_decrypted_api_key(self, provider_id: str) -> Optional[str]:
with belief_scope("get_decrypted_api_key"):
db_provider = self.get_provider(provider_id)

View File

@@ -0,0 +1,251 @@
# [DEF:backend.src.services.resource_service:Module]
# @TIER: STANDARD
# @SEMANTICS: service, resources, dashboards, datasets, tasks, git
# @PURPOSE: Shared service for fetching resource data with Git status and task status
# @LAYER: Service
# @RELATION: DEPENDS_ON -> backend.src.core.superset_client
# @RELATION: DEPENDS_ON -> backend.src.core.task_manager
# @RELATION: DEPENDS_ON -> backend.src.services.git_service
# @INVARIANT: All resources include metadata about their current state
# [SECTION: IMPORTS]
from typing import List, Dict, Optional, Any
from ..core.superset_client import SupersetClient
from ..core.task_manager.models import Task
from ..services.git_service import GitService
from ..core.logger import logger, belief_scope
# [/SECTION]
# [DEF:ResourceService:Class]
# @PURPOSE: Provides centralized access to resource data with enhanced metadata
class ResourceService:
# [DEF:__init__:Function]
# @PURPOSE: Initialize the resource service with dependencies
# @PRE: None
# @POST: ResourceService is ready to fetch resources
def __init__(self):
with belief_scope("ResourceService.__init__"):
self.git_service = GitService()
logger.info("[ResourceService][Action] Initialized ResourceService")
# [/DEF:__init__:Function]
# [DEF:get_dashboards_with_status:Function]
# @PURPOSE: Fetch dashboards from environment with Git status and last task status
# @PRE: env is a valid Environment object
# @POST: Returns list of dashboards with enhanced metadata
# @PARAM: env (Environment) - The environment to fetch from
# @PARAM: tasks (List[Task]) - List of tasks to check for status
# @RETURN: List[Dict] - Dashboards with git_status and last_task fields
# @RELATION: CALLS -> SupersetClient.get_dashboards_summary
# @RELATION: CALLS -> self._get_git_status_for_dashboard
# @RELATION: CALLS -> self._get_last_task_for_resource
async def get_dashboards_with_status(
self,
env: Any,
tasks: Optional[List[Task]] = None
) -> List[Dict[str, Any]]:
with belief_scope("get_dashboards_with_status", f"env={env.id}"):
client = SupersetClient(env)
dashboards = client.get_dashboards_summary()
# Enhance each dashboard with Git status and task status
result = []
for dashboard in dashboards:
# dashboard is already a dict, no need to call .dict()
dashboard_dict = dashboard
dashboard_id = dashboard_dict.get('id')
# Get Git status if repo exists
git_status = self._get_git_status_for_dashboard(dashboard_id)
dashboard_dict['git_status'] = git_status
# Get last task status
last_task = self._get_last_task_for_resource(
f"dashboard-{dashboard_id}",
tasks
)
dashboard_dict['last_task'] = last_task
result.append(dashboard_dict)
logger.info(f"[ResourceService][Coherence:OK] Fetched {len(result)} dashboards with status")
return result
# [/DEF:get_dashboards_with_status:Function]
# [DEF:get_datasets_with_status:Function]
# @PURPOSE: Fetch datasets from environment with mapping progress and last task status
# @PRE: env is a valid Environment object
# @POST: Returns list of datasets with enhanced metadata
# @PARAM: env (Environment) - The environment to fetch from
# @PARAM: tasks (List[Task]) - List of tasks to check for status
# @RETURN: List[Dict] - Datasets with mapped_fields and last_task fields
# @RELATION: CALLS -> SupersetClient.get_datasets_summary
# @RELATION: CALLS -> self._get_last_task_for_resource
async def get_datasets_with_status(
self,
env: Any,
tasks: Optional[List[Task]] = None
) -> List[Dict[str, Any]]:
with belief_scope("get_datasets_with_status", f"env={env.id}"):
client = SupersetClient(env)
datasets = client.get_datasets_summary()
# Enhance each dataset with task status
result = []
for dataset in datasets:
# dataset is already a dict, no need to call .dict()
dataset_dict = dataset
dataset_id = dataset_dict.get('id')
# Get last task status
last_task = self._get_last_task_for_resource(
f"dataset-{dataset_id}",
tasks
)
dataset_dict['last_task'] = last_task
result.append(dataset_dict)
logger.info(f"[ResourceService][Coherence:OK] Fetched {len(result)} datasets with status")
return result
# [/DEF:get_datasets_with_status:Function]
# [DEF:get_activity_summary:Function]
# @PURPOSE: Get summary of active and recent tasks for the activity indicator
# @PRE: tasks is a list of Task objects
# @POST: Returns summary with active_count and recent_tasks
# @PARAM: tasks (List[Task]) - List of tasks to summarize
# @RETURN: Dict - Activity summary
def get_activity_summary(self, tasks: List[Task]) -> Dict[str, Any]:
with belief_scope("get_activity_summary"):
# Count active (RUNNING, WAITING_INPUT) tasks
active_tasks = [
t for t in tasks
if t.status in ['RUNNING', 'WAITING_INPUT']
]
# Get recent tasks (last 5)
recent_tasks = sorted(
tasks,
key=lambda t: t.created_at,
reverse=True
)[:5]
# Format recent tasks for frontend
recent_tasks_formatted = []
for task in recent_tasks:
resource_name = self._extract_resource_name_from_task(task)
recent_tasks_formatted.append({
'task_id': str(task.id),
'resource_name': resource_name,
'resource_type': self._extract_resource_type_from_task(task),
'status': task.status,
'started_at': task.created_at.isoformat() if task.created_at else None
})
return {
'active_count': len(active_tasks),
'recent_tasks': recent_tasks_formatted
}
# [/DEF:get_activity_summary:Function]
# [DEF:_get_git_status_for_dashboard:Function]
# @PURPOSE: Get Git sync status for a dashboard
# @PRE: dashboard_id is a valid integer
# @POST: Returns git status or None if no repo exists
# @PARAM: dashboard_id (int) - The dashboard ID
# @RETURN: Optional[Dict] - Git status with branch and sync_status
# @RELATION: CALLS -> GitService.get_repo
def _get_git_status_for_dashboard(self, dashboard_id: int) -> Optional[Dict[str, Any]]:
try:
repo = self.git_service.get_repo(dashboard_id)
if not repo:
return None
# Check if there are uncommitted changes
try:
# Get current branch
branch = repo.active_branch.name
# Check for uncommitted changes
is_dirty = repo.is_dirty()
# Check for unpushed commits
unpushed = len(list(repo.iter_commits(f'{branch}@{{u}}..{branch}'))) if '@{u}' in str(repo.refs) else 0
if is_dirty or unpushed > 0:
sync_status = 'DIFF'
else:
sync_status = 'OK'
return {
'branch': branch,
'sync_status': sync_status
}
except Exception:
logger.warning(f"[ResourceService][Warning] Failed to get git status for dashboard {dashboard_id}")
return None
except Exception:
# No repo exists for this dashboard
return None
# [/DEF:_get_git_status_for_dashboard:Function]
# [DEF:_get_last_task_for_resource:Function]
# @PURPOSE: Get the most recent task for a specific resource
# @PRE: resource_id is a valid string
# @POST: Returns task summary or None if no tasks found
# @PARAM: resource_id (str) - The resource identifier (e.g., "dashboard-123")
# @PARAM: tasks (Optional[List[Task]]) - List of tasks to search
# @RETURN: Optional[Dict] - Task summary with task_id and status
def _get_last_task_for_resource(
self,
resource_id: str,
tasks: Optional[List[Task]] = None
) -> Optional[Dict[str, Any]]:
if not tasks:
return None
# Filter tasks for this resource
resource_tasks = []
for task in tasks:
params = task.params or {}
if params.get('resource_id') == resource_id:
resource_tasks.append(task)
if not resource_tasks:
return None
# Get most recent task
last_task = max(resource_tasks, key=lambda t: t.created_at)
return {
'task_id': str(last_task.id),
'status': last_task.status
}
# [/DEF:_get_last_task_for_resource:Function]
# [DEF:_extract_resource_name_from_task:Function]
# @PURPOSE: Extract resource name from task params
# @PRE: task is a valid Task object
# @POST: Returns resource name or task ID
# @PARAM: task (Task) - The task to extract from
# @RETURN: str - Resource name or fallback
def _extract_resource_name_from_task(self, task: Task) -> str:
params = task.params or {}
return params.get('resource_name', f"Task {task.id}")
# [/DEF:_extract_resource_name_from_task:Function]
# [DEF:_extract_resource_type_from_task:Function]
# @PURPOSE: Extract resource type from task params
# @PRE: task is a valid Task object
# @POST: Returns resource type or 'unknown'
# @PARAM: task (Task) - The task to extract from
# @RETURN: str - Resource type
def _extract_resource_type_from_task(self, task: Task) -> str:
params = task.params or {}
return params.get('resource_type', 'unknown')
# [/DEF:_extract_resource_type_from_task:Function]
# [/DEF:ResourceService:Class]
# [/DEF:backend.src.services.resource_service:Module]
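The unpushed-commit check in `_get_git_status_for_dashboard` infers the upstream by searching `repo.refs` as a string; as an illustration of what that heuristic approximates, the same check can be expressed through GitPython's `tracking_branch()` helper (sketch only, not the module's implementation):

from git import Repo

def count_unpushed(repo: Repo) -> int:
    branch = repo.active_branch
    upstream = branch.tracking_branch()          # None when no upstream is configured
    if upstream is None:
        return 0
    # Commits reachable from the local branch but not from its upstream.
    return sum(1 for _ in repo.iter_commits(f"{upstream.name}..{branch.name}"))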

Binary file not shown.

View File

@@ -1,8 +1,6 @@
#!/usr/bin/env python3
"""Debug script to test Superset API authentication"""
import json
import requests
from pprint import pprint
from src.core.superset_client import SupersetClient
from src.core.config_manager import ConfigManager
@@ -53,7 +51,7 @@ def main():
print("\n--- Response Headers ---")
pprint(dict(ui_response.headers))
print(f"\n--- Response Content Preview (200 chars) ---")
print("\n--- Response Content Preview (200 chars) ---")
print(repr(ui_response.text[:200]))
if ui_response.status_code == 200:

View File

@@ -19,17 +19,17 @@ db = SessionLocal()
provider = db.query(LLMProvider).filter(LLMProvider.id == '6c899741-4108-4196-aea4-f38ad2f0150e').first()
if provider:
print(f"\nProvider found:")
print("\nProvider found:")
print(f" ID: {provider.id}")
print(f" Name: {provider.name}")
print(f" Encrypted API Key (first 50 chars): {provider.api_key[:50]}")
print(f" Encrypted API Key Length: {len(provider.api_key)}")
# Test decryption
print(f"\nAttempting decryption...")
print("\nAttempting decryption...")
try:
decrypted = fernet.decrypt(provider.api_key.encode()).decode()
print(f"Decryption successful!")
print("Decryption successful!")
print(f" Decrypted key length: {len(decrypted)}")
print(f" Decrypted key (first 8 chars): {decrypted[:8]}")
print(f" Decrypted key is empty: {len(decrypted) == 0}")

View File

@@ -1,5 +1,4 @@
import sys
import os
from pathlib import Path
# Add src to path
@@ -8,7 +7,7 @@ sys.path.append(str(Path(__file__).parent.parent / "src"))
import pytest
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from src.core.database import Base, get_auth_db
from src.core.database import Base
from src.models.auth import User, Role, Permission, ADGroupMapping
from src.services.auth_service import AuthService
from src.core.auth.repository import AuthRepository

View File

@@ -0,0 +1,73 @@
# [DEF:backend.tests.test_dashboards_api:Module]
# @TIER: STANDARD
# @PURPOSE: Contract-driven tests for Dashboard Hub API
# @LAYER: Domain (Tests)
# @SEMANTICS: tests, dashboards, api, contract
# @RELATION: TESTS -> backend.src.api.routes.dashboards
from fastapi.testclient import TestClient
from unittest.mock import MagicMock, patch
from src.app import app
from src.api.routes.dashboards import DashboardsResponse
client = TestClient(app)
# [DEF:test_get_dashboards_success:Function]
# @TEST: GET /api/dashboards returns 200 and valid schema
# @PRE: env_id exists
# @POST: Response matches DashboardsResponse schema
def test_get_dashboards_success():
with patch("src.api.routes.dashboards.get_config_manager") as mock_config, \
patch("src.api.routes.dashboards.get_resource_service") as mock_service, \
patch("src.api.routes.dashboards.has_permission") as mock_perm:
# Mock environment
mock_env = MagicMock()
mock_env.id = "prod"
mock_config.return_value.get_environments.return_value = [mock_env]
# Mock resource service response
mock_service.return_value.get_dashboards_with_status.return_value = [
{
"id": 1,
"title": "Sales Report",
"slug": "sales",
"git_status": {"branch": "main", "sync_status": "OK"},
"last_task": {"task_id": "task-1", "status": "SUCCESS"}
}
]
# Mock permission
mock_perm.return_value = lambda: True
response = client.get("/api/dashboards?env_id=prod")
assert response.status_code == 200
data = response.json()
assert "dashboards" in data
assert len(data["dashboards"]) == 1
assert data["dashboards"][0]["title"] == "Sales Report"
# Validate against Pydantic model
DashboardsResponse(**data)
# [/DEF:test_get_dashboards_success:Function]
# [DEF:test_get_dashboards_env_not_found:Function]
# @TEST: GET /api/dashboards returns 404 if env_id missing
# @PRE: env_id does not exist
# @POST: Returns 404 error
def test_get_dashboards_env_not_found():
with patch("src.api.routes.dashboards.get_config_manager") as mock_config, \
patch("src.api.routes.dashboards.has_permission") as mock_perm:
mock_config.return_value.get_environments.return_value = []
mock_perm.return_value = lambda: True
response = client.get("/api/dashboards?env_id=nonexistent")
assert response.status_code == 404
assert "Environment not found" in response.json()["detail"]
# [/DEF:test_get_dashboards_env_not_found:Function]
# [/DEF:backend.tests.test_dashboards_api:Module]

View File

@@ -0,0 +1,395 @@
# [DEF:test_log_persistence:Module]
# @SEMANTICS: test, log, persistence, unit_test
# @PURPOSE: Unit tests for TaskLogPersistenceService.
# @LAYER: Test
# @RELATION: TESTS -> TaskLogPersistenceService
# @TIER: STANDARD
# [SECTION: IMPORTS]
from datetime import datetime
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from src.core.task_manager.persistence import TaskLogPersistenceService
from src.core.task_manager.models import LogEntry
# [/SECTION]
# [DEF:TestLogPersistence:Class]
# @PURPOSE: Test suite for TaskLogPersistenceService.
# @TIER: STANDARD
class TestLogPersistence:
# [DEF:setup_class:Function]
# @PURPOSE: Setup test database and service instance.
# @PRE: None.
# @POST: In-memory database and service instance created.
@classmethod
def setup_class(cls):
"""Create an in-memory database for testing."""
cls.engine = create_engine("sqlite:///:memory:")
cls.SessionLocal = sessionmaker(bind=cls.engine)
cls.service = TaskLogPersistenceService(cls.engine)
# [/DEF:setup_class:Function]
# [DEF:teardown_class:Function]
# @PURPOSE: Clean up test database.
# @PRE: None.
# @POST: Database disposed.
@classmethod
def teardown_class(cls):
"""Dispose of the database engine."""
cls.engine.dispose()
# [/DEF:teardown_class:Function]
# [DEF:setup_method:Function]
# @PURPOSE: Setup for each test method.
# @PRE: None.
# @POST: Fresh database session created.
def setup_method(self):
"""Create a new session for each test."""
self.session = self.SessionLocal()
# [/DEF:setup_method:Function]
# [DEF:teardown_method:Function]
# @PURPOSE: Cleanup after each test method.
# @PRE: None.
# @POST: Session closed and rolled back.
def teardown_method(self):
"""Close the session after each test."""
self.session.close()
# [/DEF:teardown_method:Function]
# [DEF:test_add_log_single:Function]
# @PURPOSE: Test adding a single log entry.
# @PRE: Service and session initialized.
# @POST: Log entry persisted to database.
def test_add_log_single(self):
"""Test adding a single log entry."""
entry = LogEntry(
task_id="test-task-1",
timestamp=datetime.now(),
level="INFO",
source="test_source",
message="Test message"
)
self.service.add_log(entry)
# Query the database
result = self.session.query(LogEntry).filter_by(task_id="test-task-1").first()
assert result is not None
assert result.level == "INFO"
assert result.source == "test_source"
assert result.message == "Test message"
# [/DEF:test_add_log_single:Function]
# [DEF:test_add_log_batch:Function]
# @PURPOSE: Test adding multiple log entries in batch.
# @PRE: Service and session initialized.
# @POST: All log entries persisted to database.
def test_add_log_batch(self):
"""Test adding multiple log entries in batch."""
entries = [
LogEntry(
task_id="test-task-2",
timestamp=datetime.now(),
level="INFO",
source="source1",
message="Message 1"
),
LogEntry(
task_id="test-task-2",
timestamp=datetime.now(),
level="WARNING",
source="source2",
message="Message 2"
),
LogEntry(
task_id="test-task-2",
timestamp=datetime.now(),
level="ERROR",
source="source3",
message="Message 3"
),
]
self.service.add_logs(entries)
# Query the database
results = self.session.query(LogEntry).filter_by(task_id="test-task-2").all()
assert len(results) == 3
assert results[0].level == "INFO"
assert results[1].level == "WARNING"
assert results[2].level == "ERROR"
# [/DEF:test_add_log_batch:Function]
# [DEF:test_get_logs_by_task_id:Function]
# @PURPOSE: Test retrieving logs by task ID.
# @PRE: Service and session initialized, logs exist.
# @POST: Returns logs for the specified task.
def test_get_logs_by_task_id(self):
"""Test retrieving logs by task ID."""
# Add test logs
entries = [
LogEntry(
task_id="test-task-3",
timestamp=datetime.now(),
level="INFO",
source="source1",
message=f"Message {i}"
)
for i in range(5)
]
self.service.add_logs(entries)
# Retrieve logs
logs = self.service.get_logs("test-task-3")
assert len(logs) == 5
assert all(log.task_id == "test-task-3" for log in logs)
# [/DEF:test_get_logs_by_task_id:Function]
# [DEF:test_get_logs_with_filters:Function]
# @PURPOSE: Test retrieving logs with level and source filters.
# @PRE: Service and session initialized, logs exist.
# @POST: Returns filtered logs.
def test_get_logs_with_filters(self):
"""Test retrieving logs with level and source filters."""
# Add test logs with different levels and sources
entries = [
LogEntry(
task_id="test-task-4",
timestamp=datetime.now(),
level="INFO",
source="api",
message="Info message"
),
LogEntry(
task_id="test-task-4",
timestamp=datetime.now(),
level="WARNING",
source="api",
message="Warning message"
),
LogEntry(
task_id="test-task-4",
timestamp=datetime.now(),
level="ERROR",
source="storage",
message="Error message"
),
]
self.service.add_logs(entries)
# Test level filter
warning_logs = self.service.get_logs("test-task-4", level="WARNING")
assert len(warning_logs) == 1
assert warning_logs[0].level == "WARNING"
# Test source filter
api_logs = self.service.get_logs("test-task-4", source="api")
assert len(api_logs) == 2
assert all(log.source == "api" for log in api_logs)
# Test combined filters
api_warning_logs = self.service.get_logs("test-task-4", level="WARNING", source="api")
assert len(api_warning_logs) == 1
# [/DEF:test_get_logs_with_filters:Function]
# [DEF:test_get_logs_with_pagination:Function]
# @PURPOSE: Test retrieving logs with pagination.
# @PRE: Service and session initialized, logs exist.
# @POST: Returns paginated logs.
def test_get_logs_with_pagination(self):
"""Test retrieving logs with pagination."""
# Add 15 test logs
entries = [
LogEntry(
task_id="test-task-5",
timestamp=datetime.now(),
level="INFO",
source="test",
message=f"Message {i}"
)
for i in range(15)
]
self.service.add_logs(entries)
# Test first page
page1 = self.service.get_logs("test-task-5", limit=10, offset=0)
assert len(page1) == 10
# Test second page
page2 = self.service.get_logs("test-task-5", limit=10, offset=10)
assert len(page2) == 5
# [/DEF:test_get_logs_with_pagination:Function]
# [DEF:test_get_logs_with_search:Function]
# @PURPOSE: Test retrieving logs with search query.
# @PRE: Service and session initialized, logs exist.
# @POST: Returns logs matching search query.
def test_get_logs_with_search(self):
"""Test retrieving logs with search query."""
# Add test logs
entries = [
LogEntry(
task_id="test-task-6",
timestamp=datetime.now(),
level="INFO",
source="api",
message="User authentication successful"
),
LogEntry(
task_id="test-task-6",
timestamp=datetime.now(),
level="ERROR",
source="api",
message="Failed to connect to database"
),
LogEntry(
task_id="test-task-6",
timestamp=datetime.now(),
level="INFO",
source="storage",
message="File saved successfully"
),
]
self.service.add_logs(entries)
# Test search for "authentication"
auth_logs = self.service.get_logs("test-task-6", search="authentication")
assert len(auth_logs) == 1
assert "authentication" in auth_logs[0].message.lower()
# Test search for "failed"
failed_logs = self.service.get_logs("test-task-6", search="failed")
assert len(failed_logs) == 1
assert "failed" in failed_logs[0].message.lower()
# [/DEF:test_get_logs_with_search:Function]
# [DEF:test_get_log_stats:Function]
# @PURPOSE: Test retrieving log statistics.
# @PRE: Service and session initialized, logs exist.
# @POST: Returns statistics grouped by level and source.
def test_get_log_stats(self):
"""Test retrieving log statistics."""
# Add test logs
entries = [
LogEntry(
task_id="test-task-7",
timestamp=datetime.now(),
level="INFO",
source="api",
message="Info 1"
),
LogEntry(
task_id="test-task-7",
timestamp=datetime.now(),
level="INFO",
source="api",
message="Info 2"
),
LogEntry(
task_id="test-task-7",
timestamp=datetime.now(),
level="WARNING",
source="api",
message="Warning 1"
),
LogEntry(
task_id="test-task-7",
timestamp=datetime.now(),
level="ERROR",
source="storage",
message="Error 1"
),
]
self.service.add_logs(entries)
# Get stats
stats = self.service.get_log_stats("test-task-7")
assert stats is not None
assert stats["by_level"]["INFO"] == 2
assert stats["by_level"]["WARNING"] == 1
assert stats["by_level"]["ERROR"] == 1
assert stats["by_source"]["api"] == 3
assert stats["by_source"]["storage"] == 1
# [/DEF:test_get_log_stats:Function]
# [DEF:test_get_log_sources:Function]
# @PURPOSE: Test retrieving unique log sources.
# @PRE: Service and session initialized, logs exist.
# @POST: Returns list of unique sources.
def test_get_log_sources(self):
"""Test retrieving unique log sources."""
# Add test logs
entries = [
LogEntry(
task_id="test-task-8",
timestamp=datetime.now(),
level="INFO",
source="api",
message="Message 1"
),
LogEntry(
task_id="test-task-8",
timestamp=datetime.now(),
level="INFO",
source="storage",
message="Message 2"
),
LogEntry(
task_id="test-task-8",
timestamp=datetime.now(),
level="INFO",
source="git",
message="Message 3"
),
]
self.service.add_logs(entries)
# Get sources
sources = self.service.get_log_sources("test-task-8")
assert len(sources) == 3
assert "api" in sources
assert "storage" in sources
assert "git" in sources
# [/DEF:test_get_log_sources:Function]
# [DEF:test_delete_logs_by_task_id:Function]
# @PURPOSE: Test deleting logs by task ID.
# @PRE: Service and session initialized, logs exist.
# @POST: Logs for the task are deleted.
def test_delete_logs_by_task_id(self):
"""Test deleting logs by task ID."""
# Add test logs
entries = [
LogEntry(
task_id="test-task-9",
timestamp=datetime.now(),
level="INFO",
source="test",
message=f"Message {i}"
)
for i in range(3)
]
self.service.add_logs(entries)
# Verify logs exist
logs_before = self.service.get_logs("test-task-9")
assert len(logs_before) == 3
# Delete logs
self.service.delete_logs("test-task-9")
# Verify logs are deleted
logs_after = self.service.get_logs("test-task-9")
assert len(logs_after) == 0
# [/DEF:test_delete_logs_by_task_id:Function]
# [/DEF:TestLogPersistence:Class]
# [/DEF:test_log_persistence:Module]

View File

@@ -1,14 +1,29 @@
import pytest
from src.core.logger import belief_scope, logger
from src.core.logger import (
belief_scope,
logger,
configure_logger,
get_task_log_level,
should_log_task_level
)
from src.core.config_models import LoggingConfig
# [DEF:test_belief_scope_logs_entry_action_exit:Function]
# @PURPOSE: Test that belief_scope generates [ID][Entry], [ID][Action], and [ID][Exit] logs.
# @PRE: belief_scope is available. caplog fixture is used.
# @POST: Logs are verified to contain Entry, Action, and Exit tags.
def test_belief_scope_logs_entry_action_exit(caplog):
"""Test that belief_scope generates [ID][Entry], [ID][Action], and [ID][Exit] logs."""
caplog.set_level("INFO")
# [DEF:test_belief_scope_logs_entry_action_exit_at_debug:Function]
# @PURPOSE: Test that belief_scope generates [ID][Entry], [ID][Action], and [ID][Exit] logs at DEBUG level.
# @PRE: belief_scope is available. caplog fixture is used. Logger configured to DEBUG.
# @POST: Logs are verified to contain Entry, Action, and Exit tags at DEBUG level.
def test_belief_scope_logs_entry_action_exit_at_debug(caplog):
"""Test that belief_scope generates [ID][Entry], [ID][Action], and [ID][Exit] logs at DEBUG level."""
# Configure logger to DEBUG level
config = LoggingConfig(
level="DEBUG",
task_log_level="DEBUG",
enable_belief_state=True
)
configure_logger(config)
caplog.set_level("DEBUG")
with belief_scope("TestFunction"):
logger.info("Doing something important")
@@ -19,16 +34,28 @@ def test_belief_scope_logs_entry_action_exit(caplog):
assert any("[TestFunction][Entry]" in msg for msg in log_messages), "Entry log not found"
assert any("[TestFunction][Action] Doing something important" in msg for msg in log_messages), "Action log not found"
assert any("[TestFunction][Exit]" in msg for msg in log_messages), "Exit log not found"
# [/DEF:test_belief_scope_logs_entry_action_exit:Function]
# Reset to INFO
config = LoggingConfig(level="INFO", task_log_level="INFO", enable_belief_state=True)
configure_logger(config)
# [/DEF:test_belief_scope_logs_entry_action_exit_at_debug:Function]
# [DEF:test_belief_scope_error_handling:Function]
# @PURPOSE: Test that belief_scope logs Coherence:Failed on exception.
# @PRE: belief_scope is available. caplog fixture is used.
# @PRE: belief_scope is available. caplog fixture is used. Logger configured to DEBUG.
# @POST: Logs are verified to contain Coherence:Failed tag.
def test_belief_scope_error_handling(caplog):
"""Test that belief_scope logs Coherence:Failed on exception."""
caplog.set_level("INFO")
# Configure logger to DEBUG level
config = LoggingConfig(
level="DEBUG",
task_log_level="DEBUG",
enable_belief_state=True
)
configure_logger(config)
caplog.set_level("DEBUG")
with pytest.raises(ValueError):
with belief_scope("FailingFunction"):
@@ -39,16 +66,28 @@ def test_belief_scope_error_handling(caplog):
assert any("[FailingFunction][Entry]" in msg for msg in log_messages), "Entry log not found"
assert any("[FailingFunction][Coherence:Failed]" in msg for msg in log_messages), "Failed coherence log not found"
# Exit should not be logged on failure
# Reset to INFO
config = LoggingConfig(level="INFO", task_log_level="INFO", enable_belief_state=True)
configure_logger(config)
# [/DEF:test_belief_scope_error_handling:Function]
# [DEF:test_belief_scope_success_coherence:Function]
# @PURPOSE: Test that belief_scope logs Coherence:OK on success.
# @PRE: belief_scope is available. caplog fixture is used.
# @PRE: belief_scope is available. caplog fixture is used. Logger configured to DEBUG.
# @POST: Logs are verified to contain Coherence:OK tag.
def test_belief_scope_success_coherence(caplog):
"""Test that belief_scope logs Coherence:OK on success."""
caplog.set_level("INFO")
# Configure logger to DEBUG level
config = LoggingConfig(
level="DEBUG",
task_log_level="DEBUG",
enable_belief_state=True
)
configure_logger(config)
caplog.set_level("DEBUG")
with belief_scope("SuccessFunction"):
pass
@@ -56,4 +95,119 @@ def test_belief_scope_success_coherence(caplog):
log_messages = [record.message for record in caplog.records]
assert any("[SuccessFunction][Coherence:OK]" in msg for msg in log_messages), "Success coherence log not found"
# [/DEF:test_belief_scope_success_coherence:Function]
# Reset to INFO
config = LoggingConfig(level="INFO", task_log_level="INFO", enable_belief_state=True)
configure_logger(config)
# [/DEF:test_belief_scope_success_coherence:Function]
# [DEF:test_belief_scope_not_visible_at_info:Function]
# @PURPOSE: Test that belief_scope Entry/Exit/Coherence logs are NOT visible at INFO level.
# @PRE: belief_scope is available. caplog fixture is used.
# @POST: Entry/Exit/Coherence logs are not captured at INFO level.
def test_belief_scope_not_visible_at_info(caplog):
"""Test that belief_scope Entry/Exit/Coherence logs are NOT visible at INFO level."""
caplog.set_level("INFO")
with belief_scope("InfoLevelFunction"):
logger.info("Doing something important")
log_messages = [record.message for record in caplog.records]
# Action log should be visible
assert any("[InfoLevelFunction][Action] Doing something important" in msg for msg in log_messages), "Action log not found"
# Entry/Exit/Coherence should NOT be visible at INFO level
assert not any("[InfoLevelFunction][Entry]" in msg for msg in log_messages), "Entry log should not be visible at INFO"
assert not any("[InfoLevelFunction][Exit]" in msg for msg in log_messages), "Exit log should not be visible at INFO"
assert not any("[InfoLevelFunction][Coherence:OK]" in msg for msg in log_messages), "Coherence log should not be visible at INFO"
# [/DEF:test_belief_scope_not_visible_at_info:Function]
# [DEF:test_task_log_level_default:Function]
# @PURPOSE: Test that default task log level is INFO.
# @PRE: None.
# @POST: Default level is INFO.
def test_task_log_level_default():
"""Test that default task log level is INFO."""
level = get_task_log_level()
assert level == "INFO"
# [/DEF:test_task_log_level_default:Function]
# [DEF:test_should_log_task_level:Function]
# @PURPOSE: Test that should_log_task_level correctly filters log levels.
# @PRE: None.
# @POST: Filtering works correctly for all level combinations.
def test_should_log_task_level():
"""Test that should_log_task_level correctly filters log levels."""
# Default level is INFO
assert should_log_task_level("ERROR") is True, "ERROR should be logged at INFO threshold"
assert should_log_task_level("WARNING") is True, "WARNING should be logged at INFO threshold"
assert should_log_task_level("INFO") is True, "INFO should be logged at INFO threshold"
assert should_log_task_level("DEBUG") is False, "DEBUG should NOT be logged at INFO threshold"
# [/DEF:test_should_log_task_level:Function]
# [DEF:test_configure_logger_task_log_level:Function]
# @PURPOSE: Test that configure_logger updates task_log_level.
# @PRE: LoggingConfig is available.
# @POST: task_log_level is updated correctly.
def test_configure_logger_task_log_level():
"""Test that configure_logger updates task_log_level."""
config = LoggingConfig(
level="DEBUG",
task_log_level="DEBUG",
enable_belief_state=True
)
configure_logger(config)
assert get_task_log_level() == "DEBUG", "task_log_level should be DEBUG"
assert should_log_task_level("DEBUG") is True, "DEBUG should be logged at DEBUG threshold"
# Reset to INFO
config = LoggingConfig(
level="INFO",
task_log_level="INFO",
enable_belief_state=True
)
configure_logger(config)
assert get_task_log_level() == "INFO", "task_log_level should be reset to INFO"
# [/DEF:test_configure_logger_task_log_level:Function]
# [DEF:test_enable_belief_state_flag:Function]
# @PURPOSE: Test that enable_belief_state flag controls belief_scope logging.
# @PRE: LoggingConfig is available. caplog fixture is used.
# @POST: belief_scope logs are controlled by the flag.
def test_enable_belief_state_flag(caplog):
"""Test that enable_belief_state flag controls belief_scope logging."""
# Disable belief state
config = LoggingConfig(
level="DEBUG",
task_log_level="DEBUG",
enable_belief_state=False
)
configure_logger(config)
caplog.set_level("DEBUG")
with belief_scope("DisabledFunction"):
logger.info("Doing something")
log_messages = [record.message for record in caplog.records]
# Entry and Exit should NOT be logged when disabled
assert not any("[DisabledFunction][Entry]" in msg for msg in log_messages), "Entry should not be logged when disabled"
assert not any("[DisabledFunction][Exit]" in msg for msg in log_messages), "Exit should not be logged when disabled"
# Coherence:OK should still be logged (internal tracking)
assert any("[DisabledFunction][Coherence:OK]" in msg for msg in log_messages), "Coherence should still be logged"
# Re-enable for other tests
config = LoggingConfig(
level="DEBUG",
task_log_level="DEBUG",
enable_belief_state=True
)
configure_logger(config)
# [/DEF:test_enable_belief_state_flag:Function]

View File

@@ -1,4 +1,3 @@
import pytest
from src.core.config_models import Environment
from src.core.logger import belief_scope

View File

@@ -0,0 +1,123 @@
import pytest
from fastapi.testclient import TestClient
from unittest.mock import MagicMock
from src.app import app
from src.dependencies import get_config_manager, get_task_manager, get_resource_service, has_permission
client = TestClient(app)
# [DEF:test_dashboards_api:Test]
# @PURPOSE: Verify GET /api/dashboards contract compliance
# @TEST: Valid env_id returns 200 and dashboard list
# @TEST: Invalid env_id returns 404
# @TEST: Search filter works
@pytest.fixture
def mock_deps():
config_manager = MagicMock()
task_manager = MagicMock()
resource_service = MagicMock()
# Mock environment
env = MagicMock()
env.id = "env1"
config_manager.get_environments.return_value = [env]
# Mock tasks
task_manager.get_all_tasks.return_value = []
# Mock dashboards
resource_service.get_dashboards_with_status.return_value = [
{"id": 1, "title": "Sales", "slug": "sales", "git_status": {"branch": "main", "sync_status": "OK"}, "last_task": None},
{"id": 2, "title": "Marketing", "slug": "mkt", "git_status": None, "last_task": {"task_id": "t1", "status": "SUCCESS"}}
]
app.dependency_overrides[get_config_manager] = lambda: config_manager
app.dependency_overrides[get_task_manager] = lambda: task_manager
app.dependency_overrides[get_resource_service] = lambda: resource_service
# Bypass permission check
mock_user = MagicMock()
mock_user.username = "testadmin"
# Override both get_current_user and has_permission
from src.dependencies import get_current_user
app.dependency_overrides[get_current_user] = lambda: mock_user
# We need to override the specific instance returned by has_permission
app.dependency_overrides[has_permission("plugin:migration", "READ")] = lambda: mock_user
yield {
"config": config_manager,
"task": task_manager,
"resource": resource_service
}
app.dependency_overrides.clear()
def test_get_dashboards_success(mock_deps):
response = client.get("/api/dashboards?env_id=env1")
assert response.status_code == 200
data = response.json()
assert "dashboards" in data
assert len(data["dashboards"]) == 2
assert data["dashboards"][0]["title"] == "Sales"
assert data["dashboards"][0]["git_status"]["sync_status"] == "OK"
def test_get_dashboards_not_found(mock_deps):
response = client.get("/api/dashboards?env_id=invalid")
assert response.status_code == 404
def test_get_dashboards_search(mock_deps):
response = client.get("/api/dashboards?env_id=env1&search=Sales")
assert response.status_code == 200
data = response.json()
assert len(data["dashboards"]) == 1
assert data["dashboards"][0]["title"] == "Sales"
# [/DEF:test_dashboards_api:Test]
# [DEF:test_datasets_api:Test]
# @PURPOSE: Verify GET /api/datasets contract compliance
# @TEST: Valid env_id returns 200 and dataset list
# @TEST: Invalid env_id returns 404
# @TEST: Search filter works
# @TEST: Negative - Service failure returns 503
def test_get_datasets_success(mock_deps):
mock_deps["resource"].get_datasets_with_status.return_value = [
{"id": 1, "table_name": "orders", "schema": "public", "database": "db1", "mapped_fields": {"total": 10, "mapped": 5}, "last_task": None}
]
response = client.get("/api/datasets?env_id=env1")
assert response.status_code == 200
data = response.json()
assert "datasets" in data
assert len(data["datasets"]) == 1
assert data["datasets"][0]["table_name"] == "orders"
assert data["datasets"][0]["mapped_fields"]["mapped"] == 5
def test_get_datasets_not_found(mock_deps):
response = client.get("/api/datasets?env_id=invalid")
assert response.status_code == 404
def test_get_datasets_search(mock_deps):
mock_deps["resource"].get_datasets_with_status.return_value = [
{"id": 1, "table_name": "orders", "schema": "public", "database": "db1", "mapped_fields": {"total": 10, "mapped": 5}, "last_task": None},
{"id": 2, "table_name": "users", "schema": "public", "database": "db1", "mapped_fields": {"total": 5, "mapped": 5}, "last_task": None}
]
response = client.get("/api/datasets?env_id=env1&search=orders")
assert response.status_code == 200
data = response.json()
assert len(data["datasets"]) == 1
assert data["datasets"][0]["table_name"] == "orders"
def test_get_datasets_service_failure(mock_deps):
mock_deps["resource"].get_datasets_with_status.side_effect = Exception("Superset down")
response = client.get("/api/datasets?env_id=env1")
assert response.status_code == 503
assert "Failed to fetch datasets" in response.json()["detail"]
# [/DEF:test_datasets_api:Test]

View File

@@ -0,0 +1,374 @@
# [DEF:test_task_logger:Module]
# @SEMANTICS: test, task_logger, task_context, unit_test
# @PURPOSE: Unit tests for TaskLogger and TaskContext.
# @LAYER: Test
# @RELATION: TESTS -> TaskLogger, TaskContext
# @TIER: STANDARD
# [SECTION: IMPORTS]
from unittest.mock import Mock
from src.core.task_manager.task_logger import TaskLogger
from src.core.task_manager.context import TaskContext
# [/SECTION]
# [DEF:TestTaskLogger:Class]
# @PURPOSE: Test suite for TaskLogger.
# @TIER: STANDARD
class TestTaskLogger:
# [DEF:setup_method:Function]
# @PURPOSE: Setup for each test method.
# @PRE: None.
# @POST: Mock add_log_fn created.
def setup_method(self):
"""Create a mock add_log function for testing."""
self.mock_add_log = Mock()
self.logger = TaskLogger(
task_id="test-task-1",
add_log_fn=self.mock_add_log,
source="test_source"
)
# [/DEF:setup_method:Function]
# [DEF:test_init:Function]
# @PURPOSE: Test TaskLogger initialization.
# @PRE: None.
# @POST: Logger instance created with correct attributes.
def test_init(self):
"""Test TaskLogger initialization."""
assert self.logger._task_id == "test-task-1"
assert self.logger._default_source == "test_source"
assert self.logger._add_log == self.mock_add_log
# [/DEF:test_init:Function]
# [DEF:test_with_source:Function]
# @PURPOSE: Test creating a sub-logger with different source.
# @PRE: Logger initialized.
# @POST: New logger created with different source but same task_id.
def test_with_source(self):
"""Test creating a sub-logger with different source."""
sub_logger = self.logger.with_source("new_source")
assert sub_logger._task_id == "test-task-1"
assert sub_logger._default_source == "new_source"
assert sub_logger._add_log == self.mock_add_log
# [/DEF:test_with_source:Function]
# [DEF:test_debug:Function]
# @PURPOSE: Test debug log level.
# @PRE: Logger initialized.
# @POST: add_log_fn called with DEBUG level.
def test_debug(self):
"""Test debug logging."""
self.logger.debug("Debug message")
self.mock_add_log.assert_called_once_with(
task_id="test-task-1",
level="DEBUG",
message="Debug message",
source="test_source",
metadata=None
)
# [/DEF:test_debug:Function]
# [DEF:test_info:Function]
# @PURPOSE: Test info log level.
# @PRE: Logger initialized.
# @POST: add_log_fn called with INFO level.
def test_info(self):
"""Test info logging."""
self.logger.info("Info message")
self.mock_add_log.assert_called_once_with(
task_id="test-task-1",
level="INFO",
message="Info message",
source="test_source",
metadata=None
)
# [/DEF:test_info:Function]
# [DEF:test_warning:Function]
# @PURPOSE: Test warning log level.
# @PRE: Logger initialized.
# @POST: add_log_fn called with WARNING level.
def test_warning(self):
"""Test warning logging."""
self.logger.warning("Warning message")
self.mock_add_log.assert_called_once_with(
task_id="test-task-1",
level="WARNING",
message="Warning message",
source="test_source",
metadata=None
)
# [/DEF:test_warning:Function]
# [DEF:test_error:Function]
# @PURPOSE: Test error log level.
# @PRE: Logger initialized.
# @POST: add_log_fn called with ERROR level.
def test_error(self):
"""Test error logging."""
self.logger.error("Error message")
self.mock_add_log.assert_called_once_with(
task_id="test-task-1",
level="ERROR",
message="Error message",
source="test_source",
metadata=None
)
# [/DEF:test_error:Function]
# [DEF:test_error_with_metadata:Function]
# @PURPOSE: Test error logging with metadata.
# @PRE: Logger initialized.
# @POST: add_log_fn called with ERROR level and metadata.
def test_error_with_metadata(self):
"""Test error logging with metadata."""
metadata = {"error_code": 500, "details": "Connection failed"}
self.logger.error("Error message", metadata=metadata)
self.mock_add_log.assert_called_once_with(
task_id="test-task-1",
level="ERROR",
message="Error message",
source="test_source",
metadata=metadata
)
# [/DEF:test_error_with_metadata:Function]
# [DEF:test_progress:Function]
# @PURPOSE: Test progress logging.
# @PRE: Logger initialized.
# @POST: add_log_fn called with INFO level and progress metadata.
def test_progress(self):
"""Test progress logging."""
self.logger.progress("Processing items", percent=50)
expected_metadata = {"progress": 50}
self.mock_add_log.assert_called_once_with(
task_id="test-task-1",
level="INFO",
message="Processing items",
source="test_source",
metadata=expected_metadata
)
# [/DEF:test_progress:Function]
# [DEF:test_progress_clamping:Function]
# @PURPOSE: Test progress value clamping (0-100).
# @PRE: Logger initialized.
# @POST: Progress values clamped to 0-100 range.
def test_progress_clamping(self):
"""Test progress value clamping."""
# Test below 0
self.logger.progress("Below 0", percent=-10)
call1 = self.mock_add_log.call_args_list[0]
assert call1.kwargs["metadata"]["progress"] == 0
self.mock_add_log.reset_mock()
# Test above 100
self.logger.progress("Above 100", percent=150)
call2 = self.mock_add_log.call_args_list[0]
assert call2.kwargs["metadata"]["progress"] == 100
# [/DEF:test_progress_clamping:Function]
# [DEF:test_source_override:Function]
# @PURPOSE: Test overriding the default source.
# @PRE: Logger initialized.
# @POST: add_log_fn called with overridden source.
def test_source_override(self):
"""Test overriding the default source."""
self.logger.info("Message", source="override_source")
self.mock_add_log.assert_called_once_with(
task_id="test-task-1",
level="INFO",
message="Message",
source="override_source",
metadata=None
)
# [/DEF:test_source_override:Function]
# [DEF:test_sub_logger_source_independence:Function]
# @PURPOSE: Test sub-logger independence from parent.
# @PRE: Logger and sub-logger initialized.
# @POST: Sub-logger has different source, parent unchanged.
def test_sub_logger_source_independence(self):
"""Test sub-logger source independence from parent."""
sub_logger = self.logger.with_source("sub_source")
# Log with parent
self.logger.info("Parent message")
# Log with sub-logger
sub_logger.info("Sub message")
# Verify both calls were made with correct sources
calls = self.mock_add_log.call_args_list
assert len(calls) == 2
assert calls[0].kwargs["source"] == "test_source"
assert calls[1].kwargs["source"] == "sub_source"
# [/DEF:test_sub_logger_source_independence:Function]
# [/DEF:TestTaskLogger:Class]
# [DEF:TestTaskContext:Class]
# @PURPOSE: Test suite for TaskContext.
# @TIER: STANDARD
class TestTaskContext:
# [DEF:setup_method:Function]
# @PURPOSE: Setup for each test method.
# @PRE: None.
# @POST: Mock add_log_fn created.
def setup_method(self):
"""Create a mock add_log function for testing."""
self.mock_add_log = Mock()
self.params = {"param1": "value1", "param2": "value2"}
self.context = TaskContext(
task_id="test-task-2",
add_log_fn=self.mock_add_log,
params=self.params,
default_source="plugin"
)
# [/DEF:setup_method:Function]
# [DEF:test_init:Function]
# @PURPOSE: Test TaskContext initialization.
# @PRE: None.
# @POST: Context instance created with correct attributes.
def test_init(self):
"""Test TaskContext initialization."""
assert self.context._task_id == "test-task-2"
assert self.context._params == self.params
assert isinstance(self.context._logger, TaskLogger)
assert self.context._logger._default_source == "plugin"
# [/DEF:test_init:Function]
# [DEF:test_task_id_property:Function]
# @PURPOSE: Test task_id property.
# @PRE: Context initialized.
# @POST: Returns correct task_id.
def test_task_id_property(self):
"""Test task_id property."""
assert self.context.task_id == "test-task-2"
# [/DEF:test_task_id_property:Function]
# [DEF:test_logger_property:Function]
# @PURPOSE: Test logger property.
# @PRE: Context initialized.
# @POST: Returns TaskLogger instance.
def test_logger_property(self):
"""Test logger property."""
logger = self.context.logger
assert isinstance(logger, TaskLogger)
assert logger._task_id == "test-task-2"
assert logger._default_source == "plugin"
# [/DEF:test_logger_property:Function]
# [DEF:test_params_property:Function]
# @PURPOSE: Test params property.
# @PRE: Context initialized.
# @POST: Returns correct params dict.
def test_params_property(self):
"""Test params property."""
assert self.context.params == self.params
# [/DEF:test_params_property:Function]
# [DEF:test_get_param:Function]
# @PURPOSE: Test getting a specific parameter.
# @PRE: Context initialized with params.
# @POST: Returns parameter value or default.
def test_get_param(self):
"""Test getting a specific parameter."""
assert self.context.get_param("param1") == "value1"
assert self.context.get_param("param2") == "value2"
assert self.context.get_param("nonexistent") is None
assert self.context.get_param("nonexistent", "default") == "default"
# [/DEF:test_get_param:Function]
# [DEF:test_create_sub_context:Function]
# @PURPOSE: Test creating a sub-context with different source.
# @PRE: Context initialized.
# @POST: New context created with different logger source.
def test_create_sub_context(self):
"""Test creating a sub-context with different source."""
sub_context = self.context.create_sub_context("new_source")
assert sub_context._task_id == "test-task-2"
assert sub_context._params == self.params
assert sub_context._logger._default_source == "new_source"
assert sub_context._logger._task_id == "test-task-2"
# [/DEF:test_create_sub_context:Function]
# [DEF:test_context_logger_delegates_to_task_logger:Function]
# @PURPOSE: Test context logger delegates to TaskLogger.
# @PRE: Context initialized.
# @POST: Logger calls are delegated to TaskLogger.
def test_context_logger_delegates_to_task_logger(self):
"""Test context logger delegates to TaskLogger."""
# Call through context
self.context.logger.info("Test message")
# Verify the mock was called
self.mock_add_log.assert_called_once_with(
task_id="test-task-2",
level="INFO",
message="Test message",
source="plugin",
metadata=None
)
# [/DEF:test_context_logger_delegates_to_task_logger:Function]
# [DEF:test_sub_context_with_source:Function]
# @PURPOSE: Test sub-context logger uses new source.
# @PRE: Context initialized.
# @POST: Sub-context logger uses new source.
def test_sub_context_with_source(self):
"""Test sub-context logger uses new source."""
sub_context = self.context.create_sub_context("api_source")
# Log through sub-context
sub_context.logger.info("API message")
# Verify the mock was called with new source
self.mock_add_log.assert_called_once_with(
task_id="test-task-2",
level="INFO",
message="API message",
source="api_source",
metadata=None
)
# [/DEF:test_sub_context_with_source:Function]
# [DEF:test_multiple_sub_contexts:Function]
# @PURPOSE: Test creating multiple sub-contexts.
# @PRE: Context initialized.
# @POST: Each sub-context has independent logger source.
def test_multiple_sub_contexts(self):
"""Test creating multiple sub-contexts."""
sub1 = self.context.create_sub_context("source1")
sub2 = self.context.create_sub_context("source2")
sub3 = self.context.create_sub_context("source3")
assert sub1._logger._default_source == "source1"
assert sub2._logger._default_source == "source2"
assert sub3._logger._default_source == "source3"
# All should have same task_id and params
assert sub1._task_id == "test-task-2"
assert sub2._task_id == "test-task-2"
assert sub3._task_id == "test-task-2"
assert sub1._params == self.params
assert sub2._params == self.params
assert sub3._params == self.params
# [/DEF:test_multiple_sub_contexts:Function]
# [/DEF:TestTaskContext:Class]
# [/DEF:test_task_logger:Module]

View File

@@ -0,0 +1,145 @@
# Design Document: Resource-Centric UI & Unified Task Experience
## 1. Core Philosophy
The application moves from a **Task-Centric** model (where users navigate to "Migration Tool" or "Git Tool") to a **Resource-Centric** model. Users navigate to the object they want to manage (Dashboard, Dataset) and perform actions on it.
**Goals:**
1. **Context preservation:** Users shouldn't lose their place in a list just to see a log.
2. **Discoverability:** All actions available for a resource are grouped together.
3. **Traceability:** Every action is explicitly linked to a Task ID with accessible logs.
---
## 2. Navigation Structure (Navbar)
**Old Menu:**
`[Home] [Migration] [Git Manager] [Mapper] [Settings] [Logout]`
**New Menu:**
`[Superset Manager] [Dashboards] [Datasets] [Storage] | [Activity (0)] [Settings] [User]`
* **Dashboards**: Main hub for all dashboard operations (Migrate, Backup, Git).
* **Datasets**: Hub for dataset documentation and mapping.
* **Storage**: File management (Backups, Repositories).
* **Activity**: Global indicator of running tasks. Clicking it opens the Task Drawer.
---
## 3. Page Layouts
### 3.1. Dashboard Hub (`/dashboards`)
The central place for managing Superset Dashboards.
**Wireframe:**
```text
+-----------------------------------------------------------------------+
| Select Source Env: [ Development (v) ] [ Refresh ] |
+-----------------------------------------------------------------------+
| Search: [ Filter by title... ] |
+-----------------------------------------------------------------------+
| Title | Slug | Git Status | Last Task | Actions |
|------------------|-------------|---------------|-----------|----------|
| Sales Report | sales-2023 | 🌿 main (OK) | (v) Done | [ ... ] |
| HR Analytics | hr-dash | - | ( ) Idle | [ ... ] |
| Logs Monitor | logs-v2 | 🌿 dev (Diff) | (@) Run.. | [ ... ] |
+-----------------------------------------------------------------------+
```
**Interaction Details:**
1. **Source Env Selector**: Loads dashboards via `superset_client.get_dashboards`.
2. **Status Column ("Last Task")**:
* Shows the status of the *last known action* for this dashboard in the current session.
* **States**: `Idle`, `Running` (Spinner), `Waiting Input` (Orange Key), `Success` (Green Check), `Error` (Red X).
* **Click Action**: Clicking the icon/badge opens the **Task Drawer**.
3. **Actions Menu ([ ... ])**:
* **Migrate**: Opens `DeploymentModal` (Simplified: just Target Env selector).
* **Backup**: Immediately triggers `BackupPlugin`.
* **Git Operations**:
* *Init Repo* (if Git Status is empty).
* *Commit/Push*, *History*, *Checkout* (if Git initialized).
* **Validate**: Triggers LLM Analysis.
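Below is a minimal sketch of the backend route the Dashboard Hub grid is expected to call, assuming the contract exercised by the `test_dashboards_api` tests in this change set (environment lookup via the config manager, listing via `get_dashboards_with_status`, 404 for an unknown `env_id`, optional title search). Dependency and method names mirror the mocks in those tests; exact signatures in the real codebase may differ.
```python
# Hypothetical sketch of GET /api/dashboards, mirroring the mocked contract in the tests.
from typing import Optional
from fastapi import APIRouter, Depends, HTTPException, Query
from src.dependencies import get_config_manager, get_resource_service  # assumed import path

router = APIRouter()

@router.get("/api/dashboards")
def get_dashboards(
    env_id: str = Query(...),
    search: Optional[str] = Query(None),
    config_manager=Depends(get_config_manager),
    resource_service=Depends(get_resource_service),
):
    # Unknown environment -> 404, as asserted by test_get_dashboards_not_found
    env = next((e for e in config_manager.get_environments() if e.id == env_id), None)
    if env is None:
        raise HTTPException(status_code=404, detail="Environment not found")
    # Dashboards enriched with git status and the last known task
    dashboards = resource_service.get_dashboards_with_status(env_id)
    # Optional title filter backing the search box in the hub grid
    if search:
        dashboards = [d for d in dashboards if search.lower() in d["title"].lower()]
    return {"dashboards": dashboards}
```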
### 3.2. Dataset Hub (`/datasets`)
The central place for managing physical datasets and semantic layers.
**Wireframe:**
```text
+-----------------------------------------------------------------------+
| Select Source Env: [ Production (v) ] |
+-----------------------------------------------------------------------+
| Table Name | Schema | Mapped Fields | Last Task | Actions |
|------------------|-------------|---------------|-----------|----------|
| fact_orders | public | 15 / 20 | (v) Done | [ ... ] |
| dim_users | auth | 0 / 5 | ( ) Idle | [ ... ] |
+-----------------------------------------------------------------------+
```
**Actions Menu ([ ... ])**:
* **Map Columns**: Opens the Mapping Modal (replaces `MapperPage`).
* **Generate Docs**: Triggers `DocumentationPlugin`.
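A companion sketch for the Dataset Hub, assuming the contract in the `test_datasets_api` tests in this change set: a valid `env_id` returns the dataset list, an unknown `env_id` returns 404, a `search` parameter filters by table name, and a resource-service failure surfaces as 503 with a "Failed to fetch datasets" detail. As above, dependency and method names follow the test mocks and are not guaranteed to match the final implementation.
```python
# Hypothetical sketch of GET /api/datasets, mirroring the contract tests.
from typing import Optional
from fastapi import APIRouter, Depends, HTTPException, Query
from src.dependencies import get_config_manager, get_resource_service  # assumed import path

router = APIRouter()

@router.get("/api/datasets")
def get_datasets(
    env_id: str = Query(...),
    search: Optional[str] = Query(None),
    config_manager=Depends(get_config_manager),
    resource_service=Depends(get_resource_service),
):
    env = next((e for e in config_manager.get_environments() if e.id == env_id), None)
    if env is None:
        raise HTTPException(status_code=404, detail="Environment not found")
    try:
        datasets = resource_service.get_datasets_with_status(env_id)
    except Exception as exc:
        # Upstream Superset failure -> 503, per test_get_datasets_service_failure
        raise HTTPException(status_code=503, detail=f"Failed to fetch datasets: {exc}")
    if search:
        datasets = [d for d in datasets if search.lower() in d["table_name"].lower()]
    return {"datasets": datasets}
```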
---
## 4. The Global Task Drawer
**Concept:** A slide-out panel that overlays the right side of the screen. It is mounted once in the global layout, so it persists in the DOM but stays hidden until triggered.
**Trigger Points:**
1. Clicking a **Status Badge** in any Grid row (Dashboard or Dataset).
2. Clicking the **Activity** indicator in the Navbar.
**Layout:**
```text
+---------------------------------------------------------------+
| Task: Migration "Sales Report" [X] Close |
| ID: 1234-5678-uuid |
| Status: WAITING_INPUT (Paused) |
+---------------------------------------------------------------+
| |
| [Log Stream Area] |
| 10:00:01 [INFO] Starting migration... |
| 10:00:02 [INFO] Exporting dashboard... |
| 10:00:05 [WARN] Target DB requires password! |
| |
+---------------------------------------------------------------+
| INTERACTIVE AREA (Dynamic) |
| |
| Target Database: "Production DB" |
| Enter Password: [ ********** ] |
| |
| [ Cancel ] [ Resume Task ] |
+---------------------------------------------------------------+
```
**Behavior:**
* **Context Aware**: When the user triggers a migration on "Sales Report", the Drawer automatically opens and subscribes to that task's ID.
* **Multi-Tasking**: The user can close the drawer (click [X]) and let the task run in the background; the "Activity" badge in the navbar increments.
* **Input Handling**: Components like `PasswordPrompt` or `MissingMappingModal` are no longer center-screen modals blocking the whole UI. They are rendered *inside* the Interactive Area of the Drawer.
---
## 5. Technical Component Architecture
### 5.1. Stores (`stores/tasks.js`)
`stores/tasks.js` needs a new reactive store structure that maps resources to their tasks.
```javascript
// Map resource UUIDs to their active/latest task UUIDs
export const resourceTaskMap = writable({
"dashboard-uuid-1": { taskId: "task-uuid-A", status: "RUNNING" },
"dataset-uuid-2": { taskId: "task-uuid-B", status: "SUCCESS" }
});
// The currently focused task in the Drawer
export const activeDrawerTask = writable(null); // { taskId: "..." }
export const isDrawerOpen = writable(false);
```
### 5.2. Components
* `DashboardHub.svelte`: Main page.
* `DatasetHub.svelte`: Main page.
* `GlobalTaskDrawer.svelte`: Lives in `+layout.svelte`. Connects to `activeDrawerTask`.
* `ActionMenu.svelte`: Reusable dropdown for grids.

Some files were not shown because too many files have changed in this diff.