Compare commits

..

15 Commits

Author SHA1 Message Date
4a0273a604 changed spec tasks 2026-02-10 15:53:38 +03:00
edb2dd5263 updated tasks 2026-02-10 15:04:43 +03:00
76b98fcf8f linter + new tasks 2026-02-10 12:53:01 +03:00
794cc55fe7 Tasks ready 2026-02-09 12:35:27 +03:00
235b0e3c9f semantic update 2026-02-08 22:53:54 +03:00
e6087bd3c1 tasks ready 2026-02-07 12:42:32 +03:00
0f16bab2b8 Seems to work 2026-02-07 11:26:06 +03:00
7de96c17c4 feat(llm-plugin): switch to environment API for log retrieval
- Replace local backend.log reading with Superset API /log/ fetch
- Update DashboardValidationPlugin to use SupersetClient
- Filter logs by dashboard_id and last 24 hours
- Update spec FR-006 to reflect API usage
2026-02-06 17:57:25 +03:00
f018b97ed2 Semantic protocol update - add UX 2026-01-30 18:53:52 +03:00
72846aa835 tasks ux-reference 2026-01-30 13:35:03 +03:00
994c0c3e5d feat(speckit): integrate ux reference into workflows
Introduce a UX reference stage to ensure technical plans align with
user experience goals. Adds a new template, a generation step in the
specification workflow, and mandatory validation checks during
planning to prevent technical compromises from degrading the defined
user experience.
2026-01-30 12:31:19 +03:00
252a8601a9 Seems to work 2026-01-30 11:10:16 +03:00
8044f85ea4 tasks and workflow updated 2026-01-29 10:06:28 +03:00
d4109e5a03 docs: amend constitution to v2.0.0 (delegate semantics to protocol + add async/testability principles) 2026-01-28 18:48:43 +03:00
b2bbd73439 tasks ready 2026-01-28 18:30:23 +03:00
168 changed files with 78327 additions and 62063 deletions


@@ -32,6 +32,9 @@ Auto-generated from all feature plans. Last updated: 2025-12-19
 - Python 3.9+ (Backend), Node.js 18+ (Frontend) + FastAPI (Backend), SvelteKit + Tailwind CSS (Frontend) (015-frontend-nav-redesign)
 - N/A (UI reorganization and API integration) (015-frontend-nav-redesign)
 - SQLite (`auth.db`) for Users, Roles, Permissions, and Mappings. (016-multi-user-auth)
+- SQLite (existing `tasks.db` for results, `auth.db` for permissions, `mappings.db` or new `plugins.db` for provider config/metadata) (017-llm-analysis-plugin)
+- Python 3.9+ (Backend), Node.js 18+ (Frontend) + FastAPI, SvelteKit, Tailwind CSS, SQLAlchemy, WebSocket (existing) (019-superset-ux-redesign)
+- SQLite (tasks.db, auth.db, migrations.db) - no new database tables required (019-superset-ux-redesign)
 - Python 3.9+ (Backend), Node.js 18+ (Frontend Build) (001-plugin-arch-svelte-ui)
@@ -52,9 +55,9 @@ cd src; pytest; ruff check .
 Python 3.9+ (Backend), Node.js 18+ (Frontend Build): Follow standard conventions
 ## Recent Changes
+- 019-superset-ux-redesign: Added Python 3.9+ (Backend), Node.js 18+ (Frontend) + FastAPI, SvelteKit, Tailwind CSS, SQLAlchemy, WebSocket (existing)
+- 017-llm-analysis-plugin: Added Python 3.9+ (Backend), Node.js 18+ (Frontend)
 - 016-multi-user-auth: Added Python 3.9+ (Backend), Node.js 18+ (Frontend)
-- 015-frontend-nav-redesign: Added Python 3.9+ (Backend), Node.js 18+ (Frontend) + FastAPI (Backend), SvelteKit + Tailwind CSS (Frontend)
-- 014-file-storage-ui: Added Python 3.9+ (Backend), Node.js 18+ (Frontend) + FastAPI (Backend), SvelteKit (Frontend)
 <!-- MANUAL ADDITIONS START -->


@@ -63,6 +63,7 @@ Load only the minimal necessary context from each artifact:
 **From constitution:**
 - Load `.specify/memory/constitution.md` for principle validation
+- Load `semantic_protocol.md` for technical standard validation
 ### 3. Build Semantic Models


@@ -1,5 +1,10 @@
 ---
 description: Execute the implementation plan by processing and executing all tasks defined in tasks.md
+handoffs:
+  - label: Verify Changes
+    agent: speckit.test
+    prompt: Verify the implementation of...
+    send: true
 ---
 ## User Input
@@ -46,6 +51,7 @@ You **MUST** consider the user input before proceeding (if not empty).
 - Automatically proceed to step 3
 3. Load and analyze the implementation context:
+- **REQUIRED**: Read `semantic_protocol.md` for strict coding standards and contract requirements
 - **REQUIRED**: Read tasks.md for the complete task list and execution plan
 - **REQUIRED**: Read plan.md for tech stack, architecture, and file structure
 - **IF EXISTS**: Read data-model.md for entities and relationships
@@ -111,6 +117,8 @@ You **MUST** consider the user input before proceeding (if not empty).
 - **Validation checkpoints**: Verify each phase completion before proceeding
 7. Implementation execution rules:
+- **Strict Adherence**: Apply `semantic_protocol.md` rules - every file must start with a [DEF] header, include @TIER, and define contracts.
+- **CRITICAL Contracts**: If a task description contains a contract summary (e.g., `CRITICAL: PRE: ..., POST: ...`), these constraints are **MANDATORY** and must be strictly implemented in the code using guards/assertions (if applicable per protocol).
 - **Setup first**: Initialize project structure, dependencies, configuration
 - **Tests before code**: If tests are requested, write them for contracts, entities, and integration scenarios
 - **Core development**: Implement models, services, CLI commands, endpoints
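As an illustration of the guard rule above, a CRITICAL-tier module might enforce its contract summary like this. This is a sketch only: the header comment syntax, tier names, and the `authenticate`/`User` example are assumptions for illustration, not taken from `semantic_protocol.md` itself.

```python
# [DEF:auth_service:Module]          (hypothetical anchor syntax)
# @TIER: CRITICAL
# Contract for authenticate():
#   @PRE:  token is a non-empty string
#   @POST: returns a User with a populated id

from dataclasses import dataclass


@dataclass
class User:
    id: int
    name: str


def authenticate(token: str) -> User:
    # @PRE guard: fail fast instead of propagating bad state downstream.
    assert isinstance(token, str) and token, "PRE violated: token must be a non-empty string"

    # Illustrative lookup; a real implementation would verify the token.
    user = User(id=1, name="demo")

    # @POST guard: the contract promises a User with a populated id.
    assert user.id is not None, "POST violated: returned User must have an id"
    return user
```

The point of the pattern is that a violated `CRITICAL: PRE: ..., POST: ...` summary fails loudly at the boundary rather than surfacing as a distant bug.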


@@ -22,7 +22,7 @@ You **MUST** consider the user input before proceeding (if not empty).
 1. **Setup**: Run `.specify/scripts/bash/setup-plan.sh --json` from repo root and parse JSON for FEATURE_SPEC, IMPL_PLAN, SPECS_DIR, BRANCH. For single quotes in args like "I'm Groot", use escape syntax: e.g. 'I'\''m Groot' (or double-quote if possible: "I'm Groot").
-2. **Load context**: Read FEATURE_SPEC and `.specify/memory/constitution.md`. Load IMPL_PLAN template (already copied).
+2. **Load context**: Read FEATURE_SPEC, `ux_reference.md`, `semantic_protocol.md`, and `.specify/memory/constitution.md`. Load IMPL_PLAN template (already copied).
 3. **Execute plan workflow**: Follow the structure in IMPL_PLAN template to:
 - Fill Technical Context (mark unknowns as "NEEDS CLARIFICATION")
@@ -64,17 +64,32 @@ You **MUST** consider the user input before proceeding (if not empty).
 **Prerequisites:** `research.md` complete
+0. **Validate Design against UX Reference**:
+   - Check if the proposed architecture supports the latency, interactivity, and flow defined in `ux_reference.md`.
+   - **Linkage**: Ensure key UI states from `ux_reference.md` map to Component Contracts (`@UX_STATE`).
+   - **CRITICAL**: If the technical plan compromises the UX (e.g. "We can't do real-time validation"), you **MUST STOP** and warn the user.
 1. **Extract entities from feature spec** → `data-model.md`:
-   - Entity name, fields, relationships
-   - Validation rules from requirements
-   - State transitions if applicable
+   - Entity name, fields, relationships, validation rules.
-2. **Generate API contracts** from functional requirements:
-   - For each user action → endpoint
-   - Use standard REST/GraphQL patterns
-   - Output OpenAPI/GraphQL schema to `/contracts/`
+2. **Design & Verify Contracts (Semantic Protocol)**:
+   - **Drafting**: Define [DEF] Headers and Contracts for all new modules based on `semantic_protocol.md`.
+   - **TIER Classification**: Explicitly assign `@TIER: [CRITICAL|STANDARD|TRIVIAL]` to each module.
+   - **CRITICAL Requirements**: For all CRITICAL modules, define full `@PRE`, `@POST`, and (if UI) `@UX_STATE` contracts.
+   - **Self-Review**:
+     - *Completeness*: Do `@PRE`/`@POST` cover edge cases identified in Research?
+     - *Connectivity*: Do `@RELATION` tags form a coherent graph?
+     - *Compliance*: Does syntax match `[DEF:id:Type]` exactly?
+   - **Output**: Write verified contracts to `contracts/modules.md`.
-3. **Agent context update**:
+3. **Simulate Contract Usage**:
+   - Trace one key user scenario through the defined contracts to ensure data flow continuity.
+   - If a contract interface mismatch is found, fix it immediately.
+4. **Generate API contracts**:
+   - Output OpenAPI/GraphQL schema to `/contracts/` for backend-frontend sync.
+5. **Agent context update**:
 - Run `.specify/scripts/bash/update-agent-context.sh kilocode`
 - These scripts detect which AI agent is in use
 - Update the appropriate agent-specific context file
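For illustration, a `contracts/modules.md` entry drafted in step 2 might look like the following. This is a hypothetical sketch: only the `[DEF:id:Type]` shape and the `@TIER`, `@PRE`, `@POST`, `@UX_STATE`, and `@RELATION` tag names come from the text above; the field values and relation vocabulary are invented.

```markdown
[DEF:order_validation:Service]
@TIER: CRITICAL
@PRE: cart contains at least one item; user session is authenticated
@POST: returns a ValidationResult with per-item status; the cart is not mutated
@UX_STATE: loading -> validated | error (maps to the "Checkout review" state in ux_reference.md)
@RELATION: DEPENDS_ON -> [DEF:cart_store:Model]
@RELATION: VERIFIED_BY -> [DEF:test_order_validation:Test]
```

The self-review bullets then have something concrete to check: edge-case coverage in `@PRE`/`@POST`, graph coherence of the `@RELATION` lines, and exact `[DEF:id:Type]` syntax.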


@@ -70,7 +70,22 @@ Given that feature description, do this:
 3. Load `.specify/templates/spec-template.md` to understand required sections.
-4. Follow this execution flow:
+4. **Generate UX Reference**:
+   a. Load `.specify/templates/ux-reference-template.md`.
+   b. **Design the User Experience**:
+      - **Imagine you are the user**: Visualize the interface and interaction.
+      - **Persona**: Define who is using this.
+      - **Happy Path**: Write the story of the perfect interaction.
+      - **Mockups**: Create concrete CLI text blocks or UI descriptions.
+      - **Errors**: Define how the system guides the user out of failure.
+   c. Write the `ux_reference.md` file in the feature directory.
+   d. **CRITICAL**: This UX Reference is now the source of truth for the "feel" of the feature. The technical spec MUST support this experience.
+5. Follow this execution flow:
 1. Parse user description from Input
    If empty: ERROR "No feature description provided"
@@ -115,6 +130,12 @@ Given that feature description, do this:
 - [ ] Focused on user value and business needs
 - [ ] Written for non-technical stakeholders
 - [ ] All mandatory sections completed
+## UX Consistency
+- [ ] Functional requirements fully support the 'Happy Path' in ux_reference.md
+- [ ] Error handling requirements match the 'Error Experience' in ux_reference.md
+- [ ] No requirements contradict the defined User Persona or Context
 ## Requirement Completeness
@@ -190,7 +211,7 @@ Given that feature description, do this:
 d. **Update Checklist**: After each validation iteration, update the checklist file with current pass/fail status
-7. Report completion with branch name, spec file path, checklist results, and readiness for the next phase (`/speckit.clarify` or `/speckit.plan`).
+7. Report completion with branch name, spec file path, ux_reference file path, checklist results, and readiness for the next phase (`/speckit.clarify` or `/speckit.plan`).
 **NOTE:** The script creates and checks out the new branch and initializes the spec file before writing.
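The `ux_reference.md` written in step 4 might look like the following skeleton. The content is entirely hypothetical; the section names simply mirror the bullets above, and the real `ux-reference-template.md` may define a different layout.

```markdown
# UX Reference: CSV Import (hypothetical feature)

## Persona
A data analyst who uploads weekly exports; comfortable with spreadsheets, not with logs.

## Happy Path
1. The user drags a CSV onto the upload zone; a progress bar appears immediately.
2. On completion, a summary reads "142 rows imported, 0 skipped".

## Mockups (CLI)
    $ import data.csv
    Importing... [##########] 100%
    142 rows imported, 0 skipped.

## Error Experience
A malformed row never aborts the import silently; the user sees the row number
and a one-line suggestion for fixing it.
```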


@@ -24,7 +24,7 @@ You **MUST** consider the user input before proceeding (if not empty).
 1. **Setup**: Run `.specify/scripts/bash/check-prerequisites.sh --json` from repo root and parse FEATURE_DIR and AVAILABLE_DOCS list. All paths must be absolute. For single quotes in args like "I'm Groot", use escape syntax: e.g. 'I'\''m Groot' (or double-quote if possible: "I'm Groot").
 2. **Load design documents**: Read from FEATURE_DIR:
-   - **Required**: plan.md (tech stack, libraries, structure), spec.md (user stories with priorities)
+   - **Required**: plan.md (tech stack, libraries, structure), spec.md (user stories with priorities), ux_reference.md (experience source of truth)
 - **Optional**: data-model.md (entities), contracts/ (API endpoints), research.md (decisions), quickstart.md (test scenarios)
 - Note: Not all projects have all documents. Generate tasks based on what's available.
@@ -70,6 +70,12 @@ The tasks.md should be immediately executable - each task must be specific enoug
 **Tests are OPTIONAL**: Only generate test tasks if explicitly requested in the feature specification or if the user requests a TDD approach.
+### UX Preservation (CRITICAL)
+- **Source of Truth**: `ux_reference.md` is the absolute standard for the "feel" of the feature.
+- **Violation Warning**: If any task would inherently violate the UX (e.g. "Remove progress bar to simplify code"), you **MUST** flag this to the user immediately.
+- **Verification Task**: You **MUST** add a specific task at the end of each User Story phase: `- [ ] Txxx [USx] Verify implementation matches ux_reference.md (Happy Path & Errors)`
 ### Checklist Format (REQUIRED)
 Every task MUST strictly follow this format:
@@ -113,7 +119,10 @@ Every task MUST strictly follow this format:
 - If tests requested: Tests specific to that story
 - Mark story dependencies (most stories should be independent)
-2. **From Contracts**:
+2. **From Contracts (CRITICAL TIER)**:
+   - Identify components marked as `@TIER: CRITICAL` in `contracts/modules.md`.
+   - For these components, **MUST** append the summary of `@PRE`, `@POST`, and `@UX_STATE` contracts directly to the task description.
+   - Example: `- [ ] T005 [P] [US1] Implement Auth (CRITICAL: PRE: token exists, POST: returns User) in src/auth.py`
 - Map each contract/endpoint → to the user story it serves
 - If tests requested: Each contract → contract test task [P] before implementation in that story's phase


@@ -0,0 +1,116 @@
**speckit.tasks.md**
### Modified Workflow
```markdown
---
description: Generate an actionable, dependency-ordered tasks.md for the feature based on available design artifacts.
handoffs:
- label: Analyze For Consistency
agent: speckit.analyze
prompt: Run a project analysis for consistency
send: true
- label: Implement Project
agent: speckit.implement
prompt: Start the implementation in phases
send: true
---
## User Input
```text
$ARGUMENTS
```
You **MUST** consider the user input before proceeding (if not empty).
## Outline
1. **Setup**: Run `.specify/scripts/bash/check-prerequisites.sh --json` from repo root and parse FEATURE_DIR and AVAILABLE_DOCS list. All paths must be absolute.
2. **Load design documents**: Read from FEATURE_DIR:
- **Required**: plan.md (tech stack, libraries, structure), spec.md (user stories with priorities), ux_reference.md (experience source of truth)
- **Optional**: data-model.md (entities), contracts/ (API endpoints), research.md (decisions)
3. **Execute task generation workflow**:
- **Architecture Analysis (CRITICAL)**: Scan existing codebase for patterns (DI, Auth, ORM).
- Load plan.md/spec.md.
- Generate tasks organized by user story.
- **Apply Fractal Co-location**: Ensure all unit tests are mapped to `__tests__` subdirectories relative to the code.
- Validate task completeness.
4. **Generate tasks.md**: Use `.specify/templates/tasks-template.md` as structure.
- Phase 1: Context & Setup.
- Phase 2: Foundational tasks.
- Phase 3+: User Stories (Priority order).
- Final Phase: Polish.
- **Strict Constraint**: Ensure tasks follow the Co-location and Mocking rules below.
5. **Report**: Output path to generated tasks.md and summary.
Context for task generation: $ARGUMENTS
## Task Generation Rules
**CRITICAL**: Tasks MUST be actionable, specific, architecture-aware, and context-local.
### Implementation & Testing Constraints (ANTI-LOOP & CO-LOCATION)
To prevent infinite debugging loops and context fragmentation, apply these rules:
1. **Fractal Co-location Strategy (MANDATORY)**:
- **Rule**: Unit tests MUST live next to the code they verify.
- **Forbidden**: Do NOT create unit tests in root `tests/` or `backend/tests/`. Those are for E2E/Integration only.
- **Pattern (Python)**:
- Source: `src/domain/order/processing.py`
- Test Task: `Create tests in src/domain/order/__tests__/test_processing.py`
- **Pattern (Frontend)**:
- Source: `src/lib/components/UserCard.svelte`
- Test Task: `Create tests in src/lib/components/__tests__/UserCard.test.ts`
2. **Semantic Relations**:
- Test generation tasks must explicitly instruct to add the relation header: `# @RELATION: VERIFIES -> [TargetComponent]`
3. **Strict Mocking for Unit Tests**:
- Any task creating Unit Tests MUST specify: *"Use `unittest.mock.MagicMock` for heavy dependencies (DB sessions, Auth). Do NOT instantiate real service classes."*
4. **Schema/Model Separation**:
- Explicitly separate tasks for ORM Models (SQLAlchemy) and Pydantic Schemas.
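Taken together, rules 1–3 suggest a co-located test like the sketch below. `OrderService` and its methods are hypothetical, and the class is inlined here (instead of imported from `src/domain/order/processing.py`) so the example is self-contained.

```python
# src/domain/order/__tests__/test_processing.py
# @RELATION: VERIFIES -> [OrderService]

from unittest.mock import MagicMock


# Stand-in for the code under test; in the real tree this would be:
#   from src.domain.order.processing import OrderService
class OrderService:
    def __init__(self, db_session):
        self.db = db_session

    def close_order(self, order_id: int) -> bool:
        order = self.db.get(order_id)
        order.status = "closed"
        self.db.commit()
        return True


def test_close_order_commits():
    # Heavy dependency (DB session) is a MagicMock, never a real service class.
    db = MagicMock()
    service = OrderService(db_session=db)

    assert service.close_order(42) is True
    db.get.assert_called_once_with(42)
    db.commit.assert_called_once()
```

Because the mock records every call, the test can assert on interactions (`assert_called_once_with`) without touching a database, which is what keeps the debugging loop short.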
### UX Preservation (CRITICAL)
- **Source of Truth**: `ux_reference.md` is the absolute standard.
- **Verification Task**: You **MUST** add a specific task at the end of each User Story phase: `- [ ] Txxx [USx] Verify implementation matches ux_reference.md (Happy Path & Errors)`
### Checklist Format (REQUIRED)
Every task MUST strictly follow this format:
```text
- [ ] [TaskID] [P?] [Story?] Description with file path
```
**Examples**:
- `- [ ] T005 [US1] Create unit tests for OrderService in src/services/__tests__/test_order.py (Mock DB)`
- `- [ ] T006 [US1] Implement OrderService in src/services/order.py`
- `- [ ] T005 [US1] Create tests in backend/tests/test_order.py` (VIOLATION: Wrong location)
### Task Organization & Phase Structure
**Phase 1: Context & Setup**
- **Goal**: Prepare environment and understand existing patterns.
- **Mandatory Task**: `- [ ] T001 Analyze existing project structure, auth patterns, and conftest.py location`
**Phase 2: Foundational (Data & Core)**
- Database Models (ORM).
- Pydantic Schemas (DTOs).
- Core Service interfaces.
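The model/schema split called for above can be sketched with stdlib stand-ins. Here dataclasses substitute for the assumed SQLAlchemy models and Pydantic schemas so the example stays dependency-free; only the separation itself is the point.

```python
from dataclasses import dataclass, asdict


# Persistence-side record (stand-in for a SQLAlchemy model): storage concerns only.
@dataclass
class OrderModel:
    id: int
    status: str
    internal_notes: str  # never exposed over the API


# API-side schema (stand-in for a Pydantic DTO): only what the client may see.
@dataclass(frozen=True)
class OrderSchema:
    id: int
    status: str

    @classmethod
    def from_model(cls, m: OrderModel) -> "OrderSchema":
        return cls(id=m.id, status=m.status)


order = OrderModel(id=7, status="open", internal_notes="VIP customer")
dto = OrderSchema.from_model(order)
assert "internal_notes" not in asdict(dto)  # the DTO cannot leak storage fields
```

Keeping the two as separate tasks prevents the common shortcut of serializing ORM objects directly, which couples the API surface to the database layout.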
**Phase 3+: User Stories (Iterative)**
- **Step 1: Isolation Tests (Co-located)**:
- `- [ ] Txxx [USx] Create unit tests for [Component] in [Path]/__tests__/test_[name].py`
- *Note: Specify using MagicMock for external deps.*
- **Step 2: Implementation**: Services -> Endpoints.
- **Step 3: Integration**: Wire up real dependencies (if E2E tests requested).
- **Step 4: UX Verification**.
**Final Phase: Polish**
- Linting, formatting, final manual verify.


@@ -13,20 +13,6 @@ customModes:
       - browser
       - mcp
     customInstructions: 1. Always begin by loading the relevant plan or task list from the `specs/` directory. 2. Do not assume a task is done just because it is checked; verify the code or functionality first if asked to audit. 3. When updating task lists, ensure you only mark items as complete if you have verified them.
-  - slug: product-manager
-    name: Product Manager
-    description: Executes SpecKit workflows for feature management
-    roleDefinition: |-
-      You are Kilo Code, acting as a Product Manager. Your purpose is to rigorously execute the workflows defined in `.kilocode/workflows/`.
-      You act as the orchestrator for: - Specification (`speckit.specify`, `speckit.clarify`) - Planning (`speckit.plan`) - Task Management (`speckit.tasks`, `speckit.taskstoissues`) - Quality Assurance (`speckit.analyze`, `speckit.checklist`) - Governance (`speckit.constitution`) - Implementation Oversight (`speckit.implement`)
-      For each task, you must read the relevant workflow file from `.kilocode/workflows/` and follow its Execution Steps precisely.
-    whenToUse: Use this mode when you need to run any /speckit.* command or when dealing with high-level feature planning, specification writing, or project management tasks.
-    groups:
-      - read
-      - edit
-      - command
-      - mcp
-    customInstructions: 1. Always read the specific workflow file in `.kilocode/workflows/` before executing a command. 2. Adhere strictly to the "Operating Constraints" and "Execution Steps" in the workflow files.
   - slug: semantic
     name: Semantic Agent
     roleDefinition: |-
@@ -43,3 +29,18 @@ customModes:
       - browser
       - mcp
     source: project
+  - slug: product-manager
+    name: Product Manager
+    roleDefinition: |-
+      Your purpose is to rigorously execute the workflows defined in `.kilocode/workflows/`.
+      You act as the orchestrator for: - Specification (`speckit.specify`, `speckit.clarify`) - Planning (`speckit.plan`) - Task Management (`speckit.tasks`, `speckit.taskstoissues`) - Quality Assurance (`speckit.analyze`, `speckit.checklist`) - Governance (`speckit.constitution`) - Implementation Oversight (`speckit.implement`)
+      For each task, you must read the relevant workflow file from `.kilocode/workflows/` and follow its Execution Steps precisely.
+    whenToUse: Use this mode when you need to run any /speckit.* command or when dealing with high-level feature planning, specification writing, or project management tasks.
+    description: Executes SpecKit workflows for feature management
+    customInstructions: 1. Always read the specific workflow file in `.kilocode/workflows/` before executing a command. 2. Adhere strictly to the "Operating Constraints" and "Execution Steps" in the workflow files.
+    groups:
+      - read
+      - edit
+      - command
+      - mcp
+    source: project


@@ -1,8 +1,9 @@
<!-- <!--
SYNC IMPACT REPORT SYNC IMPACT REPORT
Version: 1.9.0 (Security & RBAC Mandate) Version: 2.2.0 (ConfigManager Discipline)
Changes: Changes:
- Added Principle IX: Security & Access Control (Mandating granular permissions for plugins). - Updated Principle II: Added mandatory requirement for using `ConfigManager` (via dependency injection) for all configuration access to ensure consistent environment handling and avoid hardcoded values.
- Updated Principle III: Refined `requestApi` requirement.
Templates Status: Templates Status:
- .specify/templates/plan-template.md: ✅ Aligned. - .specify/templates/plan-template.md: ✅ Aligned.
- .specify/templates/spec-template.md: ✅ Aligned. - .specify/templates/spec-template.md: ✅ Aligned.
@@ -13,64 +14,42 @@ Templates Status:
## Core Principles ## Core Principles
### I. Semantic Protocol Compliance ### I. Semantic Protocol Compliance
The file `semantic_protocol.md` is the **authoritative technical standard** for this project. All code generation, refactoring, and architecture must strictly adhere to the standards, syntax, and workflows defined therein. The file `semantic_protocol.md` is the **sole and authoritative technical standard** for this project.
- **Syntax**: `[DEF]` anchors, `@RELATION` tags, and metadata must match the Protocol specification. - **Law**: All code must adhere to the Axioms (Meaning First, Contract First, etc.) defined in the Protocol.
- **Structure**: File layouts and headers must follow the "File Structure Standard". - **Syntax & Structure**: Anchors (`[DEF]`), Tags (`@KEY`), and File Structures must strictly match the Protocol.
- **Workflow**: The technical steps for generating code must align with the Protocol. - **Compliance**: Any deviation from `semantic_protocol.md` constitutes a build failure.
### II. Causal Validity (Contracts First) ### II. Everything is a Plugin & Centralized Config
As defined in the Protocol, Semantic definitions (Contracts) must ALWAYS precede implementation code. Logic is downstream of definition. We define the structure and constraints (`[DEF]`, `@PRE`, `@POST`) before writing the executable logic. All functional extensions, tools, or major features must be implemented as modular Plugins inheriting from `PluginBase`.
- **Modularity**: Logic should not reside in standalone services or scripts unless strictly necessary for core infrastructure. This ensures a unified execution model via the `TaskManager`.
- **Configuration Discipline**: All configuration access (environments, settings, paths) MUST use the `ConfigManager`. In the backend, the singleton instance MUST be obtained via dependency injection (`get_config_manager()`). Hardcoding environment IDs (e.g., "1") or paths is STRICTLY FORBIDDEN.
### III. Immutability of Architecture ### III. Unified Frontend Experience
Architectural decisions in the Module Header (`@LAYER`, `@INVARIANT`, `@CONSTRAINT`) are treated as immutable constraints. Changes to these require an explicit refactoring step, not ad-hoc modification during implementation.
### IV. Design by Contract (DbC)
Contracts are the Source of Truth. Functions and Classes must define their purpose, specifications, and constraints in the metadata block before implementation, strictly following the **Contracts (Section IV)** standard in `semantic_protocol.md`.
### V. Belief State Logging
Agents must maintain belief state logs for debugging and coherence checks, strictly following the **Logging Standard (Section V)** defined in `semantic_protocol.md`.
### VI. Fractal Complexity Limit
To maintain semantic coherence, code must adhere to the complexity limits (Module/Function size) defined in the **Fractal Complexity Limit (Section VI)** of `semantic_protocol.md`.
### VII. Everything is a Plugin
All functional extensions, tools, or major features must be implemented as modular Plugins inheriting from `PluginBase`. Logic should not reside in standalone services or scripts unless strictly necessary for core infrastructure. This ensures a unified execution model via the `TaskManager`, consistent logging, and modularity.
### VIII. Unified Frontend Experience
To ensure a consistent and accessible user experience, all frontend implementations must strictly adhere to the unified design and localization standards. To ensure a consistent and accessible user experience, all frontend implementations must strictly adhere to the unified design and localization standards.
- **Component Reusability**: All UI elements MUST utilize the standardized Svelte component library (`src/lib/ui`) and centralized design tokens. Ad-hoc styling and hardcoded values are prohibited. - **Component Reusability**: All UI elements MUST utilize the standardized Svelte component library (`src/lib/ui`) and centralized design tokens.
- **Internationalization (i18n)**: All user-facing text MUST be extracted to the translation system (`src/lib/i18n`). Hardcoded strings in the UI are prohibited. - **Internationalization (i18n)**: All user-facing text MUST be extracted to the translation system (`src/lib/i18n`).
- **Backend Communication**: All API requests MUST use the `requestApi` wrapper (or its derivatives like `fetchApi`, `postApi`) from `src/lib/api.js`. Direct use of the native `fetch` API for backend communication is FORBIDDEN to ensure consistent authentication (JWT) and error handling.
### IX. Security & Access Control ### IV. Security & Access Control
To support the Role-Based Access Control (RBAC) system, all functional components must define explicit permissions. To support the Role-Based Access Control (RBAC) system, all functional components must define explicit permissions.
- **Granular Permissions**: Every Plugin MUST define a unique permission string (e.g., `plugin:name:execute`) required for its operation. - **Granular Permissions**: Every Plugin MUST define a unique permission string (e.g., `plugin:name:execute`) required for its operation.
- **Registration**: These permissions MUST be registered in the system database during initialization or plugin loading to ensure they are available for role assignment in the Admin UI. - **Registration**: These permissions MUST be registered in the system database (`auth.db`) during initialization.
-## File Structure Standards
-Refer to **Section III (File Structure Standard)** in `semantic_protocol.md` for the authoritative definitions of:
-- Python Module Headers (`.py`)
-- Svelte Component Headers (`.svelte`)
-## Generation Workflow
-The development process follows a streamlined single-phase workflow:
-### 1. Code Generation Phase (Mode: `code`)
-**Input**: `tasks.md`
-**Responsibility**:
-- Select task from `tasks.md`.
-- Generate Scaffolding (`[DEF]` anchors, Headers, Contracts) AND Implementation in one pass.
-- Ensure strict adherence to Protocol Section IV (Contracts) and Section VII (Generation Workflow).
-- **Output**: Working code with passing tests.
-### 2. Validation
-If logic conflicts with Contract -> Stop -> Report Error.
+### V. Independent Testability
+Every feature specification MUST define "Independent Tests" that allow the feature to be verified in isolation.
+- **Decoupling**: Features should be designed such that they can be tested without requiring the full application state or external dependencies where possible.
+- **Verification**: A feature is not complete until its Independent Test scenarios pass.
+### VI. Asynchronous Execution
+All long-running or resource-intensive operations (migrations, analysis, backups, external API calls) MUST be executed as asynchronous tasks via the `TaskManager`.
+- **Non-Blocking**: HTTP API endpoints MUST NOT block on these operations; they should spawn a task and return a Task ID.
+- **Observability**: Tasks MUST emit real-time status updates via the WebSocket infrastructure.
 ## Governance
 This Constitution establishes the "Semantic Code Generation Protocol" as the supreme law of this repository.
-- **Authoritative Source**: `semantic_protocol.md` defines the specific implementation rules for these Principles.
-- **Automated Enforcement**: Tools must validate adherence to the `semantic_protocol.md` syntax.
+- **Authoritative Source**: `semantic_protocol.md` defines the specific implementation rules for Principle I.
 - **Amendments**: Changes to core principles require a Constitution amendment. Changes to technical syntax require a Protocol update.
 - **Compliance**: Failure to adhere to the Protocol constitutes a build failure.
-**Version**: 1.9.0 | **Ratified**: 2025-12-19 | **Last Amended**: 2026-01-27
+**Version**: 2.2.0 | **Ratified**: 2025-12-19 | **Last Amended**: 2026-01-29
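The non-blocking pattern required by Principle VI can be sketched in a few lines. `TaskManager`, `spawn()`, and the status strings below are illustrative assumptions for the sketch, not the project's actual API:

```python
# Minimal sketch of the non-blocking task pattern from Principle VI:
# start the work in the background, return a Task ID immediately.
import threading
import uuid

class TaskManager:
    def __init__(self):
        self.tasks = {}  # task_id -> "RUNNING" | "SUCCESS" | "ERROR"

    def spawn(self, fn, *args):
        """Start fn in a background thread and return a Task ID at once."""
        task_id = str(uuid.uuid4())
        self.tasks[task_id] = "RUNNING"

        def run():
            try:
                fn(*args)
                self.tasks[task_id] = "SUCCESS"
            except Exception:
                self.tasks[task_id] = "ERROR"

        threading.Thread(target=run, daemon=True).start()
        return task_id
```

An endpoint following this pattern would call `spawn()` and return the Task ID in its response body; status updates would then flow over the WebSocket channel.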

View File

@@ -1,6 +1,7 @@
 # Feature Specification: [FEATURE NAME]
 **Feature Branch**: `[###-feature-name]`
+**Reference UX**: `[ux_reference.md]` (See specific folder)
 **Created**: [DATE]
 **Status**: Draft
 **Input**: User description: "$ARGUMENTS"

View File

@@ -1,35 +0,0 @@
---
description: "Architecture task list template (Contracts & Scaffolding)"
---
# Architecture Tasks: [FEATURE NAME]
**Role**: Architect Agent
**Goal**: Define the "What" and "Why" (Contracts, Scaffolding, Models) before implementation.
**Input**: Design documents from `/specs/[###-feature-name]/`
**Output**: Files with `[DEF]` anchors, `@PRE`/`@POST` contracts, and `@RELATION` mappings. No business logic.
## Phase 1: Setup & Models
- [ ] A001 Create/Update data models in [path] with `[DEF]` and contracts
- [ ] A002 Define API route structure/contracts in [path]
- [ ] A003 Define shared utilities/interfaces
## Phase 2: User Story 1 - [Title]
- [ ] A004 [US1] Define contracts for [Component/Service] in [path]
- [ ] A005 [US1] Define contracts for [Endpoint] in [path]
- [ ] A006 [US1] Define contracts for [Frontend Component] in [path]
## Phase 3: User Story 2 - [Title]
- [ ] A007 [US2] Define contracts for [Component/Service] in [path]
- [ ] A008 [US2] Define contracts for [Endpoint] in [path]
## Handover Checklist
- [ ] All new files created with `[DEF]` anchors
- [ ] All functions/classes have `@PURPOSE`, `@PRE`, `@POST` tags
- [ ] No "naked code" (logic outside of anchors)
- [ ] `tasks-dev.md` is ready for the Developer Agent

View File

@@ -1,35 +0,0 @@
---
description: "Developer task list template (Implementation Logic)"
---
# Developer Tasks: [FEATURE NAME]
**Role**: Developer Agent
**Goal**: Implement the "How" (Logic, State, Error Handling) inside the defined contracts.
**Input**: `tasks-arch.md` (completed), Scaffolding files with `[DEF]` anchors.
**Output**: Working code that satisfies `@PRE`/`@POST` conditions.
## Phase 1: Setup & Models
- [ ] D001 Implement logic for [Model] in [path]
- [ ] D002 Implement logic for [API Route] in [path]
- [ ] D003 Implement shared utilities
## Phase 2: User Story 1 - [Title]
- [ ] D004 [US1] Implement logic for [Component/Service] in [path]
- [ ] D005 [US1] Implement logic for [Endpoint] in [path]
- [ ] D006 [US1] Implement logic for [Frontend Component] in [path]
- [ ] D007 [US1] Verify semantic compliance and belief state logging
## Phase 3: User Story 2 - [Title]
- [ ] D008 [US2] Implement logic for [Component/Service] in [path]
- [ ] D009 [US2] Implement logic for [Endpoint] in [path]
## Polish & Quality Assurance
- [ ] DXXX Verify all tests pass
- [ ] DXXX Check error handling and edge cases
- [ ] DXXX Ensure code style compliance

View File

@@ -0,0 +1,67 @@
# UX Reference: [FEATURE NAME]
**Feature Branch**: `[###-feature-name]`
**Created**: [DATE]
**Status**: Draft
## 1. User Persona & Context
* **Who is the user?**: [e.g. Junior Developer, System Administrator, End User]
* **What is their goal?**: [e.g. Quickly deploy a hotfix, Visualize complex data]
* **Context**: [e.g. Running a command in a terminal on a remote server, Browsing the dashboard on a mobile device]
## 2. The "Happy Path" Narrative
[Write a short story (3-5 sentences) describing the perfect interaction from the user's perspective. Focus on how it *feels* - is it instant? Does it guide them?]
## 3. Interface Mockups
### CLI Interaction (if applicable)
```bash
# User runs this command:
$ command --flag value
# System responds immediately with:
[ spinner ] specific loading message...
# Success output:
✅ Operation completed successfully in 1.2s
- Created file: /path/to/file
- Updated config: /path/to/config
```
### UI Layout & Flow (if applicable)
**Screen/Component**: [Name]
* **Layout**: [Description of structure, e.g., "Two-column layout, left sidebar navigation..."]
* **Key Elements**:
* **[Button Name]**: Primary action. Color: Blue.
* **[Input Field]**: Placeholder text: "Enter your name...". Validation: Real-time.
* **States**:
* **Default**: Clean state, waiting for input.
* **Loading**: Skeleton loader replaces content area.
* **Success**: Toast notification appears top-right: "Saved!" (Green).
## 4. The "Error" Experience
**Philosophy**: Don't just report the error; guide the user to the fix.
### Scenario A: [Common Error, e.g. Invalid Input]
* **User Action**: Enters "123" in a text-only field.
* **System Response**:
* (UI) Input border turns Red. Message below input: "Please enter text only."
* (CLI) `❌ Error: Invalid input '123'. Expected text format.`
* **Recovery**: User can immediately re-type without refreshing/re-running.
### Scenario B: [System Failure, e.g. Network Timeout]
* **System Response**: "Unable to connect. Retrying in 3s... (Press C to cancel)"
* **Recovery**: Automatic retry or explicit "Retry Now" button.
## 5. Tone & Voice
* **Style**: [e.g. Concise, Technical, Friendly, Verbose]
* **Terminology**: [e.g. Use "Repository" not "Repo", "Directory" not "Folder"]

README.md
View File

@@ -1,119 +1,77 @@
-# Superset automation tools
+# Superset automation tools (ss-tools)
 ## Overview
-This repository contains Python scripts and a library (`superset_tool`) for automating Apache Superset tasks, such as:
-- **Backup**: exporting all dashboards from a Superset instance to local storage.
-- **Migration**: transferring and transforming dashboards between different Superset environments (e.g. Development, Sandbox, Production).
+**ss-tools** is a modern platform for automating and managing the Apache Superset ecosystem. The project has evolved from a set of CLI scripts into a full web application with a Backend (FastAPI) + Frontend (SvelteKit) architecture, providing a convenient interface for complex operations.
+## Key features
+### 🚀 Dashboard migration and management
+- **Dashboard Grid**: a convenient view of all dashboards across all environments (Dev, Sandbox, Prod) in a single interface.
+- **Intelligent mapping**: automatic and manual matching of datasets, tables, and schemas when transferring between environments.
+- **Dependency checks**: validation that all required components are present before migration.
+### 📦 Backup
+- **Scheduler**: automatic scheduled backups of dashboards and datasets.
+- **Storage**: local artifact storage, manageable through the UI.
+### 🛠 Git integration
+- **Version Control**: versioning of Superset assets.
+- **Git Dashboard**: managing branches, commits, and deployments directly from the UI.
+- **Conflict Resolution**: built-in tools for resolving conflicts in YAML configurations.
+### 🤖 LLM analysis (AI Plugin)
+- **Automatic audit**: analyzing dashboard health based on screenshots and metadata.
+- **Documentation generation**: automatic descriptions of datasets and columns via an LLM (OpenAI, OpenRouter, etc.).
+- **Smart Validation**: detecting anomalies and errors in visualizations.
+### 🔐 Security and administration
+- **Multi-user Auth**: multi-user access with a role model (RBAC).
+- **Connection management**: centralized configuration of access to different Superset instances.
+- **Logging**: a detailed execution history for all background tasks.
+## Technology stack
+- **Backend**: Python 3.9+, FastAPI, SQLAlchemy, APScheduler, Pydantic.
+- **Frontend**: Node.js 18+, SvelteKit, Tailwind CSS.
+- **Database**: SQLite (stores metadata, tasks, and access settings).
 ## Project structure
-- `backup_script.py`: main script for scheduled backups of Superset dashboards.
-- `migration_script.py`: main script for transferring specific dashboards between environments, including overriding database connections.
-- `search_script.py`: script for searching data across all datasets available on the server.
-- `run_mapper.py`: CLI script for mapping dataset metadata.
-- `superset_tool/`:
-  - `client.py`: Python client for interacting with the Superset API.
-  - `exceptions.py`: custom exception classes for structured error handling.
-  - `models.py`: Pydantic models for validating configuration data.
-  - `utils/`:
-    - `fileio.py`: file-system utilities (archive handling, YAML parsing).
-    - `logger.py`: logger configuration for uniform logging across the project.
-    - `network.py`: HTTP client for network requests with authentication and retry handling.
-    - `init_clients.py`: utility for initializing Superset clients for different environments.
-    - `dataset_mapper.py`: dataset metadata mapping logic.
-## Setup
+- `backend/` — server side, API, and plugin logic.
+- `frontend/` — client side (SvelteKit application).
+- `specs/` — feature specifications and implementation plans.
+- `docs/` — additional documentation on mapping and plugin development.
+## Quick start
 ### Requirements
 - Python 3.9+
-- `pip` for package management.
-- `keyring` for secure password storage.
-### Installation
-1. **Clone the repository:**
-   ```bash
-   git clone https://prod.gitlab.dwh.rusal.com/dwh_bi/superset-tools.git
-   cd superset-tools
-   ```
-2. **Install dependencies:**
-   ```bash
-   pip install -r requirements.txt
-   ```
-   (You may need to create a `requirements.txt` with `pydantic`, `requests`, `keyring`, `PyYAML`, `urllib3`.)
-3. **Configure passwords:**
-   Use `keyring` to store the passwords of the Superset API users.
-   ```python
-   import keyring
-   keyring.set_password("system", "dev migrate", "migrate_user password")
-   keyring.set_password("system", "prod migrate", "migrate_user password")
-   keyring.set_password("system", "sandbox migrate", "migrate_user password")
-   ```
-## Usage
-### Running the project (Web UI)
-To start the backend and frontend servers with a single command:
+- Node.js 18+
+- Configured access to the Superset API
+### Running
+To set up the environments automatically and start both servers (Backend & Frontend), use the script:
 ```bash
 ./run.sh
 ```
+*The script creates a Python virtual environment, installs the `pip` and `npm` dependencies, and starts the services.*
 Options:
-- `--skip-install`: skip the dependency check and installation.
+- `--skip-install`: skip dependency installation.
 - `--help`: show help.
 Environment variables:
-- `BACKEND_PORT`: backend port (default 8000).
-- `FRONTEND_PORT`: frontend port (default 5173).
+- `BACKEND_PORT`: API port (default 8000).
+- `FRONTEND_PORT`: UI port (default 5173).
-### Backup script (`backup_script.py`)
-To back up dashboards from the configured Superset environments:
-```bash
-python backup_script.py
-```
-Backups are saved to `P:\Superset\010 Бекапы\` by default. Logs are stored in `P:\Superset\010 Бекапы\Logs`.
-### Migration script (`migration_script.py`)
-To transfer a specific dashboard:
-```bash
-python migration_script.py
-```
-### Search script (`search_script.py`)
-To search for text patterns in Superset dataset metadata:
-```bash
-python search_script.py
-```
-The script uses regular expressions to search dataset fields such as SQL queries. Results are written to the log and to the console.
-### Metadata mapping script (`run_mapper.py`)
-To update dataset metadata (e.g. verbose names) in Superset:
-```bash
-python run_mapper.py --source <source_type> --dataset-id <dataset_id> [--table-name <table_name>] [--table-schema <table_schema>] [--excel-path <path_to_excel>] [--env <environment>]
-```
-If you use XLSX, the file must contain two columns: column_name | verbose_name.
-Parameters:
-- `--source`: data source ('postgres', 'excel', or 'both').
-- `--dataset-id`: ID of the dataset to update.
-- `--table-name`: table name for PostgreSQL.
-- `--table-schema`: table schema for PostgreSQL.
-- `--excel-path`: path to the Excel file.
-- `--env`: Superset environment ('dev', 'prod', etc.).
-Usage examples:
-```bash
-python run_mapper.py --source postgres --dataset-id 123 --table-name account_debt --table-schema dm_view --env dev
-python run_mapper.py --source=excel --dataset-id=286 --excel-path=H:\dev\ss-tools\286_map.xlsx --env=dev
-```
-## Logging
-Logs are written to a file in the `Logs` directory (e.g. `P:\Superset\010 Бекапы\Logs` for backups) and to the console. The default log level is `INFO`.
-## Development and contributing
-- Follow the **Semantic Code Generation Protocol** (see `semantic_protocol.md`):
-  - All definitions are wrapped in `[DEF]...[/DEF]`.
-  - Contracts (`@PRE`, `@POST`) are defined BEFORE implementation.
-  - Strict typing and immutability of architectural decisions.
-- Follow the project Constitution (`.specify/memory/constitution.md`).
-- Use `Pydantic` models for data validation.
-- Implement comprehensive error handling with custom exceptions.
+## Development
+The project follows strict development rules:
+1. **Semantic Code Generation**: the `semantic_protocol.md` protocol is used to keep the code reliable.
+2. **Design by Contract (DbC)**: preconditions and postconditions are defined for key functions.
+3. **Constitution**: the rules described in the project constitution under `.specify/` are followed.
+### Useful commands
+- **Backend**: `cd backend && .venv/bin/python3 -m uvicorn src.app:app --reload`
+- **Frontend**: `cd frontend && npm run dev`
+- **Tests**: `cd backend && .venv/bin/pytest`
+## Contact and contributing
+To add new features or fix bugs, please read `docs/plugin_dev.md` and create a corresponding specification in `specs/`.

Binary file not shown.

Submodule backend/backend/git_repos/12 deleted from f46772443a

backend/get_full_key.py Normal file
View File

@@ -0,0 +1 @@
{"print(f'Length": {"else": "print('Provider not found')\ndb.close()"}}

backend/git_repos/12 Submodule

Submodule backend/git_repos/12 added at 57ab7e8679

File diff suppressed because it is too large

Binary file not shown.

View File

@@ -50,4 +50,7 @@ psycopg2-binary
 openpyxl
 GitPython==3.1.44
 itsdangerous
 email-validator
+openai
+playwright
+tenacity

View File

@@ -1 +1,3 @@
 from . import plugins, tasks, settings, connections, environments, mappings, migration, git, storage, admin
+__all__ = ['plugins', 'tasks', 'settings', 'connections', 'environments', 'mappings', 'migration', 'git', 'storage', 'admin']

View File

@@ -21,8 +21,8 @@ from ...schemas.auth import (
     RoleSchema, RoleCreate, RoleUpdate, PermissionSchema,
     ADGroupMappingSchema, ADGroupMappingCreate
 )
-from ...models.auth import User, Role, Permission, ADGroupMapping
-from ...dependencies import has_permission, get_current_user
+from ...models.auth import User, Role, ADGroupMapping
+from ...dependencies import has_permission
 from ...core.logger import logger, belief_scope
 # [/SECTION]

View File

@@ -11,7 +11,7 @@ from fastapi import APIRouter, Depends, HTTPException, status
 from sqlalchemy.orm import Session
 from ...core.database import get_db
 from ...models.connection import ConnectionConfig
-from pydantic import BaseModel, Field
+from pydantic import BaseModel
 from datetime import datetime
 from ...core.logger import logger, belief_scope
 # [/SECTION]

View File

@@ -0,0 +1,105 @@
# [DEF:backend.src.api.routes.dashboards:Module]
#
# @TIER: STANDARD
# @SEMANTICS: api, dashboards, resources, hub
# @PURPOSE: API endpoints for the Dashboard Hub - listing dashboards with Git and task status
# @LAYER: API
# @RELATION: DEPENDS_ON -> backend.src.dependencies
# @RELATION: DEPENDS_ON -> backend.src.services.resource_service
# @RELATION: DEPENDS_ON -> backend.src.core.superset_client
#
# @INVARIANT: All dashboard responses include git_status and last_task metadata
# [SECTION: IMPORTS]
from fastapi import APIRouter, Depends, HTTPException
from typing import List, Optional
from pydantic import BaseModel, Field
from ...dependencies import get_config_manager, get_task_manager, get_resource_service, has_permission
from ...core.logger import logger, belief_scope
# [/SECTION]
router = APIRouter()
# [DEF:GitStatus:DataClass]
class GitStatus(BaseModel):
branch: Optional[str] = None
sync_status: Optional[str] = Field(None, pattern="^(OK|DIFF)$")
# [/DEF:GitStatus:DataClass]
# [DEF:LastTask:DataClass]
class LastTask(BaseModel):
task_id: Optional[str] = None
status: Optional[str] = Field(None, pattern="^(RUNNING|SUCCESS|ERROR|WAITING_INPUT)$")
# [/DEF:LastTask:DataClass]
# [DEF:DashboardItem:DataClass]
class DashboardItem(BaseModel):
id: int
title: str
slug: Optional[str] = None
url: Optional[str] = None
last_modified: Optional[str] = None
git_status: Optional[GitStatus] = None
last_task: Optional[LastTask] = None
# [/DEF:DashboardItem:DataClass]
# [DEF:DashboardsResponse:DataClass]
class DashboardsResponse(BaseModel):
dashboards: List[DashboardItem]
total: int
# [/DEF:DashboardsResponse:DataClass]
# [DEF:get_dashboards:Function]
# @PURPOSE: Fetch list of dashboards from a specific environment with Git status and last task status
# @PRE: env_id must be a valid environment ID
# @POST: Returns a list of dashboards with enhanced metadata
# @PARAM: env_id (str) - The environment ID to fetch dashboards from
# @PARAM: search (Optional[str]) - Filter by title/slug
# @RETURN: DashboardsResponse - List of dashboards with status metadata
# @RELATION: CALLS -> ResourceService.get_dashboards_with_status
@router.get("/api/dashboards", response_model=DashboardsResponse)
async def get_dashboards(
env_id: str,
search: Optional[str] = None,
config_manager=Depends(get_config_manager),
task_manager=Depends(get_task_manager),
resource_service=Depends(get_resource_service),
_ = Depends(has_permission("plugin:migration", "READ"))
):
with belief_scope("get_dashboards", f"env_id={env_id}, search={search}"):
# Validate environment exists
environments = config_manager.get_environments()
env = next((e for e in environments if e.id == env_id), None)
if not env:
logger.error(f"[get_dashboards][Coherence:Failed] Environment not found: {env_id}")
raise HTTPException(status_code=404, detail="Environment not found")
try:
# Get all tasks for status lookup
all_tasks = task_manager.get_all_tasks()
# Fetch dashboards with status using ResourceService
dashboards = await resource_service.get_dashboards_with_status(env, all_tasks)
# Apply search filter if provided
if search:
search_lower = search.lower()
dashboards = [
d for d in dashboards
if search_lower in d.get('title', '').lower()
or search_lower in d.get('slug', '').lower()
]
logger.info(f"[get_dashboards][Coherence:OK] Returning {len(dashboards)} dashboards")
return DashboardsResponse(
dashboards=dashboards,
total=len(dashboards)
)
except Exception as e:
logger.error(f"[get_dashboards][Coherence:Failed] Failed to fetch dashboards: {e}")
raise HTTPException(status_code=503, detail=f"Failed to fetch dashboards: {str(e)}")
# [/DEF:get_dashboards:Function]
# [/DEF:backend.src.api.routes.dashboards:Module]
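The search filtering inside `get_dashboards` can be isolated into a pure helper, which makes it independently testable without the FastAPI app or a live environment. This is a hypothetical refactoring sketch, not part of the diff:

```python
# Hypothetical extraction of the case-insensitive search filter used by
# get_dashboards; matches the search string against title or slug.
def filter_dashboards(dashboards, search):
    if not search:
        return dashboards
    s = search.lower()
    return [
        d for d in dashboards
        if s in d.get('title', '').lower() or s in d.get('slug', '').lower()
    ]
```

The route handler would then reduce to environment validation plus a call to this helper, in line with the Independent Testability principle.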

View File

@@ -0,0 +1,103 @@
# [DEF:backend.src.api.routes.datasets:Module]
#
# @TIER: STANDARD
# @SEMANTICS: api, datasets, resources, hub
# @PURPOSE: API endpoints for the Dataset Hub - listing datasets with mapping progress
# @LAYER: API
# @RELATION: DEPENDS_ON -> backend.src.dependencies
# @RELATION: DEPENDS_ON -> backend.src.services.resource_service
# @RELATION: DEPENDS_ON -> backend.src.core.superset_client
#
# @INVARIANT: All dataset responses include last_task metadata
# [SECTION: IMPORTS]
from fastapi import APIRouter, Depends, HTTPException
from typing import List, Optional
from pydantic import BaseModel, Field
from ...dependencies import get_config_manager, get_task_manager, get_resource_service, has_permission
from ...core.logger import logger, belief_scope
# [/SECTION]
router = APIRouter()
# [DEF:MappedFields:DataClass]
class MappedFields(BaseModel):
total: int
mapped: int
# [/DEF:MappedFields:DataClass]
# [DEF:LastTask:DataClass]
class LastTask(BaseModel):
task_id: Optional[str] = None
status: Optional[str] = Field(None, pattern="^(RUNNING|SUCCESS|ERROR|WAITING_INPUT)$")
# [/DEF:LastTask:DataClass]
# [DEF:DatasetItem:DataClass]
class DatasetItem(BaseModel):
id: int
table_name: str
schema: str
database: str
mapped_fields: Optional[MappedFields] = None
last_task: Optional[LastTask] = None
# [/DEF:DatasetItem:DataClass]
# [DEF:DatasetsResponse:DataClass]
class DatasetsResponse(BaseModel):
datasets: List[DatasetItem]
total: int
# [/DEF:DatasetsResponse:DataClass]
# [DEF:get_datasets:Function]
# @PURPOSE: Fetch list of datasets from a specific environment with mapping progress
# @PRE: env_id must be a valid environment ID
# @POST: Returns a list of datasets with enhanced metadata
# @PARAM: env_id (str) - The environment ID to fetch datasets from
# @PARAM: search (Optional[str]) - Filter by table name
# @RETURN: DatasetsResponse - List of datasets with status metadata
# @RELATION: CALLS -> ResourceService.get_datasets_with_status
@router.get("/api/datasets", response_model=DatasetsResponse)
async def get_datasets(
env_id: str,
search: Optional[str] = None,
config_manager=Depends(get_config_manager),
task_manager=Depends(get_task_manager),
resource_service=Depends(get_resource_service),
_ = Depends(has_permission("plugin:migration", "READ"))
):
with belief_scope("get_datasets", f"env_id={env_id}, search={search}"):
# Validate environment exists
environments = config_manager.get_environments()
env = next((e for e in environments if e.id == env_id), None)
if not env:
logger.error(f"[get_datasets][Coherence:Failed] Environment not found: {env_id}")
raise HTTPException(status_code=404, detail="Environment not found")
try:
# Get all tasks for status lookup
all_tasks = task_manager.get_all_tasks()
# Fetch datasets with status using ResourceService
datasets = await resource_service.get_datasets_with_status(env, all_tasks)
# Apply search filter if provided
if search:
search_lower = search.lower()
datasets = [
d for d in datasets
if search_lower in d.get('table_name', '').lower()
]
logger.info(f"[get_datasets][Coherence:OK] Returning {len(datasets)} datasets")
return DatasetsResponse(
datasets=datasets,
total=len(datasets)
)
except Exception as e:
logger.error(f"[get_datasets][Coherence:Failed] Failed to fetch datasets: {e}")
raise HTTPException(status_code=503, detail=f"Failed to fetch datasets: {str(e)}")
# [/DEF:get_datasets:Function]
# [/DEF:backend.src.api.routes.datasets:Module]
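Both route modules repeat the same environment lookup, `next((e for e in environments if e.id == env_id), None)`. A tiny helper of that shape is easy to verify in isolation; the helper name is hypothetical and does not appear in the diff:

```python
# Hypothetical helper mirroring the environment lookup repeated in
# get_dashboards and get_datasets.
def find_environment(environments, env_id):
    """Return the environment whose .id equals env_id, or None if absent."""
    return next((e for e in environments if e.id == env_id), None)
```

A `None` result is what triggers the 404 "Environment not found" branch in both handlers.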

View File

@@ -1,5 +1,6 @@
 # [DEF:backend.src.api.routes.environments:Module]
 #
+# @TIER: STANDARD
 # @SEMANTICS: api, environments, superset, databases
 # @PURPOSE: API endpoints for listing environments and their databases.
 # @LAYER: API
@@ -10,11 +11,10 @@
 # [SECTION: IMPORTS]
 from fastapi import APIRouter, Depends, HTTPException
-from typing import List, Dict, Optional
+from typing import List, Optional
 from ...dependencies import get_config_manager, get_scheduler_service, has_permission
 from ...core.superset_client import SupersetClient
 from pydantic import BaseModel, Field
-from ...core.config_models import Environment as EnvModel
 from ...core.logger import belief_scope
 # [/SECTION]

View File

@@ -1,5 +1,6 @@
 # [DEF:backend.src.api.routes.git:Module]
 #
+# @TIER: STANDARD
 # @SEMANTICS: git, routes, api, fastapi, repository, deployment
 # @PURPOSE: Provides FastAPI endpoints for Git integration operations.
 # @LAYER: API
@@ -15,10 +16,10 @@ from typing import List, Optional
 import typing
 from src.dependencies import get_config_manager, has_permission
 from src.core.database import get_db
-from src.models.git import GitServerConfig, GitStatus, DeploymentEnvironment, GitRepository
+from src.models.git import GitServerConfig, GitRepository
 from src.api.routes.git_schemas import (
     GitServerConfigSchema, GitServerConfigCreate,
-    GitRepositorySchema, BranchSchema, BranchCreate,
+    BranchSchema, BranchCreate,
     BranchCheckout, CommitSchema, CommitCreate,
     DeploymentEnvironmentSchema, DeployRequest, RepoInitRequest
 )
@@ -397,4 +398,59 @@ async def get_repository_diff(
         raise HTTPException(status_code=400, detail=str(e))
 # [/DEF:get_repository_diff:Function]
# [DEF:generate_commit_message:Function]
# @PURPOSE: Generate a suggested commit message using LLM.
# @PRE: Repository for `dashboard_id` is initialized.
# @POST: Returns a suggested commit message string.
@router.post("/repositories/{dashboard_id}/generate-message")
async def generate_commit_message(
dashboard_id: int,
db: Session = Depends(get_db),
_ = Depends(has_permission("plugin:git", "EXECUTE"))
):
with belief_scope("generate_commit_message"):
try:
# 1. Get Diff
diff = git_service.get_diff(dashboard_id, staged=True)
if not diff:
diff = git_service.get_diff(dashboard_id, staged=False)
if not diff:
return {"message": "No changes detected"}
# 2. Get History
history_objs = git_service.get_commit_history(dashboard_id, limit=5)
history = [h.message for h in history_objs if hasattr(h, 'message')]
# 3. Get LLM Client
from ...services.llm_provider import LLMProviderService
from ...plugins.llm_analysis.service import LLMClient
from ...plugins.llm_analysis.models import LLMProviderType
llm_service = LLMProviderService(db)
providers = llm_service.get_all_providers()
provider = next((p for p in providers if p.is_active), None)
if not provider:
raise HTTPException(status_code=400, detail="No active LLM provider found")
api_key = llm_service.get_decrypted_api_key(provider.id)
client = LLMClient(
provider_type=LLMProviderType(provider.provider_type),
api_key=api_key,
base_url=provider.base_url,
default_model=provider.default_model
)
# 4. Generate Message
from ...plugins.git.llm_extension import GitLLMExtension
extension = GitLLMExtension(client)
message = await extension.suggest_commit_message(diff, history)
return {"message": message}
except Exception as e:
logger.error(f"Failed to generate commit message: {e}")
raise HTTPException(status_code=400, detail=str(e))
# [/DEF:generate_commit_message:Function]
 # [/DEF:backend.src.api.routes.git:Module]
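`GitLLMExtension.suggest_commit_message` receives the staged diff and the last few commit messages. A prompt for it might be assembled along these lines; the function name, wording, and truncation limit are illustrative assumptions, and the plugin's real prompt may differ:

```python
# Illustrative prompt builder for commit-message suggestion; not the
# plugin's actual implementation.
def build_commit_prompt(diff: str, history: list, max_diff_chars: int = 4000) -> str:
    # Recent messages give the model the repository's commit style.
    recent = "\n".join(f"- {m}" for m in history) or "- (no history)"
    return (
        "Suggest a single conventional-commit message for the change below.\n"
        f"Recent commit messages:\n{recent}\n"
        f"Diff (truncated):\n{diff[:max_diff_chars]}"
    )
```

Truncating the diff keeps the request within the provider's context window, which matters for large YAML exports.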

View File

@@ -11,7 +11,6 @@
 from pydantic import BaseModel, Field
 from typing import List, Optional
 from datetime import datetime
-from uuid import UUID
 from src.models.git import GitProvider, GitStatus, SyncStatus
 # [DEF:GitServerConfigBase:Class]

View File

@@ -0,0 +1,207 @@
# [DEF:backend/src/api/routes/llm.py:Module]
# @TIER: STANDARD
# @SEMANTICS: api, routes, llm
# @PURPOSE: API routes for LLM provider configuration and management.
# @LAYER: UI (API)
from fastapi import APIRouter, Depends, HTTPException, status
from typing import List
from ...core.logger import logger
from ...schemas.auth import User
from ...dependencies import get_current_user as get_current_active_user
from ...plugins.llm_analysis.models import LLMProviderConfig, LLMProviderType
from ...services.llm_provider import LLMProviderService
from ...core.database import get_db
from sqlalchemy.orm import Session
# [DEF:router:Global]
# @PURPOSE: APIRouter instance for LLM routes.
router = APIRouter(prefix="/api/llm", tags=["LLM"])
# [/DEF:router:Global]
# [DEF:get_providers:Function]
# @PURPOSE: Retrieve all LLM provider configurations.
# @PRE: User is authenticated.
# @POST: Returns list of LLMProviderConfig.
@router.get("/providers", response_model=List[LLMProviderConfig])
async def get_providers(
current_user: User = Depends(get_current_active_user),
db: Session = Depends(get_db)
):
"""
Get all LLM provider configurations.
"""
logger.info(f"[llm_routes][get_providers][Action] Fetching providers for user: {current_user.username}")
service = LLMProviderService(db)
providers = service.get_all_providers()
return [
LLMProviderConfig(
id=p.id,
provider_type=LLMProviderType(p.provider_type),
name=p.name,
base_url=p.base_url,
api_key="********",
default_model=p.default_model,
is_active=p.is_active
) for p in providers
]
# [/DEF:get_providers:Function]
# [DEF:create_provider:Function]
# @PURPOSE: Create a new LLM provider configuration.
# @PRE: User is authenticated and has admin permissions.
# @POST: Returns the created LLMProviderConfig.
@router.post("/providers", response_model=LLMProviderConfig, status_code=status.HTTP_201_CREATED)
async def create_provider(
config: LLMProviderConfig,
current_user: User = Depends(get_current_active_user),
db: Session = Depends(get_db)
):
"""
Create a new LLM provider configuration.
"""
service = LLMProviderService(db)
provider = service.create_provider(config)
return LLMProviderConfig(
id=provider.id,
provider_type=LLMProviderType(provider.provider_type),
name=provider.name,
base_url=provider.base_url,
api_key="********",
default_model=provider.default_model,
is_active=provider.is_active
)
# [/DEF:create_provider:Function]
# [DEF:update_provider:Function]
# @PURPOSE: Update an existing LLM provider configuration.
# @PRE: User is authenticated and has admin permissions.
# @POST: Returns the updated LLMProviderConfig.
@router.put("/providers/{provider_id}", response_model=LLMProviderConfig)
async def update_provider(
provider_id: str,
config: LLMProviderConfig,
current_user: User = Depends(get_current_active_user),
db: Session = Depends(get_db)
):
"""
Update an existing LLM provider configuration.
"""
service = LLMProviderService(db)
provider = service.update_provider(provider_id, config)
if not provider:
raise HTTPException(status_code=404, detail="Provider not found")
return LLMProviderConfig(
id=provider.id,
provider_type=LLMProviderType(provider.provider_type),
name=provider.name,
base_url=provider.base_url,
api_key="********",
default_model=provider.default_model,
is_active=provider.is_active
)
# [/DEF:update_provider:Function]
# [DEF:delete_provider:Function]
# @PURPOSE: Delete an LLM provider configuration.
# @PRE: User is authenticated and has admin permissions.
# @POST: Returns success status.
@router.delete("/providers/{provider_id}", status_code=status.HTTP_204_NO_CONTENT)
async def delete_provider(
provider_id: str,
current_user: User = Depends(get_current_active_user),
db: Session = Depends(get_db)
):
"""
Delete an LLM provider configuration.
"""
service = LLMProviderService(db)
if not service.delete_provider(provider_id):
raise HTTPException(status_code=404, detail="Provider not found")
return
# [/DEF:delete_provider:Function]
# [DEF:test_connection:Function]
# @PURPOSE: Test connection to an LLM provider.
# @PRE: User is authenticated.
# @POST: Returns success status and message.
@router.post("/providers/{provider_id}/test")
async def test_connection(
provider_id: str,
current_user: User = Depends(get_current_active_user),
db: Session = Depends(get_db)
):
"""
Test connection to an LLM provider.
"""
logger.info(f"[llm_routes][test_connection][Action] Testing connection for provider_id: {provider_id}")
from ...plugins.llm_analysis.service import LLMClient
service = LLMProviderService(db)
db_provider = service.get_provider(provider_id)
if not db_provider:
raise HTTPException(status_code=404, detail="Provider not found")
api_key = service.get_decrypted_api_key(provider_id)
# Check if API key was successfully decrypted
if not api_key:
logger.error(f"[llm_routes][test_connection] Failed to decrypt API key for provider {provider_id}")
raise HTTPException(
status_code=500,
detail="Failed to decrypt API key. The provider may have been encrypted with a different encryption key. Please update the provider with a new API key."
)
client = LLMClient(
provider_type=LLMProviderType(db_provider.provider_type),
api_key=api_key,
base_url=db_provider.base_url,
default_model=db_provider.default_model
)
try:
# Simple test call
await client.client.models.list()
return {"success": True, "message": "Connection successful"}
except Exception as e:
return {"success": False, "error": str(e)}
# [/DEF:test_connection:Function]
# [DEF:test_provider_config:Function]
# @PURPOSE: Test connection with a provided configuration (not yet saved).
# @PRE: User is authenticated.
# @POST: Returns success status and message.
@router.post("/providers/test")
async def test_provider_config(
config: LLMProviderConfig,
current_user: User = Depends(get_current_active_user)
):
"""
Test connection with a provided configuration.
"""
from ...plugins.llm_analysis.service import LLMClient
logger.info(f"[llm_routes][test_provider_config][Action] Testing config for {config.name}")
# Check if API key is provided
if not config.api_key or config.api_key == "********":
raise HTTPException(
status_code=400,
detail="API key is required for testing connection"
)
client = LLMClient(
provider_type=config.provider_type,
api_key=config.api_key,
base_url=config.base_url,
default_model=config.default_model
)
try:
# Simple test call
await client.client.models.list()
return {"success": True, "message": "Connection successful"}
except Exception as e:
return {"success": False, "error": str(e)}
# [/DEF:test_provider_config:Function]
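Review note on the two test endpoints above: saved providers come back from `GET /providers` with the key masked as `"********"`, so the config-test route must reject that sentinel (and missing keys) before building an `LLMClient`. A minimal stdlib sketch of that guard — the helper name is illustrative, not part of the module:

```python
from typing import Optional

MASKED_KEY = "********"

def validate_api_key_for_test(api_key: Optional[str]) -> None:
    # Reject missing keys and the masked placeholder returned by GET /providers,
    # mirroring the guard in test_provider_config (which raises HTTP 400).
    if not api_key or api_key == MASKED_KEY:
        raise ValueError("API key is required for testing connection")

validate_api_key_for_test("sk-test-123")  # passes silently
```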
# [/DEF:backend/src/api/routes/llm.py]

View File

@@ -1,5 +1,6 @@
 # [DEF:backend.src.api.routes.mappings:Module]
 #
+# @TIER: STANDARD
 # @SEMANTICS: api, mappings, database, fuzzy-matching
 # @PURPOSE: API endpoints for managing database mappings and getting suggestions.
 # @LAYER: API

View File

@@ -1,4 +1,5 @@
 # [DEF:backend.src.api.routes.migration:Module]
+# @TIER: STANDARD
 # @SEMANTICS: api, migration, dashboards
 # @PURPOSE: API endpoints for migration operations.
 # @LAYER: API
@@ -6,7 +7,7 @@
 # @RELATION: DEPENDS_ON -> backend.src.models.dashboard
 from fastapi import APIRouter, Depends, HTTPException
-from typing import List, Dict
+from typing import List
 from ...dependencies import get_config_manager, get_task_manager, has_permission
 from ...models.dashboard import DashboardMetadata, DashboardSelection
 from ...core.superset_client import SupersetClient

View File

@@ -1,4 +1,5 @@
 # [DEF:PluginsRouter:Module]
+# @TIER: STANDARD
 # @SEMANTICS: api, router, plugins, list
 # @PURPOSE: Defines the FastAPI router for plugin-related endpoints, allowing clients to list available plugins.
 # @LAYER: UI (API)

View File

@@ -12,15 +12,24 @@
 # [SECTION: IMPORTS]
 from fastapi import APIRouter, Depends, HTTPException
 from typing import List
-from ...core.config_models import AppConfig, Environment, GlobalSettings
+from pydantic import BaseModel
+from ...core.config_models import AppConfig, Environment, GlobalSettings, LoggingConfig
 from ...models.storage import StorageConfig
 from ...dependencies import get_config_manager, has_permission
 from ...core.config_manager import ConfigManager
 from ...core.logger import logger, belief_scope
 from ...core.superset_client import SupersetClient
+import os
 # [/SECTION]
+# [DEF:LoggingConfigResponse:Class]
+# @PURPOSE: Response model for logging configuration with current task log level.
+# @SEMANTICS: logging, config, response
+class LoggingConfigResponse(BaseModel):
+    level: str
+    task_log_level: str
+    enable_belief_state: bool
+# [/DEF:LoggingConfigResponse:Class]
 router = APIRouter()
 # [DEF:get_settings:Function]
@@ -223,5 +232,83 @@ async def test_environment_connection(
         return {"status": "error", "message": str(e)}
 # [/DEF:test_environment_connection:Function]
+# [DEF:get_logging_config:Function]
+# @PURPOSE: Retrieves current logging configuration.
+# @PRE: Config manager is available.
+# @POST: Returns logging configuration.
+# @RETURN: LoggingConfigResponse - The current logging config.
+@router.get("/logging", response_model=LoggingConfigResponse)
+async def get_logging_config(
+    config_manager: ConfigManager = Depends(get_config_manager),
+    _ = Depends(has_permission("admin:settings", "READ"))
+):
+    with belief_scope("get_logging_config"):
+        logging_config = config_manager.get_config().settings.logging
+        return LoggingConfigResponse(
+            level=logging_config.level,
+            task_log_level=logging_config.task_log_level,
+            enable_belief_state=logging_config.enable_belief_state
+        )
+# [/DEF:get_logging_config:Function]
+# [DEF:update_logging_config:Function]
+# @PURPOSE: Updates logging configuration.
+# @PRE: New logging config is provided.
+# @POST: Logging configuration is updated and saved.
+# @PARAM: config (LoggingConfig) - The new logging configuration.
+# @RETURN: LoggingConfigResponse - The updated logging config.
+@router.patch("/logging", response_model=LoggingConfigResponse)
+async def update_logging_config(
+    config: LoggingConfig,
+    config_manager: ConfigManager = Depends(get_config_manager),
+    _ = Depends(has_permission("admin:settings", "WRITE"))
+):
+    with belief_scope("update_logging_config"):
+        logger.info(f"[update_logging_config][Entry] Updating logging config: level={config.level}, task_log_level={config.task_log_level}")
+        # Get current settings and update logging config
+        settings = config_manager.get_config().settings
+        settings.logging = config
+        config_manager.update_global_settings(settings)
+        return LoggingConfigResponse(
+            level=config.level,
+            task_log_level=config.task_log_level,
+            enable_belief_state=config.enable_belief_state
+        )
+# [/DEF:update_logging_config:Function]
+# [DEF:ConsolidatedSettingsResponse:Class]
+class ConsolidatedSettingsResponse(BaseModel):
+    environments: List[dict]
+    connections: List[dict]
+    llm: dict
+    logging: dict
+    storage: dict
+# [/DEF:ConsolidatedSettingsResponse:Class]
+# [DEF:get_consolidated_settings:Function]
+# @PURPOSE: Retrieves all settings categories in a single call.
+# @PRE: Config manager is available.
+# @POST: Returns all consolidated settings.
+# @RETURN: ConsolidatedSettingsResponse - All settings categories.
+@router.get("/consolidated", response_model=ConsolidatedSettingsResponse)
+async def get_consolidated_settings(
+    config_manager: ConfigManager = Depends(get_config_manager),
+    _ = Depends(has_permission("admin:settings", "READ"))
+):
+    with belief_scope("get_consolidated_settings"):
+        logger.info("[get_consolidated_settings][Entry] Fetching all consolidated settings")
+        config = config_manager.get_config()
+        return ConsolidatedSettingsResponse(
+            environments=config.environments,
+            connections=config.settings.connections,
+            llm=config.settings.llm,
+            logging=config.settings.logging,
+            storage=config.settings.storage
+        )
+# [/DEF:get_consolidated_settings:Function]
 # [/DEF:SettingsRouter:Module]
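The `/consolidated` endpoint in this hunk is a pure projection of existing config sections into one payload, trading five round-trips for one. A stdlib-only sketch of the aggregation — plain dicts stand in for the real `ConfigManager` objects, so the names are illustrative:

```python
def consolidate(config: dict) -> dict:
    # Mirror ConsolidatedSettingsResponse: five top-level categories in one payload.
    settings = config["settings"]
    return {
        "environments": config["environments"],
        "connections": settings["connections"],
        "llm": settings["llm"],
        "logging": settings["logging"],
        "storage": settings["storage"],
    }

cfg = {
    "environments": [{"name": "dev"}],
    "settings": {"connections": [], "llm": {}, "logging": {"level": "INFO"}, "storage": {}},
}
consolidate(cfg)  # one payload with all five categories
```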

View File

@@ -1,5 +1,6 @@
 # [DEF:storage_routes:Module]
 #
+# @TIER: STANDARD
 # @SEMANTICS: storage, files, upload, download, backup, repository
 # @PURPOSE: API endpoints for file storage management (backups and repositories).
 # @LAYER: API

View File

@@ -1,15 +1,17 @@
 # [DEF:TasksRouter:Module]
-# @SEMANTICS: api, router, tasks, create, list, get
+# @TIER: STANDARD
+# @SEMANTICS: api, router, tasks, create, list, get, logs
 # @PURPOSE: Defines the FastAPI router for task-related endpoints, allowing clients to create, list, and get the status of tasks.
 # @LAYER: UI (API)
 # @RELATION: Depends on the TaskManager. It is included by the main app.
 from typing import List, Dict, Any, Optional
-from fastapi import APIRouter, Depends, HTTPException, status
+from fastapi import APIRouter, Depends, HTTPException, status, Query
 from pydantic import BaseModel
 from ...core.logger import belief_scope
 from ...core.task_manager import TaskManager, Task, TaskStatus, LogEntry
-from ...dependencies import get_task_manager, has_permission
+from ...core.task_manager.models import LogFilter, LogStats
+from ...dependencies import get_task_manager, has_permission, get_current_user
 router = APIRouter()
@@ -34,13 +36,30 @@ class ResumeTaskRequest(BaseModel):
 async def create_task(
     request: CreateTaskRequest,
     task_manager: TaskManager = Depends(get_task_manager),
-    _ = Depends(lambda req: has_permission(f"plugin:{req.plugin_id}", "EXECUTE"))
+    current_user = Depends(get_current_user)
 ):
+    # Dynamic permission check based on plugin_id
+    has_permission(f"plugin:{request.plugin_id}", "EXECUTE")(current_user)
     """
     Create and start a new task for a given plugin.
     """
     with belief_scope("create_task"):
         try:
+            # Special handling for validation task to include provider config
+            if request.plugin_id == "llm_dashboard_validation":
+                from ...core.database import SessionLocal
+                from ...services.llm_provider import LLMProviderService
+                db = SessionLocal()
+                try:
+                    llm_service = LLMProviderService(db)
+                    provider_id = request.params.get("provider_id")
+                    if provider_id:
+                        db_provider = llm_service.get_provider(provider_id)
+                        if not db_provider:
+                            raise ValueError(f"LLM Provider {provider_id} not found")
+                finally:
+                    db.close()
             task = await task_manager.create_task(
                 plugin_id=request.plugin_id,
                 params=request.params
@@ -99,27 +118,93 @@ async def get_task(
 @router.get("/{task_id}/logs", response_model=List[LogEntry])
 # [DEF:get_task_logs:Function]
-# @PURPOSE: Retrieve logs for a specific task.
+# @PURPOSE: Retrieve logs for a specific task with optional filtering.
 # @PARAM: task_id (str) - The unique identifier of the task.
+# @PARAM: level (Optional[str]) - Filter by log level (DEBUG, INFO, WARNING, ERROR).
+# @PARAM: source (Optional[str]) - Filter by source component.
+# @PARAM: search (Optional[str]) - Text search in message.
+# @PARAM: offset (int) - Number of logs to skip.
+# @PARAM: limit (int) - Maximum number of logs to return.
 # @PARAM: task_manager (TaskManager) - The task manager instance.
 # @PRE: task_id must exist.
 # @POST: Returns a list of log entries or raises 404.
 # @RETURN: List[LogEntry] - List of log entries.
+# @TIER: CRITICAL
 async def get_task_logs(
     task_id: str,
+    level: Optional[str] = Query(None, description="Filter by log level (DEBUG, INFO, WARNING, ERROR)"),
+    source: Optional[str] = Query(None, description="Filter by source component"),
+    search: Optional[str] = Query(None, description="Text search in message"),
+    offset: int = Query(0, ge=0, description="Number of logs to skip"),
+    limit: int = Query(100, ge=1, le=1000, description="Maximum number of logs to return"),
     task_manager: TaskManager = Depends(get_task_manager),
     _ = Depends(has_permission("tasks", "READ"))
 ):
     """
-    Retrieve logs for a specific task.
+    Retrieve logs for a specific task with optional filtering.
+    Supports filtering by level, source, and text search.
     """
     with belief_scope("get_task_logs"):
         task = task_manager.get_task(task_id)
         if not task:
             raise HTTPException(status_code=status.HTTP_404_NOT_FOUND, detail="Task not found")
-        return task_manager.get_task_logs(task_id)
+        log_filter = LogFilter(
+            level=level.upper() if level else None,
+            source=source,
+            search=search,
+            offset=offset,
+            limit=limit
+        )
+        return task_manager.get_task_logs(task_id, log_filter)
 # [/DEF:get_task_logs:Function]
+@router.get("/{task_id}/logs/stats", response_model=LogStats)
+# [DEF:get_task_log_stats:Function]
+# @PURPOSE: Get statistics about logs for a task (counts by level and source).
+# @PARAM: task_id (str) - The unique identifier of the task.
+# @PARAM: task_manager (TaskManager) - The task manager instance.
+# @PRE: task_id must exist.
+# @POST: Returns log statistics or raises 404.
+# @RETURN: LogStats - Statistics about task logs.
+async def get_task_log_stats(
+    task_id: str,
+    task_manager: TaskManager = Depends(get_task_manager),
+    _ = Depends(has_permission("tasks", "READ"))
+):
+    """
+    Get statistics about logs for a task (counts by level and source).
+    """
+    with belief_scope("get_task_log_stats"):
+        task = task_manager.get_task(task_id)
+        if not task:
+            raise HTTPException(status_code=status.HTTP_404_NOT_FOUND, detail="Task not found")
+        return task_manager.get_task_log_stats(task_id)
+# [/DEF:get_task_log_stats:Function]
+@router.get("/{task_id}/logs/sources", response_model=List[str])
+# [DEF:get_task_log_sources:Function]
+# @PURPOSE: Get unique sources for a task's logs.
+# @PARAM: task_id (str) - The unique identifier of the task.
+# @PARAM: task_manager (TaskManager) - The task manager instance.
+# @PRE: task_id must exist.
+# @POST: Returns list of unique source names or raises 404.
+# @RETURN: List[str] - Unique source names.
+async def get_task_log_sources(
+    task_id: str,
+    task_manager: TaskManager = Depends(get_task_manager),
+    _ = Depends(has_permission("tasks", "READ"))
+):
+    """
+    Get unique sources for a task's logs.
+    """
+    with belief_scope("get_task_log_sources"):
+        task = task_manager.get_task(task_id)
+        if not task:
+            raise HTTPException(status_code=status.HTTP_404_NOT_FOUND, detail="Task not found")
+        return task_manager.get_task_log_sources(task_id)
+# [/DEF:get_task_log_sources:Function]
 @router.post("/{task_id}/resolve", response_model=Task)
 # [DEF:resolve_task:Function]
 # @PURPOSE: Resolve a task that is awaiting mapping.
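A client of the extended `GET /{task_id}/logs` endpoint just encodes the new filters as query parameters; the server upper-cases `level` before filtering. A minimal stdlib sketch — the helper name is illustrative, not part of the codebase:

```python
from urllib.parse import urlencode

def build_log_query(level=None, source=None, search=None, offset=0, limit=100):
    # Matches the query parameters accepted by GET /{task_id}/logs.
    params = {"offset": offset, "limit": limit}
    if level:
        params["level"] = level.upper()  # the server also upper-cases before filtering
    if source:
        params["source"] = source
    if search:
        params["search"] = search
    return urlencode(params)

build_log_query(level="error", source="plugin")
# → 'offset=0&limit=100&level=ERROR&source=plugin'
```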

View File

@@ -1,27 +1,28 @@
 # [DEF:AppModule:Module]
+# @TIER: CRITICAL
 # @SEMANTICS: app, main, entrypoint, fastapi
 # @PURPOSE: The main entry point for the FastAPI application. It initializes the app, configures CORS, sets up dependencies, includes API routers, and defines the WebSocket endpoint for log streaming.
 # @LAYER: UI (API)
 # @RELATION: Depends on the dependency module and API route modules.
-import sys
+# @INVARIANT: Only one FastAPI app instance exists per process.
+# @INVARIANT: All WebSocket connections must be properly cleaned up on disconnect.
 from pathlib import Path
 # project_root is used for static files mounting
 project_root = Path(__file__).resolve().parent.parent.parent
-from fastapi import FastAPI, WebSocket, WebSocketDisconnect, Depends, Request, HTTPException
+from fastapi import FastAPI, WebSocket, WebSocketDisconnect, Request, HTTPException
 from starlette.middleware.sessions import SessionMiddleware
 from fastapi.middleware.cors import CORSMiddleware
 from fastapi.staticfiles import StaticFiles
 from fastapi.responses import FileResponse
 import asyncio
-import os
 from .dependencies import get_task_manager, get_scheduler_service
+from .core.utils.network import NetworkError
 from .core.logger import logger, belief_scope
-from .api.routes import plugins, tasks, settings, environments, mappings, migration, connections, git, storage, admin
+from .api.routes import plugins, tasks, settings, environments, mappings, migration, connections, git, storage, admin, llm, dashboards, datasets
 from .api import auth
+from .core.database import init_db
 # [DEF:App:Global]
 # @SEMANTICS: app, fastapi, instance
@@ -77,13 +78,34 @@ app.add_middleware(
 # @POST: Logs request and response details.
 # @PARAM: request (Request) - The incoming request object.
 # @PARAM: call_next (Callable) - The next middleware or route handler.
+@app.exception_handler(NetworkError)
+async def network_error_handler(request: Request, exc: NetworkError):
+    with belief_scope("network_error_handler"):
+        logger.error(f"Network error: {exc}")
+        return HTTPException(
+            status_code=503,
+            detail="Environment unavailable. Please check if the Superset instance is running."
+        )
 @app.middleware("http")
 async def log_requests(request: Request, call_next):
-    with belief_scope("log_requests", f"{request.method} {request.url.path}"):
-        logger.info(f"[DEBUG] Incoming request: {request.method} {request.url.path}")
+    # Avoid spamming logs for polling endpoints
+    is_polling = request.url.path.endswith("/api/tasks") and request.method == "GET"
+    if not is_polling:
+        logger.info(f"Incoming request: {request.method} {request.url.path}")
+    try:
         response = await call_next(request)
-        logger.info(f"[DEBUG] Response status: {response.status_code} for {request.url.path}")
+        if not is_polling:
+            logger.info(f"Response status: {response.status_code} for {request.url.path}")
         return response
+    except NetworkError as e:
+        logger.error(f"Network error caught in middleware: {e}")
+        raise HTTPException(
+            status_code=503,
+            detail="Environment unavailable. Please check if the Superset instance is running."
+        )
 # [/DEF:log_requests:Function]
# Include API routes # Include API routes
@@ -97,29 +119,71 @@ app.include_router(environments.router, prefix="/api/environments", tags=["Envir
 app.include_router(mappings.router)
 app.include_router(migration.router)
 app.include_router(git.router)
+app.include_router(llm.router)
 app.include_router(storage.router, prefix="/api/storage", tags=["Storage"])
+app.include_router(dashboards.router, tags=["Dashboards"])
+app.include_router(datasets.router, tags=["Datasets"])
 # [DEF:websocket_endpoint:Function]
-# @PURPOSE: Provides a WebSocket endpoint for real-time log streaming of a task.
+# @PURPOSE: Provides a WebSocket endpoint for real-time log streaming of a task with server-side filtering.
 # @PRE: task_id must be a valid task ID.
 # @POST: WebSocket connection is managed and logs are streamed until disconnect.
+# @TIER: CRITICAL
+# @UX_STATE: Connecting -> Streaming -> (Disconnected)
 @app.websocket("/ws/logs/{task_id}")
-async def websocket_endpoint(websocket: WebSocket, task_id: str):
+async def websocket_endpoint(
+    websocket: WebSocket,
+    task_id: str,
+    source: str = None,
+    level: str = None
+):
+    """
+    WebSocket endpoint for real-time log streaming with optional server-side filtering.
+    Query Parameters:
+        source: Filter logs by source component (e.g., "plugin", "superset_api")
+        level: Filter logs by minimum level (DEBUG, INFO, WARNING, ERROR)
+    """
     with belief_scope("websocket_endpoint", f"task_id={task_id}"):
         await websocket.accept()
-        logger.info(f"WebSocket connection accepted for task {task_id}")
+        # Normalize filter parameters
+        source_filter = source.lower() if source else None
+        level_filter = level.upper() if level else None
+        # Level hierarchy for filtering
+        level_hierarchy = {"DEBUG": 0, "INFO": 1, "WARNING": 2, "ERROR": 3}
+        min_level = level_hierarchy.get(level_filter, 0) if level_filter else 0
+        logger.info(f"WebSocket connection accepted for task {task_id} (source={source_filter}, level={level_filter})")
         task_manager = get_task_manager()
         queue = await task_manager.subscribe_logs(task_id)
+        def matches_filters(log_entry) -> bool:
+            """Check if log entry matches the filter criteria."""
+            # Check source filter
+            if source_filter and log_entry.source.lower() != source_filter:
+                return False
+            # Check level filter
+            if level_filter:
+                log_level = level_hierarchy.get(log_entry.level.upper(), 0)
+                if log_level < min_level:
+                    return False
+            return True
         try:
             # Stream new logs
             logger.info(f"Starting log stream for task {task_id}")
-            # Send initial logs first to build context
+            # Send initial logs first to build context (apply filters)
             initial_logs = task_manager.get_task_logs(task_id)
             for log_entry in initial_logs:
-                log_dict = log_entry.dict()
-                log_dict['timestamp'] = log_dict['timestamp'].isoformat()
-                await websocket.send_json(log_dict)
+                if matches_filters(log_entry):
+                    log_dict = log_entry.dict()
+                    log_dict['timestamp'] = log_dict['timestamp'].isoformat()
+                    await websocket.send_json(log_dict)
             # Force a check for AWAITING_INPUT status immediately upon connection
             # This ensures that if the task is already waiting when the user connects, they get the prompt.
@@ -137,6 +201,11 @@ async def websocket_endpoint(websocket: WebSocket, task_id: str):
             while True:
                 log_entry = await queue.get()
+                # Apply server-side filtering
+                if not matches_filters(log_entry):
+                    continue
                 log_dict = log_entry.dict()
                 log_dict['timestamp'] = log_dict['timestamp'].isoformat()
                 await websocket.send_json(log_dict)

View File

@@ -10,7 +10,6 @@
 # [SECTION: IMPORTS]
 from pydantic import Field
 from pydantic_settings import BaseSettings
-import os
 # [/SECTION]
 # [DEF:AuthConfig:Class]
@@ -21,7 +20,7 @@ class AuthConfig(BaseSettings):
     # JWT Settings
     SECRET_KEY: str = Field(default="super-secret-key-change-in-production", env="AUTH_SECRET_KEY")
     ALGORITHM: str = "HS256"
-    ACCESS_TOKEN_EXPIRE_MINUTES: int = 30
+    ACCESS_TOKEN_EXPIRE_MINUTES: int = 480
     REFRESH_TOKEN_EXPIRE_DAYS: int = 7
     # Database Settings

View File

@@ -1,5 +1,6 @@
 # [DEF:backend.src.core.auth.jwt:Module]
 #
+# @TIER: STANDARD
 # @SEMANTICS: jwt, token, session, auth
 # @PURPOSE: JWT token generation and validation logic.
 # @LAYER: Core
@@ -10,8 +11,8 @@
 # [SECTION: IMPORTS]
 from datetime import datetime, timedelta
-from typing import Optional, List
+from typing import Optional
-from jose import JWTError, jwt
+from jose import jwt
 from .config import auth_config
 from ..logger import belief_scope
 # [/SECTION]

View File

@@ -1,5 +1,6 @@
 # [DEF:backend.src.core.auth.logger:Module]
 #
+# @TIER: STANDARD
 # @SEMANTICS: auth, logger, audit, security
 # @PURPOSE: Audit logging for security-related events.
 # @LAYER: Core

View File

@@ -11,7 +11,7 @@
 # [SECTION: IMPORTS]
 from typing import Optional, List
 from sqlalchemy.orm import Session
-from ...models.auth import User, Role, Permission, ADGroupMapping
+from ...models.auth import User, Role, Permission
 from ..logger import belief_scope
 # [/SECTION]

View File

@@ -15,7 +15,7 @@ import json
import os import os
from pathlib import Path from pathlib import Path
from typing import Optional, List from typing import Optional, List
from .config_models import AppConfig, Environment, GlobalSettings from .config_models import AppConfig, Environment, GlobalSettings, StorageConfig
from .logger import logger, configure_logger, belief_scope from .logger import logger, configure_logger, belief_scope
# [/SECTION] # [/SECTION]
@@ -46,7 +46,7 @@ class ConfigManager:
         # 3. Runtime check of @POST
         assert isinstance(self.config, AppConfig), "self.config must be an instance of AppConfig"
-        logger.info(f"[ConfigManager][Exit] Initialized")
+        logger.info("[ConfigManager][Exit] Initialized")
     # [/DEF:__init__:Function]
@@ -59,7 +59,7 @@ class ConfigManager:
         logger.debug(f"[_load_config][Entry] Loading from {self.config_path}")
         if not self.config_path.exists():
-            logger.info(f"[_load_config][Action] Config file not found. Creating default.")
+            logger.info("[_load_config][Action] Config file not found. Creating default.")
             default_config = AppConfig(
                 environments=[],
                 settings=GlobalSettings()
@@ -75,7 +75,7 @@ class ConfigManager:
                 del data["settings"]["backup_path"]
             config = AppConfig(**data)
-            logger.info(f"[_load_config][Coherence:OK] Configuration loaded")
+            logger.info("[_load_config][Coherence:OK] Configuration loaded")
             return config
         except Exception as e:
             logger.error(f"[_load_config][Coherence:Failed] Error loading config: {e}")
@@ -103,7 +103,7 @@ class ConfigManager:
         try:
             with open(self.config_path, "w") as f:
                 json.dump(config.dict(), f, indent=4)
-            logger.info(f"[_save_config_to_disk][Action] Configuration saved")
+            logger.info("[_save_config_to_disk][Action] Configuration saved")
         except Exception as e:
             logger.error(f"[_save_config_to_disk][Coherence:Failed] Failed to save: {e}")
     # [/DEF:_save_config_to_disk:Function]
@@ -134,7 +134,7 @@ class ConfigManager:
     # @PARAM: settings (GlobalSettings) - The new global settings.
     def update_global_settings(self, settings: GlobalSettings):
         with belief_scope("update_global_settings"):
-            logger.info(f"[update_global_settings][Entry] Updating settings")
+            logger.info("[update_global_settings][Entry] Updating settings")
             # 1. Runtime check of @PRE
             assert isinstance(settings, GlobalSettings), "settings must be an instance of GlobalSettings"
@@ -146,7 +146,7 @@ class ConfigManager:
# Reconfigure logger with new settings # Reconfigure logger with new settings
configure_logger(settings.logging) configure_logger(settings.logging)
logger.info(f"[update_global_settings][Exit] Settings updated") logger.info("[update_global_settings][Exit] Settings updated")
# [/DEF:update_global_settings:Function] # [/DEF:update_global_settings:Function]
# [DEF:validate_path:Function] # [DEF:validate_path:Function]
@@ -222,7 +222,7 @@ class ConfigManager:
        self.config.environments.append(env)
        self.save()
-        logger.info(f"[add_environment][Exit] Environment added")
+        logger.info("[add_environment][Exit] Environment added")
    # [/DEF:add_environment:Function]

    # [DEF:update_environment:Function]

View File

@@ -1,4 +1,5 @@
# [DEF:ConfigModels:Module]
+# @TIER: STANDARD
# @SEMANTICS: config, models, pydantic
# @PURPOSE: Defines the data models for application configuration using Pydantic.
# @LAYER: Core
@@ -34,6 +35,7 @@ class Environment(BaseModel):
# @PURPOSE: Defines the configuration for the application's logging system.
class LoggingConfig(BaseModel):
    level: str = "INFO"
+    task_log_level: str = "INFO"  # Minimum level for task-specific logs (DEBUG, INFO, WARNING, ERROR)
    file_path: Optional[str] = "logs/app.log"
    max_bytes: int = 10 * 1024 * 1024
    backup_count: int = 5
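Because the new `task_log_level` field ships with a default, configs written before this change still validate. A minimal stand-in using a dataclass (the real model subclasses Pydantic's `BaseModel`; this sketch only mirrors field names and defaults):

```python
from dataclasses import dataclass
from typing import Optional

# Minimal stand-in for the Pydantic model above; validation is simplified.
@dataclass
class LoggingConfig:
    level: str = "INFO"
    # Minimum level for task-specific logs (DEBUG, INFO, WARNING, ERROR)
    task_log_level: str = "INFO"
    file_path: Optional[str] = "logs/app.log"
    max_bytes: int = 10 * 1024 * 1024
    backup_count: int = 5

# Existing configs that omit task_log_level keep working: the new field
# has a default, so this is a backward-compatible schema change.
cfg = LoggingConfig(level="DEBUG")
```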

View File

@@ -11,31 +11,39 @@
# [SECTION: IMPORTS]
from sqlalchemy import create_engine
-from sqlalchemy.orm import sessionmaker, Session
+from sqlalchemy.orm import sessionmaker
from ..models.mapping import Base
# Import models to ensure they're registered with Base
+from ..models.task import TaskRecord
+from ..models.connection import ConnectionConfig
+from ..models.git import GitServerConfig, GitRepository, DeploymentEnvironment
+from ..models.auth import User, Role, Permission, ADGroupMapping
from .logger import belief_scope
from .auth.config import auth_config
import os
+from pathlib import Path
# [/SECTION]

+# [DEF:BASE_DIR:Variable]
+# @PURPOSE: Base directory for the backend (where .db files should reside).
+BASE_DIR = Path(__file__).resolve().parent.parent.parent
+# [/DEF:BASE_DIR:Variable]
+
# [DEF:DATABASE_URL:Constant]
# @PURPOSE: URL for the main mappings database.
-DATABASE_URL = os.getenv("DATABASE_URL", "sqlite:///./mappings.db")
+DATABASE_URL = os.getenv("DATABASE_URL", f"sqlite:///{BASE_DIR}/mappings.db")
# [/DEF:DATABASE_URL:Constant]

# [DEF:TASKS_DATABASE_URL:Constant]
# @PURPOSE: URL for the tasks execution database.
-TASKS_DATABASE_URL = os.getenv("TASKS_DATABASE_URL", "sqlite:///./tasks.db")
+TASKS_DATABASE_URL = os.getenv("TASKS_DATABASE_URL", f"sqlite:///{BASE_DIR}/tasks.db")
# [/DEF:TASKS_DATABASE_URL:Constant]

# [DEF:AUTH_DATABASE_URL:Constant]
# @PURPOSE: URL for the authentication database.
-AUTH_DATABASE_URL = auth_config.AUTH_DATABASE_URL
+AUTH_DATABASE_URL = os.getenv("AUTH_DATABASE_URL", auth_config.AUTH_DATABASE_URL)
+# If it's a relative sqlite path starting with ./backend/, fix it to be absolute or relative to BASE_DIR
+if AUTH_DATABASE_URL.startswith("sqlite:///./backend/"):
+    AUTH_DATABASE_URL = AUTH_DATABASE_URL.replace("sqlite:///./backend/", f"sqlite:///{BASE_DIR}/")
+elif AUTH_DATABASE_URL.startswith("sqlite:///./") and not AUTH_DATABASE_URL.startswith("sqlite:///./backend/"):
+    # If it's just ./ but we are in backend, it's fine, but let's make it absolute for robustness
+    AUTH_DATABASE_URL = AUTH_DATABASE_URL.replace("sqlite:///./", f"sqlite:///{BASE_DIR}/")
# [/DEF:AUTH_DATABASE_URL:Constant]

# [DEF:engine:Variable]
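The URL fix-up above can be read as a small normalization helper: relative sqlite URLs are rewritten so the `.db` file always resolves under the backend directory, regardless of the process working directory. A self-contained sketch (`normalize_sqlite_url` and the sample base path are illustrative, not part of the codebase):

```python
from pathlib import Path

# Stands in for BASE_DIR = Path(__file__).resolve().parent.parent.parent;
# a fixed path keeps the sketch self-contained.
base_dir = Path("/srv/app/backend")

def normalize_sqlite_url(url: str, base: Path) -> str:
    """Rewrite relative sqlite URLs so the .db file lives under base."""
    if url.startswith("sqlite:///./backend/"):
        return url.replace("sqlite:///./backend/", f"sqlite:///{base}/")
    if url.startswith("sqlite:///./"):
        # Make plain relative paths absolute for robustness.
        return url.replace("sqlite:///./", f"sqlite:///{base}/")
    return url  # already absolute (or not sqlite): leave untouched
```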

View File

@@ -19,6 +19,9 @@ _belief_state = threading.local()
# Global flag for belief state logging
_enable_belief_state = True

+# Global task log level filter
+_task_log_level = "INFO"
+
# [DEF:BeliefFormatter:Class]
# @PURPOSE: Custom logging formatter that adds belief state prefixes to log messages.
class BeliefFormatter(logging.Formatter):
@@ -58,12 +61,12 @@ class LogEntry(BaseModel):
# @SEMANTICS: logging, context, belief_state
@contextmanager
def belief_scope(anchor_id: str, message: str = ""):
-    # Log Entry if enabled
+    # Log Entry if enabled (DEBUG level to reduce noise)
    if _enable_belief_state:
        entry_msg = f"[{anchor_id}][Entry]"
        if message:
            entry_msg += f" {message}"
-        logger.info(entry_msg)
+        logger.debug(entry_msg)

    # Set thread-local anchor_id
    old_anchor = getattr(_belief_state, 'anchor_id', None)
@@ -71,13 +74,13 @@ def belief_scope(anchor_id: str, message: str = ""):
    try:
        yield
-        # Log Coherence OK and Exit
-        logger.info(f"[{anchor_id}][Coherence:OK]")
+        # Log Coherence OK and Exit (DEBUG level to reduce noise)
+        logger.debug(f"[{anchor_id}][Coherence:OK]")
        if _enable_belief_state:
-            logger.info(f"[{anchor_id}][Exit]")
+            logger.debug(f"[{anchor_id}][Exit]")
    except Exception as e:
-        # Log Coherence Failed
-        logger.info(f"[{anchor_id}][Coherence:Failed] {str(e)}")
+        # Log Coherence Failed (DEBUG level to reduce noise)
+        logger.debug(f"[{anchor_id}][Coherence:Failed] {str(e)}")
        raise
    finally:
        # Restore old anchor
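The Entry/Coherence/Exit pattern above is an ordinary context manager; demoting its routine traces to DEBUG keeps INFO output focused on real events, while exceptions still propagate to the caller. A stripped-down sketch (`scoped` and the `events` capture list are illustrative, standing in for `belief_scope` and the real logger output):

```python
import logging
from contextlib import contextmanager

logging.basicConfig(level=logging.DEBUG)
log = logging.getLogger("belief_demo")  # illustrative name
events = []  # captured here for demonstration; the real code only logs

@contextmanager
def scoped(anchor_id: str):
    # Entry/Exit at DEBUG so routine traces stay out of INFO logs,
    # mirroring the change above.
    events.append(f"[{anchor_id}][Entry]")
    log.debug("[%s][Entry]", anchor_id)
    try:
        yield
        events.append(f"[{anchor_id}][Coherence:OK]")
        log.debug("[%s][Coherence:OK]", anchor_id)
    except Exception as e:
        events.append(f"[{anchor_id}][Coherence:Failed] {e}")
        log.debug("[%s][Coherence:Failed] %s", anchor_id, e)
        raise  # failure is still propagated to the caller

with scoped("demo"):
    pass
```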
@@ -88,12 +91,13 @@ def belief_scope(anchor_id: str, message: str = ""):
# [DEF:configure_logger:Function]
# @PURPOSE: Configures the logger with the provided logging settings.
# @PRE: config is a valid LoggingConfig instance.
-# @POST: Logger level, handlers, and belief state flag are updated.
+# @POST: Logger level, handlers, belief state flag, and task log level are updated.
# @PARAM: config (LoggingConfig) - The logging configuration.
# @SEMANTICS: logging, configuration, initialization
def configure_logger(config):
-    global _enable_belief_state
+    global _enable_belief_state, _task_log_level
    _enable_belief_state = config.enable_belief_state
+    _task_log_level = config.task_log_level.upper()

    # Set logger level
    level = getattr(logging, config.level.upper(), logging.INFO)
@@ -107,7 +111,6 @@ def configure_logger(config):
    # Add file handler if file_path is set
    if config.file_path:
-        import os
        from pathlib import Path
        log_file = Path(config.file_path)
        log_file.parent.mkdir(parents=True, exist_ok=True)
@@ -130,6 +133,36 @@ def configure_logger(config):
    ))
# [/DEF:configure_logger:Function]

+# [DEF:get_task_log_level:Function]
+# @PURPOSE: Returns the current task log level filter.
+# @PRE: None.
+# @POST: Returns the task log level string.
+# @RETURN: str - The current task log level (DEBUG, INFO, WARNING, ERROR).
+# @SEMANTICS: logging, configuration, getter
+def get_task_log_level() -> str:
+    """Returns the current task log level filter."""
+    return _task_log_level
+# [/DEF:get_task_log_level:Function]
+
+# [DEF:should_log_task_level:Function]
+# @PURPOSE: Checks if a log level should be recorded based on task_log_level setting.
+# @PRE: level is a valid log level string.
+# @POST: Returns True if level meets or exceeds task_log_level threshold.
+# @PARAM: level (str) - The log level to check.
+# @RETURN: bool - True if the level should be logged.
+# @SEMANTICS: logging, filter, level
+def should_log_task_level(level: str) -> bool:
+    """Checks if a log level should be recorded based on task_log_level setting."""
+    level_order = {"DEBUG": 0, "INFO": 1, "WARNING": 2, "ERROR": 3}
+    current_level = _task_log_level.upper()
+    check_level = level.upper()
+    current_order = level_order.get(current_level, 1)  # Default to INFO
+    check_order = level_order.get(check_level, 1)
+    return check_order >= current_order
+# [/DEF:should_log_task_level:Function]
+
# [DEF:WebSocketLogHandler:Class]
# @SEMANTICS: logging, handler, websocket, buffer
# @PURPOSE: A custom logging handler that captures log records into a buffer. It is designed to be extended for real-time log streaming over WebSockets.
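Given the ordering table above, the filter keeps only records at or above the configured threshold, with unknown levels falling back to INFO. A self-contained restatement of that logic, with the threshold fixed at WARNING for illustration:

```python
# Same ordering as should_log_task_level above.
LEVEL_ORDER = {"DEBUG": 0, "INFO": 1, "WARNING": 2, "ERROR": 3}

def should_log(level: str, threshold: str = "WARNING") -> bool:
    # Unknown levels fall back to INFO (order 1), matching the
    # implementation above.
    return LEVEL_ORDER.get(level.upper(), 1) >= LEVEL_ORDER.get(threshold.upper(), 1)

kept = [lvl for lvl in ["DEBUG", "INFO", "WARNING", "ERROR"] if should_log(lvl)]
```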

View File

@@ -11,12 +11,10 @@
import zipfile
import yaml
import os
-import shutil
import tempfile
from pathlib import Path
from typing import Dict
from .logger import logger, belief_scope
-import yaml
# [/SECTION]

# [DEF:MigrationEngine:Class]

View File

@@ -1,12 +1,12 @@
import importlib.util
import os
import sys  # Added this line
-from typing import Dict, Type, List, Optional
+from typing import Dict, List, Optional
from .plugin_base import PluginBase, PluginConfig
-from jsonschema import validate
from .logger import belief_scope

# [DEF:PluginLoader:Class]
+# @TIER: STANDARD
# @SEMANTICS: plugin, loader, dynamic, import
# @PURPOSE: Scans a specified directory for Python modules, dynamically loads them, and registers any classes that are valid implementations of the PluginBase interface.
# @LAYER: Core
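The loader's core mechanism is `importlib.util`: build a spec from a file path, materialize a module from it, and execute it. A minimal sketch that loads a throwaway module from a temp directory (the real loader scans a plugins directory and filters for `PluginBase` subclasses; names here are illustrative):

```python
import importlib.util
import sys
import tempfile
from pathlib import Path

# Write a throwaway "plugin" module to disk so the sketch is self-contained.
plugin_dir = Path(tempfile.mkdtemp())
(plugin_dir / "hello_plugin.py").write_text("NAME = 'hello'\n")

def load_module(path: Path):
    spec = importlib.util.spec_from_file_location(path.stem, path)
    module = importlib.util.module_from_spec(spec)
    # Registering in sys.modules first lets the module import itself safely.
    sys.modules[path.stem] = module
    spec.loader.exec_module(module)
    return module

mod = load_module(plugin_dir / "hello_plugin.py")
```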

View File

@@ -1,4 +1,5 @@
# [DEF:SchedulerModule:Module]
+# @TIER: STANDARD
# @SEMANTICS: scheduler, apscheduler, cron, backup
# @PURPOSE: Manages scheduled tasks using APScheduler.
# @LAYER: Core
@@ -9,11 +10,11 @@ from apscheduler.schedulers.background import BackgroundScheduler
from apscheduler.triggers.cron import CronTrigger
from .logger import logger, belief_scope
from .config_manager import ConfigManager
-from typing import Optional
import asyncio
# [/SECTION]

# [DEF:SchedulerService:Class]
+# @TIER: STANDARD
# @SEMANTICS: scheduler, service, apscheduler
# @PURPOSE: Provides a service to manage scheduled backup tasks.
class SchedulerService:

View File

@@ -13,10 +13,10 @@
import json
import zipfile
from pathlib import Path
-from typing import Any, Dict, List, Optional, Tuple, Union, cast
+from typing import Dict, List, Optional, Tuple, Union, cast
from requests import Response
from .logger import logger as app_logger, belief_scope
-from .utils.network import APIClient, SupersetAPIError, AuthenticationError, DashboardNotFoundError, NetworkError
+from .utils.network import APIClient, SupersetAPIError
from .utils.fileio import get_filename_from_headers
from .config_models import Environment
# [/SECTION]
@@ -212,6 +212,30 @@ class SupersetClient:
        return total_count, paginated_data
    # [/DEF:get_datasets:Function]

+    # [DEF:get_datasets_summary:Function]
+    # @PURPOSE: Fetches dataset metadata optimized for the Dataset Hub grid.
+    # @PRE: Client is authenticated.
+    # @POST: Returns a list of dataset metadata summaries.
+    # @RETURN: List[Dict]
+    def get_datasets_summary(self) -> List[Dict]:
+        with belief_scope("SupersetClient.get_datasets_summary"):
+            query = {
+                "columns": ["id", "table_name", "schema", "database"]
+            }
+            _, datasets = self.get_datasets(query=query)
+
+            # Map fields to match the contracts
+            result = []
+            for ds in datasets:
+                result.append({
+                    "id": ds.get("id"),
+                    "table_name": ds.get("table_name"),
+                    "schema": ds.get("schema"),
+                    "database": ds.get("database", {}).get("database_name", "Unknown")
+                })
+            return result
+    # [/DEF:get_datasets_summary:Function]
+
    # [DEF:get_dataset:Function]
    # @PURPOSE: Retrieves information about a specific dataset by its ID.
    # @PARAM: dataset_id (int) - The dataset ID.
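The summary mapping leans on `dict.get` with defaults so a partial `database` object degrades to `"Unknown"` instead of raising. A self-contained sketch with sample rows shaped like the dataset list response the client consumes (values are illustrative):

```python
from typing import Dict, List

# Sample rows shaped like the dataset list response; values are illustrative.
raw = [
    {"id": 1, "table_name": "sales", "schema": "public",
     "database": {"database_name": "analytics"}},
    {"id": 2, "table_name": "users", "schema": "public", "database": {}},
]

def summarize(datasets: List[Dict]) -> List[Dict]:
    # .get with defaults keeps a missing or partial "database" object from
    # raising, mirroring the fallback to "Unknown" above.
    return [
        {
            "id": ds.get("id"),
            "table_name": ds.get("table_name"),
            "schema": ds.get("schema"),
            "database": ds.get("database", {}).get("database_name", "Unknown"),
        }
        for ds in datasets
    ]

summary = summarize(raw)
```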

View File

@@ -1,4 +1,5 @@
# [DEF:TaskManagerPackage:Module]
+# @TIER: TRIVIAL
# @SEMANTICS: task, manager, package, exports
# @PURPOSE: Exports the public API of the task manager package.
# @LAYER: Core

View File

@@ -1,47 +1,75 @@
# [DEF:TaskCleanupModule:Module]
-# @SEMANTICS: task, cleanup, retention
-# @PURPOSE: Implements task cleanup and retention policies.
+# @TIER: STANDARD
+# @SEMANTICS: task, cleanup, retention, logs
+# @PURPOSE: Implements task cleanup and retention policies, including associated logs.
# @LAYER: Core
-# @RELATION: Uses TaskPersistenceService to delete old tasks.
+# @RELATION: Uses TaskPersistenceService and TaskLogPersistenceService to delete old tasks and logs.
-from datetime import datetime, timedelta
-from .persistence import TaskPersistenceService
+from typing import List
+from .persistence import TaskPersistenceService, TaskLogPersistenceService
from ..logger import logger, belief_scope
from ..config_manager import ConfigManager

# [DEF:TaskCleanupService:Class]
-# @PURPOSE: Provides methods to clean up old task records.
+# @PURPOSE: Provides methods to clean up old task records and their associated logs.
+# @TIER: STANDARD
class TaskCleanupService:
    # [DEF:__init__:Function]
    # @PURPOSE: Initializes the cleanup service with dependencies.
    # @PRE: persistence_service and config_manager are valid.
    # @POST: Cleanup service is ready.
-    def __init__(self, persistence_service: TaskPersistenceService, config_manager: ConfigManager):
+    def __init__(
+        self,
+        persistence_service: TaskPersistenceService,
+        log_persistence_service: TaskLogPersistenceService,
+        config_manager: ConfigManager
+    ):
        self.persistence_service = persistence_service
+        self.log_persistence_service = log_persistence_service
        self.config_manager = config_manager
    # [/DEF:__init__:Function]

    # [DEF:run_cleanup:Function]
-    # @PURPOSE: Deletes tasks older than the configured retention period.
+    # @PURPOSE: Deletes tasks older than the configured retention period and their logs.
    # @PRE: Config manager has valid settings.
-    # @POST: Old tasks are deleted from persistence.
+    # @POST: Old tasks and their logs are deleted from persistence.
    def run_cleanup(self):
        with belief_scope("TaskCleanupService.run_cleanup"):
            settings = self.config_manager.get_config().settings
            retention_days = settings.task_retention_days
-            # This is a simplified implementation.
-            # In a real scenario, we would query IDs of tasks older than retention_days.
-            # For now, we'll log the action.
            logger.info(f"Cleaning up tasks older than {retention_days} days.")

-            # Re-loading tasks to check for limit
+            # Load tasks to check for limit
            tasks = self.persistence_service.load_tasks(limit=1000)
            if len(tasks) > settings.task_retention_limit:
-                to_delete = [t.id for t in tasks[settings.task_retention_limit:]]
+                to_delete: List[str] = [t.id for t in tasks[settings.task_retention_limit:]]
+                # Delete logs first (before task records)
+                self.log_persistence_service.delete_logs_for_tasks(to_delete)
+                # Then delete task records
                self.persistence_service.delete_tasks(to_delete)
-                logger.info(f"Deleted {len(to_delete)} tasks exceeding limit of {settings.task_retention_limit}")
+                logger.info(f"Deleted {len(to_delete)} tasks and their logs exceeding limit of {settings.task_retention_limit}")
    # [/DEF:run_cleanup:Function]

+    # [DEF:delete_task_with_logs:Function]
+    # @PURPOSE: Delete a single task and all its associated logs.
+    # @PRE: task_id is a valid task ID.
+    # @POST: Task and all its logs are deleted.
+    # @PARAM: task_id (str) - The task ID to delete.
+    def delete_task_with_logs(self, task_id: str) -> None:
+        """Delete a single task and all its associated logs."""
+        with belief_scope("TaskCleanupService.delete_task_with_logs", f"task_id={task_id}"):
+            # Delete logs first
+            self.log_persistence_service.delete_logs_for_task(task_id)
+            # Then delete task record
+            self.persistence_service.delete_tasks([task_id])
+            logger.info(f"Deleted task {task_id} and its associated logs")
+    # [/DEF:delete_task_with_logs:Function]
# [/DEF:TaskCleanupService:Class]
# [/DEF:TaskCleanupModule:Module]
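The retention rule above is just list slicing: with tasks ordered as `load_tasks` returns them, everything past `task_retention_limit` is trimmed, and logs are deleted before the task rows so no orphaned log rows survive. A sketch of the id selection (task shape and ids are illustrative):

```python
# Tasks ordered as load_tasks returns them; ids are illustrative.
tasks = [{"id": f"t{i}"} for i in range(7)]
retention_limit = 5

def ids_to_delete(tasks, limit):
    # Everything past the limit is trimmed. The caller deletes logs for
    # these ids first, then the task rows, so no orphaned logs remain.
    return [t["id"] for t in tasks[limit:]]

doomed = ids_to_delete(tasks, retention_limit)
```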

View File

@@ -0,0 +1,115 @@
# [DEF:TaskContextModule:Module]
# @SEMANTICS: task, context, plugin, execution, logger
# @PURPOSE: Provides execution context passed to plugins during task execution.
# @LAYER: Core
# @RELATION: DEPENDS_ON -> TaskLogger, USED_BY -> plugins
# @TIER: CRITICAL
# @INVARIANT: Each TaskContext is bound to a single task execution.

# [SECTION: IMPORTS]
from typing import Dict, Any, Callable
from .task_logger import TaskLogger
# [/SECTION]

# [DEF:TaskContext:Class]
# @SEMANTICS: context, task, execution, plugin
# @PURPOSE: A container passed to plugin.execute() providing the logger and other task-specific utilities.
# @TIER: CRITICAL
# @INVARIANT: logger is always a valid TaskLogger instance.
# @UX_STATE: Idle -> Active -> Complete
class TaskContext:
    """
    Execution context provided to plugins during task execution.

    Usage:
        def execute(params: dict, context: TaskContext = None):
            if context:
                context.logger.info("Starting process")
                context.logger.progress("Processing items", percent=50)
            # ... plugin logic
    """

    # [DEF:__init__:Function]
    # @PURPOSE: Initialize the TaskContext with task-specific resources.
    # @PRE: task_id is a valid task identifier, add_log_fn is callable.
    # @POST: TaskContext is ready to be passed to plugin.execute().
    # @PARAM: task_id (str) - The ID of the task.
    # @PARAM: add_log_fn (Callable) - Function to add log to TaskManager.
    # @PARAM: params (Dict) - Task parameters.
    # @PARAM: default_source (str) - Default source for logs (default: "plugin").
    def __init__(
        self,
        task_id: str,
        add_log_fn: Callable,
        params: Dict[str, Any],
        default_source: str = "plugin"
    ):
        self._task_id = task_id
        self._params = params
        self._logger = TaskLogger(
            task_id=task_id,
            add_log_fn=add_log_fn,
            source=default_source
        )
    # [/DEF:__init__:Function]

    # [DEF:task_id:Function]
    # @PURPOSE: Get the task ID.
    # @PRE: TaskContext must be initialized.
    # @POST: Returns the task ID string.
    # @RETURN: str - The task ID.
    @property
    def task_id(self) -> str:
        return self._task_id
    # [/DEF:task_id:Function]

    # [DEF:logger:Function]
    # @PURPOSE: Get the TaskLogger instance for this context.
    # @PRE: TaskContext must be initialized.
    # @POST: Returns the TaskLogger instance.
    # @RETURN: TaskLogger - The logger instance.
    @property
    def logger(self) -> TaskLogger:
        return self._logger
    # [/DEF:logger:Function]

    # [DEF:params:Function]
    # @PURPOSE: Get the task parameters.
    # @PRE: TaskContext must be initialized.
    # @POST: Returns the parameters dictionary.
    # @RETURN: Dict[str, Any] - The task parameters.
    @property
    def params(self) -> Dict[str, Any]:
        return self._params
    # [/DEF:params:Function]

    # [DEF:get_param:Function]
    # @PURPOSE: Get a specific parameter value with optional default.
    # @PRE: TaskContext must be initialized.
    # @POST: Returns parameter value or default.
    # @PARAM: key (str) - Parameter key.
    # @PARAM: default (Any) - Default value if key not found.
    # @RETURN: Any - Parameter value or default.
    def get_param(self, key: str, default: Any = None) -> Any:
        return self._params.get(key, default)
    # [/DEF:get_param:Function]

    # [DEF:create_sub_context:Function]
    # @PURPOSE: Create a sub-context with a different default source.
    # @PRE: source is a non-empty string.
    # @POST: Returns new TaskContext with different logger source.
    # @PARAM: source (str) - New default source for logging.
    # @RETURN: TaskContext - New context with different source.
    def create_sub_context(self, source: str) -> "TaskContext":
        """Create a sub-context with a different default source for logging."""
        return TaskContext(
            task_id=self._task_id,
            add_log_fn=self._logger._add_log,
            params=self._params,
            default_source=source
        )
    # [/DEF:create_sub_context:Function]
# [/DEF:TaskContext:Class]
# [/DEF:TaskContextModule:Module]
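From a plugin author's point of view, the context is a logger plus parameter access, with `create_sub_context` re-tagging the log source. A minimal stand-in showing that call flow (these `Mini*` classes and the `captured` sink only mimic `TaskContext`/`TaskLogger` for illustration; they are not the real implementations):

```python
from typing import List

captured: List[tuple] = []

def add_log(task_id: str, level: str, message: str, source: str = "plugin", **kw):
    # Stands in for TaskManager._add_log; just records the call here.
    captured.append((task_id, level, source, message))

class MiniLogger:
    def __init__(self, task_id, add_log_fn, source):
        self._task_id, self._add_log, self._source = task_id, add_log_fn, source
    def info(self, message: str):
        self._add_log(self._task_id, "INFO", message, source=self._source)

class MiniContext:
    def __init__(self, task_id, add_log_fn, params, default_source="plugin"):
        self._task_id, self._params = task_id, params
        self._logger = MiniLogger(task_id, add_log_fn, default_source)
    @property
    def logger(self):
        return self._logger
    def get_param(self, key, default=None):
        return self._params.get(key, default)
    def create_sub_context(self, source):
        # Same task and params, but logs carry a different source tag.
        return MiniContext(self._task_id, self._logger._add_log, self._params, source)

ctx = MiniContext("task-1", add_log, {"env": "prod"})
ctx.logger.info("Starting process")
ctx.create_sub_context("export").logger.info("Exporting")
```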

View File

@@ -8,23 +8,33 @@
# [SECTION: IMPORTS]
import asyncio
+import threading
+import inspect
+from concurrent.futures import ThreadPoolExecutor
from datetime import datetime
from typing import Dict, Any, List, Optional
-from concurrent.futures import ThreadPoolExecutor
-from .models import Task, TaskStatus, LogEntry
+from .models import Task, TaskStatus, LogEntry, LogFilter, LogStats
-from .persistence import TaskPersistenceService
+from .persistence import TaskPersistenceService, TaskLogPersistenceService
+from .context import TaskContext
-from ..logger import logger, belief_scope
+from ..logger import logger, belief_scope, should_log_task_level
# [/SECTION]

# [DEF:TaskManager:Class]
# @SEMANTICS: task, manager, lifecycle, execution, state
# @PURPOSE: Manages the lifecycle of tasks, including their creation, execution, and state tracking.
+# @TIER: CRITICAL
+# @INVARIANT: Task IDs are unique within the registry.
+# @INVARIANT: Each task has exactly one status at any time.
+# @INVARIANT: Log entries are never deleted after being added to a task.
class TaskManager:
    """
    Manages the lifecycle of tasks, including their creation, execution, and state tracking.
    """

+    # Log flush interval in seconds
+    LOG_FLUSH_INTERVAL = 2.0
+
    # [DEF:__init__:Function]
    # @PURPOSE: Initialize the TaskManager with dependencies.
    # @PRE: plugin_loader is initialized.
@@ -35,8 +45,18 @@ class TaskManager:
        self.plugin_loader = plugin_loader
        self.tasks: Dict[str, Task] = {}
        self.subscribers: Dict[str, List[asyncio.Queue]] = {}
        self.executor = ThreadPoolExecutor(max_workers=5)  # For CPU-bound plugin execution
        self.persistence_service = TaskPersistenceService()
+        self.log_persistence_service = TaskLogPersistenceService()
+
+        # Log buffer: task_id -> List[LogEntry]
+        self._log_buffer: Dict[str, List[LogEntry]] = {}
+        self._log_buffer_lock = threading.Lock()
+
+        # Flusher thread for batch writing logs
+        self._flusher_stop_event = threading.Event()
+        self._flusher_thread = threading.Thread(target=self._flusher_loop, daemon=True)
+        self._flusher_thread.start()

        try:
            self.loop = asyncio.get_running_loop()
@@ -47,6 +67,59 @@ class TaskManager:
        # Load persisted tasks on startup
        self.load_persisted_tasks()
    # [/DEF:__init__:Function]

+    # [DEF:_flusher_loop:Function]
+    # @PURPOSE: Background thread that periodically flushes log buffer to database.
+    # @PRE: TaskManager is initialized.
+    # @POST: Logs are batch-written to database every LOG_FLUSH_INTERVAL seconds.
+    def _flusher_loop(self):
+        """Background thread that flushes log buffer to database."""
+        while not self._flusher_stop_event.is_set():
+            self._flush_logs()
+            self._flusher_stop_event.wait(self.LOG_FLUSH_INTERVAL)
+    # [/DEF:_flusher_loop:Function]
+
+    # [DEF:_flush_logs:Function]
+    # @PURPOSE: Flush all buffered logs to the database.
+    # @PRE: None.
+    # @POST: All buffered logs are written to task_logs table.
+    def _flush_logs(self):
+        """Flush all buffered logs to the database."""
+        with self._log_buffer_lock:
+            task_ids = list(self._log_buffer.keys())
+        for task_id in task_ids:
+            with self._log_buffer_lock:
+                logs = self._log_buffer.pop(task_id, [])
+            if logs:
+                try:
+                    self.log_persistence_service.add_logs(task_id, logs)
+                except Exception as e:
+                    logger.error(f"Failed to flush logs for task {task_id}: {e}")
+                    # Re-add logs to buffer on failure
+                    with self._log_buffer_lock:
+                        if task_id not in self._log_buffer:
+                            self._log_buffer[task_id] = []
+                        self._log_buffer[task_id].extend(logs)
+    # [/DEF:_flush_logs:Function]
+
+    # [DEF:_flush_task_logs:Function]
+    # @PURPOSE: Flush logs for a specific task immediately.
+    # @PRE: task_id exists.
+    # @POST: Task's buffered logs are written to database.
+    # @PARAM: task_id (str) - The task ID.
+    def _flush_task_logs(self, task_id: str):
+        """Flush logs for a specific task immediately."""
+        with self._log_buffer_lock:
+            logs = self._log_buffer.pop(task_id, [])
+        if logs:
+            try:
+                self.log_persistence_service.add_logs(task_id, logs)
+            except Exception as e:
+                logger.error(f"Failed to flush logs for task {task_id}: {e}")
+    # [/DEF:_flush_task_logs:Function]
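The flusher's locking discipline, pop each task's batch while holding the lock but do the write outside it, can be sketched without the database or the background thread (the in-memory `written` dict stands in for `TaskLogPersistenceService`; names are illustrative):

```python
import threading
from collections import defaultdict
from typing import Dict, List

written: Dict[str, List[str]] = defaultdict(list)  # stands in for the DB

buffer: Dict[str, List[str]] = {}
buffer_lock = threading.Lock()

def flush(buf: Dict[str, List[str]]) -> None:
    # Snapshot the keys under the lock, then pop each batch under the lock
    # and write it outside; on a write failure the real code re-queues the
    # batch instead of dropping it.
    with buffer_lock:
        task_ids = list(buf.keys())
    for task_id in task_ids:
        with buffer_lock:
            logs = buf.pop(task_id, [])
        if logs:
            written[task_id].extend(logs)

with buffer_lock:
    buffer.setdefault("t1", []).append("line 1")
    buffer.setdefault("t1", []).append("line 2")
flush(buffer)
```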
    # [DEF:create_task:Function]
    # @PURPOSE: Creates and queues a new task for execution.
@@ -63,7 +136,7 @@ class TaskManager:
                logger.error(f"Plugin with ID '{plugin_id}' not found.")
                raise ValueError(f"Plugin with ID '{plugin_id}' not found.")
-            plugin = self.plugin_loader.get_plugin(plugin_id)
+            self.plugin_loader.get_plugin(plugin_id)
            if not isinstance(params, dict):
                logger.error("Task parameters must be a dictionary.")
# [/DEF:create_task:Function] # [/DEF:create_task:Function]
# [DEF:_run_task:Function] # [DEF:_run_task:Function]
# @PURPOSE: Internal method to execute a task. # @PURPOSE: Internal method to execute a task with TaskContext support.
# @PRE: Task exists in registry. # @PRE: Task exists in registry.
# @POST: Task is executed, status updated to SUCCESS or FAILED. # @POST: Task is executed, status updated to SUCCESS or FAILED.
# @PARAM: task_id (str) - The ID of the task to run.
@@ -91,30 +164,54 @@ class TaskManager:
         task.status = TaskStatus.RUNNING
         task.started_at = datetime.utcnow()
         self.persistence_service.persist_task(task)
-        self._add_log(task_id, "INFO", f"Task started for plugin '{plugin.name}'")
+        self._add_log(task_id, "INFO", f"Task started for plugin '{plugin.name}'", source="system")
         try:
-            # Execute plugin
+            # Prepare params and check if plugin supports new TaskContext
             params = {**task.params, "_task_id": task_id}
-            if asyncio.iscoroutinefunction(plugin.execute):
-                task.result = await plugin.execute(params)
-            else:
-                task.result = await self.loop.run_in_executor(
-                    self.executor,
-                    plugin.execute,
-                    params
-                )
+            # Check if plugin's execute method accepts 'context' parameter
+            sig = inspect.signature(plugin.execute)
+            accepts_context = 'context' in sig.parameters
+
+            if accepts_context:
+                # Create TaskContext for new-style plugins
+                context = TaskContext(
+                    task_id=task_id,
+                    add_log_fn=self._add_log,
+                    params=params,
+                    default_source="plugin"
+                )
+                if asyncio.iscoroutinefunction(plugin.execute):
+                    task.result = await plugin.execute(params, context=context)
+                else:
+                    task.result = await self.loop.run_in_executor(
+                        self.executor,
+                        lambda: plugin.execute(params, context=context)
+                    )
+            else:
+                # Backward compatibility: old-style plugins without context
+                if asyncio.iscoroutinefunction(plugin.execute):
+                    task.result = await plugin.execute(params)
+                else:
+                    task.result = await self.loop.run_in_executor(
+                        self.executor,
+                        plugin.execute,
+                        params
+                    )
             logger.info(f"Task {task_id} completed successfully")
             task.status = TaskStatus.SUCCESS
-            self._add_log(task_id, "INFO", f"Task completed successfully for plugin '{plugin.name}'")
+            self._add_log(task_id, "INFO", f"Task completed successfully for plugin '{plugin.name}'", source="system")
         except Exception as e:
             logger.error(f"Task {task_id} failed: {e}")
             task.status = TaskStatus.FAILED
-            self._add_log(task_id, "ERROR", f"Task failed: {e}", {"error_type": type(e).__name__})
+            self._add_log(task_id, "ERROR", f"Task failed: {e}", source="system", metadata={"error_type": type(e).__name__})
         finally:
             task.finished_at = datetime.utcnow()
+            # Flush any remaining buffered logs before persisting task
+            self._flush_task_logs(task_id)
             self.persistence_service.persist_task(task)
             logger.info(f"Task {task_id} execution finished with status: {task.status}")
# [/DEF:_run_task:Function]
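The hunk above dispatches on whether the plugin's `execute` accepts a `context` keyword, so old plugins keep working unchanged. A minimal, self-contained sketch of that detection (the `run_plugin`, `new_style`, and `old_style` names are illustrative, not from the codebase):

```python
import inspect

def run_plugin(execute_fn, params, context=None):
    # Detect whether the callable accepts a 'context' keyword argument.
    sig = inspect.signature(execute_fn)
    if 'context' in sig.parameters:
        return execute_fn(params, context=context)
    # Backward compatibility: old-style plugins take params only.
    return execute_fn(params)

def new_style(params, context=None):
    return {"params": params, "has_context": context is not None}

def old_style(params):
    return {"params": params, "has_context": False}

print(run_plugin(new_style, {"x": 1}, context="ctx"))  # has_context: True
print(run_plugin(old_style, {"x": 1}, context="ctx"))  # has_context: False
```

Note that `inspect.signature` is evaluated once per task run; callables wrapped in decorators that swallow `**kwargs` would need `functools.wraps` for this check to see the real parameters.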
@@ -151,7 +248,8 @@ class TaskManager:
     async def wait_for_resolution(self, task_id: str):
         with belief_scope("TaskManager.wait_for_resolution", f"task_id={task_id}"):
             task = self.tasks.get(task_id)
-            if not task: return
+            if not task:
+                return
             task.status = TaskStatus.AWAITING_MAPPING
             self.persistence_service.persist_task(task)
@@ -172,7 +270,8 @@ class TaskManager:
     async def wait_for_input(self, task_id: str):
         with belief_scope("TaskManager.wait_for_input", f"task_id={task_id}"):
             task = self.tasks.get(task_id)
-            if not task: return
+            if not task:
+                return
             # Status is already set to AWAITING_INPUT by await_input()
             self.task_futures[task_id] = self.loop.create_future()
@@ -224,36 +323,106 @@ class TaskManager:
 # [/DEF:get_tasks:Function]
 # [DEF:get_task_logs:Function]
-# @PURPOSE: Retrieves logs for a specific task.
+# @PURPOSE: Retrieves logs for a specific task (from memory for running, persistence for completed).
 # @PRE: task_id is a string.
-# @POST: Returns list of LogEntry objects.
+# @POST: Returns list of LogEntry or TaskLog objects.
 # @PARAM: task_id (str) - ID of the task.
+# @PARAM: log_filter (Optional[LogFilter]) - Filter parameters.
 # @RETURN: List[LogEntry] - List of log entries.
-    def get_task_logs(self, task_id: str) -> List[LogEntry]:
+    def get_task_logs(self, task_id: str, log_filter: Optional[LogFilter] = None) -> List[LogEntry]:
         with belief_scope("TaskManager.get_task_logs", f"task_id={task_id}"):
             task = self.tasks.get(task_id)
+            # For completed tasks, fetch from persistence
+            if task and task.status in [TaskStatus.SUCCESS, TaskStatus.FAILED]:
+                if log_filter is None:
+                    log_filter = LogFilter()
+                task_logs = self.log_persistence_service.get_logs(task_id, log_filter)
+                # Convert TaskLog to LogEntry for backward compatibility
+                return [
+                    LogEntry(
+                        timestamp=log.timestamp,
+                        level=log.level,
+                        message=log.message,
+                        source=log.source,
+                        metadata=log.metadata
+                    )
+                    for log in task_logs
+                ]
+            # For running/pending tasks, return from memory
             return task.logs if task else []
 # [/DEF:get_task_logs:Function]
# [DEF:get_task_log_stats:Function]
# @PURPOSE: Get statistics about logs for a task.
# @PRE: task_id is a valid task ID.
# @POST: Returns LogStats with counts by level and source.
# @PARAM: task_id (str) - The task ID.
# @RETURN: LogStats - Statistics about task logs.
def get_task_log_stats(self, task_id: str) -> LogStats:
with belief_scope("TaskManager.get_task_log_stats", f"task_id={task_id}"):
return self.log_persistence_service.get_log_stats(task_id)
# [/DEF:get_task_log_stats:Function]
# [DEF:get_task_log_sources:Function]
# @PURPOSE: Get unique sources for a task's logs.
# @PRE: task_id is a valid task ID.
# @POST: Returns list of unique source strings.
# @PARAM: task_id (str) - The task ID.
# @RETURN: List[str] - Unique source names.
def get_task_log_sources(self, task_id: str) -> List[str]:
with belief_scope("TaskManager.get_task_log_sources", f"task_id={task_id}"):
return self.log_persistence_service.get_sources(task_id)
# [/DEF:get_task_log_sources:Function]
 # [DEF:_add_log:Function]
-# @PURPOSE: Adds a log entry to a task and notifies subscribers.
+# @PURPOSE: Adds a log entry to a task buffer and notifies subscribers.
 # @PRE: Task exists.
-# @POST: Log added to task and pushed to queues.
+# @POST: Log added to buffer and pushed to queues (if level meets task_log_level filter).
 # @PARAM: task_id (str) - ID of the task.
 # @PARAM: level (str) - Log level.
 # @PARAM: message (str) - Log message.
-# @PARAM: context (Optional[Dict]) - Log context.
+# @PARAM: source (str) - Source component (default: "system").
+# @PARAM: metadata (Optional[Dict]) - Additional structured data.
+# @PARAM: context (Optional[Dict]) - Legacy context (for backward compatibility).
-    def _add_log(self, task_id: str, level: str, message: str, context: Optional[Dict[str, Any]] = None):
+    def _add_log(
+        self,
+        task_id: str,
+        level: str,
+        message: str,
+        source: str = "system",
+        metadata: Optional[Dict[str, Any]] = None,
+        context: Optional[Dict[str, Any]] = None
+    ):
         with belief_scope("TaskManager._add_log", f"task_id={task_id}"):
             task = self.tasks.get(task_id)
             if not task:
                 return
-            log_entry = LogEntry(level=level, message=message, context=context)
-            task.logs.append(log_entry)
-            self.persistence_service.persist_task(task)
-            # Notify subscribers
+            # Filter logs based on task_log_level configuration
+            if not should_log_task_level(level):
+                return
+            # Create log entry with new fields
+            log_entry = LogEntry(
+                level=level,
+                message=message,
+                source=source,
+                metadata=metadata,
+                context=context  # Keep for backward compatibility
+            )
+            # Add to in-memory logs (for backward compatibility with legacy JSON field)
+            task.logs.append(log_entry)
+            # Add to buffer for batch persistence
+            with self._log_buffer_lock:
+                if task_id not in self._log_buffer:
+                    self._log_buffer[task_id] = []
+                self._log_buffer[task_id].append(log_entry)
+            # Notify subscribers (for real-time WebSocket updates)
             if task_id in self.subscribers:
                 for queue in self.subscribers[task_id]:
                     self.loop.call_soon_threadsafe(queue.put_nowait, log_entry)
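The buffer-then-flush pattern above (append under a lock, drain atomically before persisting) can be sketched standalone; `LogBuffer` below is a simplified stand-in for the `_log_buffer`/`_flush_task_logs` pair, not the actual implementation:

```python
import threading
from collections import defaultdict

class LogBuffer:
    """Per-task log buffer: appends are cheap; flush drains atomically."""
    def __init__(self):
        self._buf = defaultdict(list)
        self._lock = threading.Lock()

    def add(self, task_id, entry):
        with self._lock:
            self._buf[task_id].append(entry)

    def flush(self, task_id):
        # Drain under the lock so entries added concurrently are never lost
        # between the read and the clear.
        with self._lock:
            return self._buf.pop(task_id, [])

buf = LogBuffer()
buf.add("t1", "started")
buf.add("t1", "done")
print(buf.flush("t1"))  # ['started', 'done']
print(buf.flush("t1"))  # []
```

Holding one lock for both append and drain is what makes the batch insert safe even when plugin code logs from an executor thread.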
@@ -353,7 +522,7 @@ class TaskManager:
 # [/DEF:resume_task_with_password:Function]
 # [DEF:clear_tasks:Function]
-# @PURPOSE: Clears tasks based on status filter.
+# @PURPOSE: Clears tasks based on status filter (also deletes associated logs).
 # @PRE: status is Optional[TaskStatus].
 # @POST: Tasks matching filter (or all non-active) cleared from registry and database.
 # @PARAM: status (Optional[TaskStatus]) - Filter by task status.
@@ -387,9 +556,13 @@ class TaskManager:
                 del self.tasks[tid]
-            # Remove from persistence
+            # Remove from persistence (task_records and task_logs via CASCADE)
             self.persistence_service.delete_tasks(tasks_to_remove)
+            # Also explicitly delete logs (in case CASCADE is not set up)
+            if tasks_to_remove:
+                self.log_persistence_service.delete_logs_for_tasks(tasks_to_remove)
             logger.info(f"Cleared {len(tasks_to_remove)} tasks.")
             return len(tasks_to_remove)
 # [/DEF:clear_tasks:Function]

View File

@@ -1,4 +1,5 @@
 # [DEF:TaskManagerModels:Module]
+# @TIER: STANDARD
 # @SEMANTICS: task, models, pydantic, enum, state
 # @PURPOSE: Defines the data models and enumerations used by the Task Manager.
 # @LAYER: Core
@@ -16,6 +17,7 @@ from pydantic import BaseModel, Field
 # [/SECTION]
 # [DEF:TaskStatus:Enum]
+# @TIER: TRIVIAL
 # @SEMANTICS: task, status, state, enum
 # @PURPOSE: Defines the possible states a task can be in during its lifecycle.
 class TaskStatus(str, Enum):
@@ -27,17 +29,73 @@ class TaskStatus(str, Enum):
     AWAITING_INPUT = "AWAITING_INPUT"
 # [/DEF:TaskStatus:Enum]
# [DEF:LogLevel:Enum]
# @SEMANTICS: log, level, severity, enum
# @PURPOSE: Defines the possible log levels for task logging.
# @TIER: STANDARD
class LogLevel(str, Enum):
DEBUG = "DEBUG"
INFO = "INFO"
WARNING = "WARNING"
ERROR = "ERROR"
# [/DEF:LogLevel:Enum]
 # [DEF:LogEntry:Class]
 # @SEMANTICS: log, entry, record, pydantic
 # @PURPOSE: A Pydantic model representing a single, structured log entry associated with a task.
+# @TIER: CRITICAL
+# @INVARIANT: Each log entry has a unique timestamp and source.
 class LogEntry(BaseModel):
     timestamp: datetime = Field(default_factory=datetime.utcnow)
-    level: str
+    level: str = Field(default="INFO")
     message: str
-    context: Optional[Dict[str, Any]] = None
+    source: str = Field(default="system")  # Component attribution: plugin, superset_api, git, etc.
+    context: Optional[Dict[str, Any]] = None  # Legacy field, kept for backward compatibility
+    metadata: Optional[Dict[str, Any]] = None  # Structured metadata (e.g., dashboard_id, progress)
 # [/DEF:LogEntry:Class]
# [DEF:TaskLog:Class]
# @SEMANTICS: task, log, persistent, pydantic
# @PURPOSE: A Pydantic model representing a persisted log entry from the database.
# @TIER: STANDARD
# @RELATION: MAPS_TO -> TaskLogRecord
class TaskLog(BaseModel):
id: int
task_id: str
timestamp: datetime
level: str
source: str
message: str
metadata: Optional[Dict[str, Any]] = None
class Config:
from_attributes = True
# [/DEF:TaskLog:Class]
# [DEF:LogFilter:Class]
# @SEMANTICS: log, filter, query, pydantic
# @PURPOSE: Filter parameters for querying task logs.
# @TIER: STANDARD
class LogFilter(BaseModel):
level: Optional[str] = None # Filter by log level
source: Optional[str] = None # Filter by source component
search: Optional[str] = None # Text search in message
offset: int = Field(default=0, ge=0)
limit: int = Field(default=100, ge=1, le=1000)
# [/DEF:LogFilter:Class]
# [DEF:LogStats:Class]
# @SEMANTICS: log, stats, aggregation, pydantic
# @PURPOSE: Statistics about log entries for a task.
# @TIER: STANDARD
class LogStats(BaseModel):
total_count: int
by_level: Dict[str, int] # {"INFO": 10, "ERROR": 2}
by_source: Dict[str, int] # {"plugin": 5, "superset_api": 7}
# [/DEF:LogStats:Class]
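`LogStats` carries counts keyed by level and by source, which is exactly what the persistence layer's GROUP BY queries produce. The same aggregation can be sketched in plain Python (the sample entries are illustrative):

```python
from collections import Counter

# Illustrative log entries, shaped like LogEntry (level + source fields).
logs = [
    {"level": "INFO", "source": "plugin"},
    {"level": "INFO", "source": "superset_api"},
    {"level": "ERROR", "source": "plugin"},
]

# Equivalent of the by_level / by_source fields of LogStats.
by_level = dict(Counter(e["level"] for e in logs))
by_source = dict(Counter(e["source"] for e in logs))

print(by_level)   # {'INFO': 2, 'ERROR': 1}
print(by_source)  # {'plugin': 2, 'superset_api': 1}
```

In production the counting is pushed into SQL (see `get_log_stats` below in the diff), so only the aggregated dicts cross the database boundary.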
 # [DEF:Task:Class]
+# @TIER: STANDARD
 # @SEMANTICS: task, job, execution, state, pydantic
 # @PURPOSE: A Pydantic model representing a single execution instance of a plugin, including its status, parameters, and logs.
 class Task(BaseModel):

View File

@@ -7,13 +7,13 @@
 # [SECTION: IMPORTS]
 from datetime import datetime
-from typing import List, Optional, Dict, Any
+from typing import List, Optional
 import json
 from sqlalchemy.orm import Session
-from ...models.task import TaskRecord
+from ...models.task import TaskRecord, TaskLogRecord
 from ..database import TasksSessionLocal
-from .models import Task, TaskStatus, LogEntry
+from .models import Task, TaskStatus, LogEntry, TaskLog, LogFilter, LogStats
 from ..logger import logger, belief_scope
 # [/SECTION]
@@ -36,6 +36,7 @@ class TaskPersistenceService:
 # @PRE: isinstance(task, Task)
 # @POST: Task record created or updated in database.
 # @PARAM: task (Task) - The task object to persist.
+# @SIDE_EFFECT: Writes to task_records table in tasks.db
     def persist_task(self, task: Task) -> None:
         with belief_scope("TaskPersistenceService.persist_task", f"task_id={task.id}"):
             session: Session = TasksSessionLocal()
@@ -50,8 +51,19 @@ class TaskPersistenceService:
             record.environment_id = task.params.get("environment_id") or task.params.get("source_env_id")
             record.started_at = task.started_at
             record.finished_at = task.finished_at
-            record.params = task.params
-            record.result = task.result
+
+            # Ensure params and result are JSON serializable
+            def json_serializable(obj):
+                if isinstance(obj, dict):
+                    return {k: json_serializable(v) for k, v in obj.items()}
+                elif isinstance(obj, list):
+                    return [json_serializable(v) for v in obj]
+                elif isinstance(obj, datetime):
+                    return obj.isoformat()
+                return obj
+
+            record.params = json_serializable(task.params)
+            record.result = json_serializable(task.result)

             # Store logs as JSON, converting datetime to string
             record.logs = []
@@ -59,6 +71,9 @@ class TaskPersistenceService:
                 log_dict = log.dict()
                 if isinstance(log_dict.get('timestamp'), datetime):
                     log_dict['timestamp'] = log_dict['timestamp'].isoformat()
+                # Also clean up any datetimes in context
+                if log_dict.get('context'):
+                    log_dict['context'] = json_serializable(log_dict['context'])
                 record.logs.append(log_dict)

             # Extract error if failed
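The recursive serialization helper introduced in this hunk walks dicts and lists and converts only `datetime` values to ISO strings. Exercised standalone (the sample `params` dict is illustrative):

```python
from datetime import datetime

def json_serializable(obj):
    # Recursively convert datetimes to ISO strings; pass other values through.
    if isinstance(obj, dict):
        return {k: json_serializable(v) for k, v in obj.items()}
    elif isinstance(obj, list):
        return [json_serializable(v) for v in obj]
    elif isinstance(obj, datetime):
        return obj.isoformat()
    return obj

params = {
    "started": datetime(2026, 2, 10, 15, 0),
    "ids": [1, 2],
    "nested": {"t": datetime(2026, 2, 10)},
}
print(json_serializable(params))
# {'started': '2026-02-10T15:00:00', 'ids': [1, 2], 'nested': {'t': '2026-02-10T00:00:00'}}
```

This keeps `record.params`/`record.result` safe for a JSON column without requiring every plugin to pre-serialize its own values.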
@@ -155,4 +170,215 @@
 # [/DEF:delete_tasks:Function]
 # [/DEF:TaskPersistenceService:Class]
# [DEF:TaskLogPersistenceService:Class]
# @SEMANTICS: persistence, service, database, log, sqlalchemy
# @PURPOSE: Provides methods to save and query task logs from the task_logs table.
# @TIER: CRITICAL
# @RELATION: DEPENDS_ON -> TaskLogRecord
# @INVARIANT: Log entries are batch-inserted for performance.
class TaskLogPersistenceService:
"""
Service for persisting and querying task logs.
Supports batch inserts, filtering, and statistics.
"""
# [DEF:__init__:Function]
# @PURPOSE: Initialize the log persistence service.
# @POST: Service is ready.
def __init__(self):
pass
# [/DEF:__init__:Function]
# [DEF:add_logs:Function]
# @PURPOSE: Batch insert log entries for a task.
# @PRE: logs is a list of LogEntry objects.
# @POST: All logs inserted into task_logs table.
# @PARAM: task_id (str) - The task ID.
# @PARAM: logs (List[LogEntry]) - Log entries to insert.
# @SIDE_EFFECT: Writes to task_logs table.
def add_logs(self, task_id: str, logs: List[LogEntry]) -> None:
if not logs:
return
with belief_scope("TaskLogPersistenceService.add_logs", f"task_id={task_id}"):
session: Session = TasksSessionLocal()
try:
for log in logs:
record = TaskLogRecord(
task_id=task_id,
timestamp=log.timestamp,
level=log.level,
source=log.source or "system",
message=log.message,
metadata_json=json.dumps(log.metadata) if log.metadata else None
)
session.add(record)
session.commit()
except Exception as e:
session.rollback()
logger.error(f"Failed to add logs for task {task_id}: {e}")
finally:
session.close()
# [/DEF:add_logs:Function]
# [DEF:get_logs:Function]
# @PURPOSE: Query logs for a task with filtering and pagination.
# @PRE: task_id is a valid task ID.
# @POST: Returns list of TaskLog objects matching filters.
# @PARAM: task_id (str) - The task ID.
# @PARAM: log_filter (LogFilter) - Filter parameters.
# @RETURN: List[TaskLog] - Filtered log entries.
def get_logs(self, task_id: str, log_filter: LogFilter) -> List[TaskLog]:
with belief_scope("TaskLogPersistenceService.get_logs", f"task_id={task_id}"):
session: Session = TasksSessionLocal()
try:
query = session.query(TaskLogRecord).filter(TaskLogRecord.task_id == task_id)
# Apply filters
if log_filter.level:
query = query.filter(TaskLogRecord.level == log_filter.level.upper())
if log_filter.source:
query = query.filter(TaskLogRecord.source == log_filter.source)
if log_filter.search:
search_pattern = f"%{log_filter.search}%"
query = query.filter(TaskLogRecord.message.ilike(search_pattern))
# Order by timestamp ascending (oldest first)
query = query.order_by(TaskLogRecord.timestamp.asc())
# Apply pagination
records = query.offset(log_filter.offset).limit(log_filter.limit).all()
logs = []
for record in records:
metadata = None
if record.metadata_json:
try:
metadata = json.loads(record.metadata_json)
except json.JSONDecodeError:
metadata = None
logs.append(TaskLog(
id=record.id,
task_id=record.task_id,
timestamp=record.timestamp,
level=record.level,
source=record.source,
message=record.message,
metadata=metadata
))
return logs
finally:
session.close()
# [/DEF:get_logs:Function]
# [DEF:get_log_stats:Function]
# @PURPOSE: Get statistics about logs for a task.
# @PRE: task_id is a valid task ID.
# @POST: Returns LogStats with counts by level and source.
# @PARAM: task_id (str) - The task ID.
# @RETURN: LogStats - Statistics about task logs.
def get_log_stats(self, task_id: str) -> LogStats:
with belief_scope("TaskLogPersistenceService.get_log_stats", f"task_id={task_id}"):
session: Session = TasksSessionLocal()
try:
# Get total count
total_count = session.query(TaskLogRecord).filter(
TaskLogRecord.task_id == task_id
).count()
# Get counts by level
from sqlalchemy import func
level_counts = session.query(
TaskLogRecord.level,
func.count(TaskLogRecord.id)
).filter(
TaskLogRecord.task_id == task_id
).group_by(TaskLogRecord.level).all()
by_level = {level: count for level, count in level_counts}
# Get counts by source
source_counts = session.query(
TaskLogRecord.source,
func.count(TaskLogRecord.id)
).filter(
TaskLogRecord.task_id == task_id
).group_by(TaskLogRecord.source).all()
by_source = {source: count for source, count in source_counts}
return LogStats(
total_count=total_count,
by_level=by_level,
by_source=by_source
)
finally:
session.close()
# [/DEF:get_log_stats:Function]
# [DEF:get_sources:Function]
# @PURPOSE: Get unique sources for a task's logs.
# @PRE: task_id is a valid task ID.
# @POST: Returns list of unique source strings.
# @PARAM: task_id (str) - The task ID.
# @RETURN: List[str] - Unique source names.
def get_sources(self, task_id: str) -> List[str]:
with belief_scope("TaskLogPersistenceService.get_sources", f"task_id={task_id}"):
session: Session = TasksSessionLocal()
try:
from sqlalchemy import distinct
sources = session.query(distinct(TaskLogRecord.source)).filter(
TaskLogRecord.task_id == task_id
).all()
return [s[0] for s in sources]
finally:
session.close()
# [/DEF:get_sources:Function]
# [DEF:delete_logs_for_task:Function]
# @PURPOSE: Delete all logs for a specific task.
# @PRE: task_id is a valid task ID.
# @POST: All logs for the task are deleted.
# @PARAM: task_id (str) - The task ID.
# @SIDE_EFFECT: Deletes from task_logs table.
def delete_logs_for_task(self, task_id: str) -> None:
with belief_scope("TaskLogPersistenceService.delete_logs_for_task", f"task_id={task_id}"):
session: Session = TasksSessionLocal()
try:
session.query(TaskLogRecord).filter(
TaskLogRecord.task_id == task_id
).delete(synchronize_session=False)
session.commit()
except Exception as e:
session.rollback()
logger.error(f"Failed to delete logs for task {task_id}: {e}")
finally:
session.close()
# [/DEF:delete_logs_for_task:Function]
# [DEF:delete_logs_for_tasks:Function]
# @PURPOSE: Delete all logs for multiple tasks.
# @PRE: task_ids is a list of task IDs.
# @POST: All logs for the tasks are deleted.
# @PARAM: task_ids (List[str]) - List of task IDs.
def delete_logs_for_tasks(self, task_ids: List[str]) -> None:
if not task_ids:
return
with belief_scope("TaskLogPersistenceService.delete_logs_for_tasks"):
session: Session = TasksSessionLocal()
try:
session.query(TaskLogRecord).filter(
TaskLogRecord.task_id.in_(task_ids)
).delete(synchronize_session=False)
session.commit()
except Exception as e:
session.rollback()
logger.error(f"Failed to delete logs for tasks: {e}")
finally:
session.close()
# [/DEF:delete_logs_for_tasks:Function]
# [/DEF:TaskLogPersistenceService:Class]
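`TaskLogPersistenceService` batch-inserts in one transaction and filters/paginates in SQL. The same access pattern can be sketched against stdlib `sqlite3` (the `task_logs` table and column names mirror `TaskLogRecord` but are assumptions here, since the real schema lives in SQLAlchemy models):

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE task_logs (
    id INTEGER PRIMARY KEY, task_id TEXT, level TEXT,
    source TEXT, message TEXT, metadata_json TEXT)""")

# Batch insert in a single transaction, like add_logs()
rows = [
    ("t1", "INFO", "plugin", "started", None),
    ("t1", "ERROR", "superset_api", "boom", json.dumps({"code": 500})),
]
conn.executemany(
    "INSERT INTO task_logs (task_id, level, source, message, metadata_json) "
    "VALUES (?, ?, ?, ?, ?)",
    rows,
)
conn.commit()

# Filter + paginate, like get_logs() with a LogFilter
cur = conn.execute(
    "SELECT message FROM task_logs WHERE task_id=? AND level=? "
    "ORDER BY id LIMIT 100 OFFSET 0",
    ("t1", "ERROR"),
)
print([r[0] for r in cur])  # ['boom']
```

One `executemany` + one `commit` is what keeps per-entry logging cheap; committing per log line would dominate task runtime on SQLite.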
 # [/DEF:TaskPersistenceModule:Module]

View File

@@ -0,0 +1,167 @@
# [DEF:TaskLoggerModule:Module]
# @SEMANTICS: task, logger, context, plugin, attribution
# @PURPOSE: Provides a dedicated logger for tasks with automatic source attribution.
# @LAYER: Core
# @RELATION: DEPENDS_ON -> TaskManager, CALLS -> TaskManager._add_log
# @TIER: CRITICAL
# @INVARIANT: Each TaskLogger instance is bound to a specific task_id and default source.
# [SECTION: IMPORTS]
from typing import Dict, Any, Optional, Callable
# [/SECTION]
# [DEF:TaskLogger:Class]
# @SEMANTICS: logger, task, source, attribution
# @PURPOSE: A wrapper around TaskManager._add_log that carries task_id and source context.
# @TIER: CRITICAL
# @INVARIANT: All log calls include the task_id and source.
# @UX_STATE: Idle -> Logging -> (system records log)
class TaskLogger:
"""
A dedicated logger for tasks that automatically tags logs with source attribution.
Usage:
logger = TaskLogger(task_id="abc123", add_log_fn=task_manager._add_log, source="plugin")
logger.info("Starting backup process")
logger.error("Failed to connect", metadata={"error_code": 500})
# Create sub-logger with different source
api_logger = logger.with_source("superset_api")
api_logger.info("Fetching dashboards")
"""
# [DEF:__init__:Function]
# @PURPOSE: Initialize the TaskLogger with task context.
# @PRE: add_log_fn is a callable that accepts (task_id, level, message, context, source, metadata).
# @POST: TaskLogger is ready to log messages.
# @PARAM: task_id (str) - The ID of the task.
# @PARAM: add_log_fn (Callable) - Function to add log to TaskManager.
# @PARAM: source (str) - Default source for logs (default: "plugin").
def __init__(
self,
task_id: str,
add_log_fn: Callable,
source: str = "plugin"
):
self._task_id = task_id
self._add_log = add_log_fn
self._default_source = source
# [/DEF:__init__:Function]
# [DEF:with_source:Function]
# @PURPOSE: Create a sub-logger with a different default source.
# @PRE: source is a non-empty string.
# @POST: Returns new TaskLogger with the same task_id but different source.
# @PARAM: source (str) - New default source.
# @RETURN: TaskLogger - New logger instance.
def with_source(self, source: str) -> "TaskLogger":
"""Create a sub-logger with a different source context."""
return TaskLogger(
task_id=self._task_id,
add_log_fn=self._add_log,
source=source
)
# [/DEF:with_source:Function]
# [DEF:_log:Function]
# @PURPOSE: Internal method to log a message at a given level.
# @PRE: level is a valid log level string.
# @POST: Log entry added via add_log_fn.
# @PARAM: level (str) - Log level (DEBUG, INFO, WARNING, ERROR).
# @PARAM: message (str) - Log message.
# @PARAM: source (Optional[str]) - Override source for this log entry.
# @PARAM: metadata (Optional[Dict]) - Additional structured data.
def _log(
self,
level: str,
message: str,
source: Optional[str] = None,
metadata: Optional[Dict[str, Any]] = None
) -> None:
"""Internal logging method."""
self._add_log(
task_id=self._task_id,
level=level,
message=message,
source=source or self._default_source,
metadata=metadata
)
# [/DEF:_log:Function]
# [DEF:debug:Function]
# @PURPOSE: Log a DEBUG level message.
# @PARAM: message (str) - Log message.
# @PARAM: source (Optional[str]) - Override source.
# @PARAM: metadata (Optional[Dict]) - Additional data.
def debug(
self,
message: str,
source: Optional[str] = None,
metadata: Optional[Dict[str, Any]] = None
) -> None:
self._log("DEBUG", message, source, metadata)
# [/DEF:debug:Function]
# [DEF:info:Function]
# @PURPOSE: Log an INFO level message.
# @PARAM: message (str) - Log message.
# @PARAM: source (Optional[str]) - Override source.
# @PARAM: metadata (Optional[Dict]) - Additional data.
def info(
self,
message: str,
source: Optional[str] = None,
metadata: Optional[Dict[str, Any]] = None
) -> None:
self._log("INFO", message, source, metadata)
# [/DEF:info:Function]
# [DEF:warning:Function]
# @PURPOSE: Log a WARNING level message.
# @PARAM: message (str) - Log message.
# @PARAM: source (Optional[str]) - Override source.
# @PARAM: metadata (Optional[Dict]) - Additional data.
def warning(
self,
message: str,
source: Optional[str] = None,
metadata: Optional[Dict[str, Any]] = None
) -> None:
self._log("WARNING", message, source, metadata)
# [/DEF:warning:Function]
# [DEF:error:Function]
# @PURPOSE: Log an ERROR level message.
# @PARAM: message (str) - Log message.
# @PARAM: source (Optional[str]) - Override source.
# @PARAM: metadata (Optional[Dict]) - Additional data.
def error(
self,
message: str,
source: Optional[str] = None,
metadata: Optional[Dict[str, Any]] = None
) -> None:
self._log("ERROR", message, source, metadata)
# [/DEF:error:Function]
# [DEF:progress:Function]
# @PURPOSE: Log a progress update with percentage.
# @PRE: percent is between 0 and 100.
# @POST: Log entry with progress metadata added.
# @PARAM: message (str) - Progress message.
# @PARAM: percent (float) - Progress percentage (0-100).
# @PARAM: source (Optional[str]) - Override source.
def progress(
self,
message: str,
percent: float,
source: Optional[str] = None
) -> None:
"""Log a progress update with percentage."""
metadata = {"progress": min(100, max(0, percent))}
self._log("INFO", message, source, metadata)
# [/DEF:progress:Function]
# [/DEF:TaskLogger:Class]
# [/DEF:TaskLoggerModule:Module]
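`TaskLogger` is essentially a thin closure over `_add_log` that pins `task_id` and a default `source`. Its behavior can be demonstrated with a stub sink; `add_log_stub` and `MiniTaskLogger` below are test doubles restating the pattern, not the real TaskManager wiring:

```python
captured = []

def add_log_stub(task_id, level, message, source="system", metadata=None, context=None):
    # Stand-in for TaskManager._add_log: just record what was logged.
    captured.append((task_id, level, source, message, metadata))

class MiniTaskLogger:
    """Minimal restatement of the TaskLogger pattern above."""
    def __init__(self, task_id, add_log_fn, source="plugin"):
        self._task_id, self._add_log, self._source = task_id, add_log_fn, source

    def with_source(self, source):
        # Sub-logger: same task, different attribution.
        return MiniTaskLogger(self._task_id, self._add_log, source)

    def info(self, message, metadata=None):
        self._add_log(self._task_id, "INFO", message,
                      source=self._source, metadata=metadata)

log = MiniTaskLogger("abc123", add_log_stub)
log.info("Starting backup")
log.with_source("superset_api").info("Fetching dashboards")
print(captured[0][2], captured[1][2])  # plugin superset_api
```

Because `with_source` returns a new instance rather than mutating state, a plugin can hand sub-loggers to helper components without the attributions bleeding into each other.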

View File

@@ -11,7 +11,7 @@
 # [SECTION: IMPORTS]
 import pandas as pd  # type: ignore
 import psycopg2  # type: ignore
-from typing import Dict, List, Optional, Any
+from typing import Dict, Optional, Any
 from ..logger import logger as app_logger, belief_scope
 # [/SECTION]

View File

@@ -19,7 +19,6 @@ from datetime import date, datetime
 import shutil
 import zlib
 from dataclasses import dataclass
-import yaml
 from ..logger import logger as app_logger, belief_scope
 # [/SECTION]
# [/SECTION] # [/SECTION]

View File

@@ -140,7 +140,16 @@ class APIClient:
             app_logger.info("[authenticate][Enter] Authenticating to %s", self.base_url)
             try:
                 login_url = f"{self.base_url}/security/login"
+
+                # Log the payload keys and values (masking password)
+                masked_auth = {k: ("******" if k == "password" else v) for k, v in self.auth.items()}
+                app_logger.info(f"[authenticate][Debug] Login URL: {login_url}")
+                app_logger.info(f"[authenticate][Debug] Auth payload: {masked_auth}")
+
                 response = self.session.post(login_url, json=self.auth, timeout=self.request_settings["timeout"])
+                if response.status_code != 200:
+                    app_logger.error(f"[authenticate][Error] Status: {response.status_code}, Response: {response.text}")
                 response.raise_for_status()
                 access_token = response.json()["access_token"]
@@ -153,6 +162,9 @@ class APIClient:
                 app_logger.info("[authenticate][Exit] Authenticated successfully.")
                 return self._tokens
             except requests.exceptions.HTTPError as e:
+                status_code = e.response.status_code if e.response is not None else None
+                if status_code in [502, 503, 504]:
+                    raise NetworkError(f"Environment unavailable during authentication (Status {status_code})", status_code=status_code) from e
                 raise AuthenticationError(f"Authentication failed: {e}") from e
             except (requests.exceptions.RequestException, KeyError) as e:
                 raise NetworkError(f"Network or parsing error during authentication: {e}") from e
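The masking comprehension added in this hunk generalizes to a small helper; `mask_secrets` below is an illustrative sketch of the same idea, not a function from the codebase:

```python
def mask_secrets(payload, secret_keys=("password",)):
    # Replace secret values before logging; leave everything else intact.
    return {k: ("******" if k in secret_keys else v) for k, v in payload.items()}

auth = {"username": "admin", "password": "s3cret", "provider": "db"}
print(mask_secrets(auth))
# {'username': 'admin', 'password': '******', 'provider': 'db'}
```

Masking at the call site (rather than after the fact in log storage) guarantees the secret never reaches the log pipeline in the first place.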
@@ -165,7 +177,8 @@ class APIClient:
 # @POST: Returns headers including auth tokens.
     def headers(self) -> Dict[str, str]:
         with belief_scope("headers"):
-            if not self._authenticated: self.authenticate()
+            if not self._authenticated:
+                self.authenticate()
             return {
                 "Authorization": f"Bearer {self._tokens['access_token']}",
                 "X-CSRFToken": self._tokens.get("csrf_token", ""),
@@ -188,7 +201,8 @@ class APIClient:
         with belief_scope("request"):
             full_url = f"{self.base_url}{endpoint}"
             _headers = self.headers.copy()
-            if headers: _headers.update(headers)
+            if headers:
+                _headers.update(headers)
             try:
                 response = self.session.request(method, full_url, headers=_headers, **kwargs)
@@ -209,9 +223,14 @@ class APIClient:
     def _handle_http_error(self, e: requests.exceptions.HTTPError, endpoint: str):
         with belief_scope("_handle_http_error"):
             status_code = e.response.status_code
-            if status_code == 404: raise DashboardNotFoundError(endpoint) from e
-            if status_code == 403: raise PermissionDeniedError() from e
-            if status_code == 401: raise AuthenticationError() from e
+            if status_code == 502 or status_code == 503 or status_code == 504:
+                raise NetworkError(f"Environment unavailable (Status {status_code})", status_code=status_code) from e
+            if status_code == 404:
+                raise DashboardNotFoundError(endpoint) from e
+            if status_code == 403:
+                raise PermissionDeniedError() from e
+            if status_code == 401:
+                raise AuthenticationError() from e
             raise SupersetAPIError(f"API Error {status_code}: {e.response.text}") from e
     # [/DEF:_handle_http_error:Function]
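The status-code-to-exception mapping introduced in this hunk can be sketched in isolation. This is a minimal illustration, not the project's actual code: the exception classes are stand-ins redefined here, and `map_status` is a hypothetical helper that returns rather than raises.

```python
# Stand-in exception hierarchy mirroring the names used in the diff.
class SupersetAPIError(Exception):
    pass

class NetworkError(SupersetAPIError):
    def __init__(self, message: str, status_code: int = None):
        super().__init__(message)
        self.status_code = status_code

class DashboardNotFoundError(SupersetAPIError):
    pass

class PermissionDeniedError(SupersetAPIError):
    pass

class AuthenticationError(SupersetAPIError):
    pass

def map_status(status_code: int, endpoint: str) -> SupersetAPIError:
    # Gateway errors (502/503/504) mean the whole environment is unreachable,
    # so they are checked before the resource-level codes.
    if status_code in (502, 503, 504):
        return NetworkError(f"Environment unavailable (Status {status_code})",
                            status_code=status_code)
    if status_code == 404:
        return DashboardNotFoundError(endpoint)
    if status_code == 403:
        return PermissionDeniedError()
    if status_code == 401:
        return AuthenticationError()
    return SupersetAPIError(f"API Error {status_code}")
```

The ordering matters: checking the gateway codes first ensures a 503 from a proxy is reported as an environment outage rather than falling through to the generic error.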
@@ -223,9 +242,12 @@ class APIClient:
     # @POST: Raises a NetworkError.
     def _handle_network_error(self, e: requests.exceptions.RequestException, url: str):
         with belief_scope("_handle_network_error"):
-            if isinstance(e, requests.exceptions.Timeout): msg = "Request timeout"
-            elif isinstance(e, requests.exceptions.ConnectionError): msg = "Connection error"
-            else: msg = f"Unknown network error: {e}"
+            if isinstance(e, requests.exceptions.Timeout):
+                msg = "Request timeout"
+            elif isinstance(e, requests.exceptions.ConnectionError):
+                msg = "Connection error"
+            else:
+                msg = f"Unknown network error: {e}"
             raise NetworkError(msg, url=url) from e
     # [/DEF:_handle_network_error:Function]
@@ -242,7 +264,9 @@ class APIClient:
     def upload_file(self, endpoint: str, file_info: Dict[str, Any], extra_data: Optional[Dict] = None, timeout: Optional[int] = None) -> Dict:
         with belief_scope("upload_file"):
             full_url = f"{self.base_url}{endpoint}"
-            _headers = self.headers.copy(); _headers.pop('Content-Type', None)
+            _headers = self.headers.copy()
+            _headers.pop('Content-Type', None)
             file_obj, file_name, form_field = file_info.get("file_obj"), file_info.get("file_name"), file_info.get("form_field", "file")

View File

@@ -5,7 +5,6 @@
 # @RELATION: Used by the main app and API routers to get access to shared instances.
 from pathlib import Path
-from typing import Optional
 from fastapi import Depends, HTTPException, status
 from fastapi.security import OAuth2PasswordBearer
 from jose import JWTError
@@ -13,8 +12,9 @@ from .core.plugin_loader import PluginLoader
 from .core.task_manager import TaskManager
 from .core.config_manager import ConfigManager
 from .core.scheduler import SchedulerService
+from .services.resource_service import ResourceService
 from .core.database import init_db, get_auth_db
-from .core.logger import logger, belief_scope
+from .core.logger import logger
 from .core.auth.jwt import decode_token
 from .core.auth.repository import AuthRepository
 from .models.auth import User
@@ -35,8 +35,7 @@ init_db()
 # @RETURN: ConfigManager - The shared config manager instance.
 def get_config_manager() -> ConfigManager:
     """Dependency injector for the ConfigManager."""
-    with belief_scope("get_config_manager"):
-        return config_manager
+    return config_manager
 # [/DEF:get_config_manager:Function]
 plugin_dir = Path(__file__).parent / "plugins"
@@ -51,6 +50,9 @@ logger.info("TaskManager initialized")
 scheduler_service = SchedulerService(task_manager, config_manager)
 logger.info("SchedulerService initialized")
+resource_service = ResourceService()
+logger.info("ResourceService initialized")
 # [DEF:get_plugin_loader:Function]
 # @PURPOSE: Dependency injector for the PluginLoader.
 # @PRE: Global plugin_loader must be initialized.
@@ -58,8 +60,7 @@ logger.info("SchedulerService initialized")
 # @RETURN: PluginLoader - The shared plugin loader instance.
 def get_plugin_loader() -> PluginLoader:
     """Dependency injector for the PluginLoader."""
-    with belief_scope("get_plugin_loader"):
-        return plugin_loader
+    return plugin_loader
 # [/DEF:get_plugin_loader:Function]
 # [DEF:get_task_manager:Function]
@@ -69,8 +70,7 @@ def get_plugin_loader() -> PluginLoader:
 # @RETURN: TaskManager - The shared task manager instance.
 def get_task_manager() -> TaskManager:
     """Dependency injector for the TaskManager."""
-    with belief_scope("get_task_manager"):
-        return task_manager
+    return task_manager
 # [/DEF:get_task_manager:Function]
 # [DEF:get_scheduler_service:Function]
@@ -80,10 +80,19 @@ def get_task_manager() -> TaskManager:
 # @RETURN: SchedulerService - The shared scheduler service instance.
 def get_scheduler_service() -> SchedulerService:
     """Dependency injector for the SchedulerService."""
-    with belief_scope("get_scheduler_service"):
-        return scheduler_service
+    return scheduler_service
 # [/DEF:get_scheduler_service:Function]
+# [DEF:get_resource_service:Function]
+# @PURPOSE: Dependency injector for the ResourceService.
+# @PRE: Global resource_service must be initialized.
+# @POST: Returns shared ResourceService instance.
+# @RETURN: ResourceService - The shared resource service instance.
+def get_resource_service() -> ResourceService:
+    """Dependency injector for the ResourceService."""
+    return resource_service
+# [/DEF:get_resource_service:Function]
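All of the injectors in this file follow the same module-level singleton pattern: the instance is created once at import time and the injector simply returns it. A minimal pure-Python sketch (this `ResourceService` is a stand-in, not the real class):

```python
# Stand-in service class for illustration only.
class ResourceService:
    def __init__(self):
        self.initialized = True

# Created once at module import time, like resource_service in dependencies.py.
resource_service = ResourceService()

def get_resource_service() -> ResourceService:
    """Dependency injector: FastAPI would call this per request via
    Depends(get_resource_service), but every caller receives the same
    shared instance."""
    return resource_service
```

Because the function returns the module-level object, all request handlers share one service instance and its state.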
 # [DEF:oauth2_scheme:Variable]
 # @PURPOSE: OAuth2 password bearer scheme for token extraction.
 oauth2_scheme = OAuth2PasswordBearer(tokenUrl="/api/auth/login")
@@ -98,25 +107,24 @@ oauth2_scheme = OAuth2PasswordBearer(tokenUrl="/api/auth/login")
 # @PARAM: db (Session) - Auth database session.
 # @RETURN: User - The authenticated user.
 def get_current_user(token: str = Depends(oauth2_scheme), db = Depends(get_auth_db)):
-    with belief_scope("get_current_user"):
-        credentials_exception = HTTPException(
-            status_code=status.HTTP_401_UNAUTHORIZED,
-            detail="Could not validate credentials",
-            headers={"WWW-Authenticate": "Bearer"},
-        )
-        try:
-            payload = decode_token(token)
-            username: str = payload.get("sub")
-            if username is None:
-                raise credentials_exception
-        except JWTError:
-            raise credentials_exception
-        repo = AuthRepository(db)
-        user = repo.get_user_by_username(username)
-        if user is None:
-            raise credentials_exception
-        return user
+    credentials_exception = HTTPException(
+        status_code=status.HTTP_401_UNAUTHORIZED,
+        detail="Could not validate credentials",
+        headers={"WWW-Authenticate": "Bearer"},
+    )
+    try:
+        payload = decode_token(token)
+        username: str = payload.get("sub")
+        if username is None:
+            raise credentials_exception
+    except JWTError:
+        raise credentials_exception
+    repo = AuthRepository(db)
+    user = repo.get_user_by_username(username)
+    if user is None:
+        raise credentials_exception
+    return user
 # [/DEF:get_current_user:Function]
 # [DEF:has_permission:Function]
@@ -129,24 +137,23 @@ def get_current_user(token: str = Depends(oauth2_scheme), db = Depends(get_auth_
 # @RETURN: User - The authenticated user if permission granted.
 def has_permission(resource: str, action: str):
     def permission_checker(current_user: User = Depends(get_current_user)):
-        with belief_scope("has_permission", f"{resource}:{action}"):
-            # Union of all permissions across all roles
-            for role in current_user.roles:
-                for perm in role.permissions:
-                    if perm.resource == resource and perm.action == action:
-                        return current_user
-            # Special case for Admin role (full access)
-            if any(role.name == "Admin" for role in current_user.roles):
-                return current_user
-            from .core.auth.logger import log_security_event
-            log_security_event("PERMISSION_DENIED", current_user.username, {"resource": resource, "action": action})
-            raise HTTPException(
-                status_code=status.HTTP_403_FORBIDDEN,
-                detail=f"Permission denied for {resource}:{action}"
-            )
+        # Union of all permissions across all roles
+        for role in current_user.roles:
+            for perm in role.permissions:
+                if perm.resource == resource and perm.action == action:
+                    return current_user
+        # Special case for Admin role (full access)
+        if any(role.name == "Admin" for role in current_user.roles):
+            return current_user
+        from .core.auth.logger import log_security_event
+        log_security_event("PERMISSION_DENIED", current_user.username, {"resource": resource, "action": action})
+        raise HTTPException(
+            status_code=status.HTTP_403_FORBIDDEN,
+            detail=f"Permission denied for {resource}:{action}"
+        )
     return permission_checker
 # [/DEF:has_permission:Function]
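The core of `permission_checker` is the union-of-roles check plus the Admin override. A pure-Python sketch of that decision, with `Role` and `Permission` as illustrative stand-ins for the ORM models:

```python
from dataclasses import dataclass, field

@dataclass
class Permission:
    resource: str
    action: str

@dataclass
class Role:
    name: str
    permissions: list = field(default_factory=list)

def is_allowed(roles, resource: str, action: str) -> bool:
    # Union of all permissions across all roles: any role granting the
    # (resource, action) pair is sufficient.
    for role in roles:
        for perm in role.permissions:
            if perm.resource == resource and perm.action == action:
                return True
    # Special case for Admin role (full access)
    return any(role.name == "Admin" for role in roles)

viewer = Role("Viewer", [Permission("dashboards", "read")])
admin = Role("Admin")
```

Note that the Admin override needs no permission rows at all, which is why it is checked after the explicit grants fail.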

View File

@@ -11,7 +11,7 @@
 # [SECTION: IMPORTS]
 import uuid
 from datetime import datetime
-from sqlalchemy import Column, String, Boolean, DateTime, ForeignKey, Table, Enum
+from sqlalchemy import Column, String, Boolean, DateTime, ForeignKey, Table
 from sqlalchemy.orm import relationship
 from .mapping import Base
 # [/SECTION]

View File

@@ -1,5 +1,6 @@
 # [DEF:backend.src.models.connection:Module]
 #
+# @TIER: TRIVIAL
 # @SEMANTICS: database, connection, configuration, sqlalchemy, sqlite
 # @PURPOSE: Defines the database schema for external database connection configurations.
 # @LAYER: Domain
@@ -15,6 +16,7 @@ import uuid
 # [/SECTION]
 # [DEF:ConnectionConfig:Class]
+# @TIER: TRIVIAL
 # @PURPOSE: Stores credentials for external databases used for column mapping.
 class ConnectionConfig(Base):
     __tablename__ = "connection_configs"

View File

@@ -1,4 +1,5 @@
 # [DEF:GitModels:Module]
+# @TIER: TRIVIAL
 # @SEMANTICS: git, models, sqlalchemy, database, schema
 # @PURPOSE: Git-specific SQLAlchemy models for configuration and repository tracking.
 # @LAYER: Model
@@ -7,7 +8,6 @@
 import enum
 from datetime import datetime
 from sqlalchemy import Column, String, Integer, DateTime, Enum, ForeignKey, Boolean
-from sqlalchemy.dialects.postgresql import UUID
 import uuid
 from src.core.database import Base
@@ -26,11 +26,10 @@ class SyncStatus(str, enum.Enum):
     DIRTY = "DIRTY"
     CONFLICT = "CONFLICT"
+# [DEF:GitServerConfig:Class]
+# @TIER: TRIVIAL
+# @PURPOSE: Configuration for a Git server connection.
 class GitServerConfig(Base):
-    """
-    [DEF:GitServerConfig:Class]
-    Configuration for a Git server connection.
-    """
     __tablename__ = "git_server_configs"
     id = Column(String(36), primary_key=True, default=lambda: str(uuid.uuid4()))
@@ -41,12 +40,12 @@ class GitServerConfig(Base):
     default_repository = Column(String(255), nullable=True)
     status = Column(Enum(GitStatus), default=GitStatus.UNKNOWN)
     last_validated = Column(DateTime, default=datetime.utcnow)
+# [/DEF:GitServerConfig:Class]
+# [DEF:GitRepository:Class]
+# @TIER: TRIVIAL
+# @PURPOSE: Tracking for a local Git repository linked to a dashboard.
 class GitRepository(Base):
-    """
-    [DEF:GitRepository:Class]
-    Tracking for a local Git repository linked to a dashboard.
-    """
     __tablename__ = "git_repositories"
     id = Column(String(36), primary_key=True, default=lambda: str(uuid.uuid4()))
@@ -56,12 +55,12 @@ class GitRepository(Base):
     local_path = Column(String(255), nullable=False)
     current_branch = Column(String(255), default="main")
     sync_status = Column(Enum(SyncStatus), default=SyncStatus.CLEAN)
+# [/DEF:GitRepository:Class]
+# [DEF:DeploymentEnvironment:Class]
+# @TIER: TRIVIAL
+# @PURPOSE: Target Superset environments for dashboard deployment.
 class DeploymentEnvironment(Base):
-    """
-    [DEF:DeploymentEnvironment:Class]
-    Target Superset environments for dashboard deployment.
-    """
     __tablename__ = "deployment_environments"
     id = Column(String(36), primary_key=True, default=lambda: str(uuid.uuid4()))
@@ -69,5 +68,6 @@ class DeploymentEnvironment(Base):
     superset_url = Column(String(255), nullable=False)
     superset_token = Column(String(255), nullable=False)
     is_active = Column(Boolean, default=True)
+# [/DEF:DeploymentEnvironment:Class]
 # [/DEF:GitModels:Module]

backend/src/models/llm.py Normal file
View File

@@ -0,0 +1,46 @@
+# [DEF:backend.src.models.llm:Module]
+# @TIER: STANDARD
+# @SEMANTICS: llm, models, sqlalchemy, persistence
+# @PURPOSE: SQLAlchemy models for LLM provider configuration and validation results.
+# @LAYER: Domain
+# @RELATION: INHERITS_FROM -> backend.src.models.mapping.Base
+from sqlalchemy import Column, String, Boolean, DateTime, JSON, Text
+from datetime import datetime
+import uuid
+from .mapping import Base
+def generate_uuid():
+    return str(uuid.uuid4())
+# [DEF:LLMProvider:Class]
+# @PURPOSE: SQLAlchemy model for LLM provider configuration.
+class LLMProvider(Base):
+    __tablename__ = "llm_providers"
+    id = Column(String, primary_key=True, default=generate_uuid)
+    provider_type = Column(String, nullable=False)  # openai, openrouter, kilo
+    name = Column(String, nullable=False)
+    base_url = Column(String, nullable=False)
+    api_key = Column(String, nullable=False)  # Should be encrypted
+    default_model = Column(String, nullable=False)
+    is_active = Column(Boolean, default=True)
+    created_at = Column(DateTime, default=datetime.utcnow)
+# [/DEF:LLMProvider:Class]
+# [DEF:ValidationRecord:Class]
+# @PURPOSE: SQLAlchemy model for dashboard validation history.
+class ValidationRecord(Base):
+    __tablename__ = "llm_validation_results"
+    id = Column(String, primary_key=True, default=generate_uuid)
+    dashboard_id = Column(String, nullable=False, index=True)
+    timestamp = Column(DateTime, default=datetime.utcnow)
+    status = Column(String, nullable=False)  # PASS, WARN, FAIL
+    screenshot_path = Column(String, nullable=True)
+    issues = Column(JSON, nullable=False)
+    summary = Column(Text, nullable=False)
+    raw_response = Column(Text, nullable=True)
+# [/DEF:ValidationRecord:Class]
+# [/DEF:backend.src.models.llm:Module]
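To illustrate how a `ValidationRecord` row might be assembled before persisting: the field names below follow the model above, but `build_validation_row` is a hypothetical helper written for this sketch, not part of the diff.

```python
import uuid
from datetime import datetime, timezone

def generate_uuid() -> str:
    # Same pattern as the model's default: a random UUID4 as a string.
    return str(uuid.uuid4())

def build_validation_row(dashboard_id: str, status: str,
                         issues: list, summary: str) -> dict:
    # Enforce the three statuses noted in the model comment.
    if status not in ("PASS", "WARN", "FAIL"):
        raise ValueError(f"Unknown status: {status}")
    return {
        "id": generate_uuid(),
        "dashboard_id": dashboard_id,
        "timestamp": datetime.now(timezone.utc),
        "status": status,
        "issues": issues,       # stored as JSON by the model's JSON column
        "summary": summary,
    }
```

Validating the status at construction time mirrors what a CHECK constraint or an Enum column could enforce at the database level.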

View File

@@ -59,6 +59,7 @@ class DatabaseMapping(Base):
 # [/DEF:DatabaseMapping:Class]
 # [DEF:MigrationJob:Class]
+# @TIER: TRIVIAL
 # @PURPOSE: Represents a single migration execution job.
 class MigrationJob(Base):
     __tablename__ = "migration_jobs"

View File

@@ -1,9 +1,16 @@
+# [DEF:backend.src.models.storage:Module]
+# @TIER: TRIVIAL
+# @SEMANTICS: storage, file, model, pydantic
+# @PURPOSE: Data models for the storage system.
+# @LAYER: Domain
 from datetime import datetime
 from enum import Enum
 from typing import Optional
 from pydantic import BaseModel, Field
 # [DEF:FileCategory:Class]
+# @TIER: TRIVIAL
 # @PURPOSE: Enumeration of supported file categories in the storage system.
 class FileCategory(str, Enum):
     BACKUP = "backups"
@@ -11,6 +18,7 @@ class FileCategory(str, Enum):
 # [/DEF:FileCategory:Class]
 # [DEF:StorageConfig:Class]
+# @TIER: TRIVIAL
 # @PURPOSE: Configuration model for the storage system, defining paths and naming patterns.
 class StorageConfig(BaseModel):
     root_path: str = Field(default="backups", description="Absolute path to the storage root directory.")
@@ -20,6 +28,7 @@ class StorageConfig(BaseModel):
 # [/DEF:StorageConfig:Class]
 # [DEF:StoredFile:Class]
+# @TIER: TRIVIAL
 # @PURPOSE: Data model representing metadata for a file stored in the system.
 class StoredFile(BaseModel):
     name: str = Field(..., description="Name of the file (including extension).")
@@ -28,4 +37,6 @@ class StoredFile(BaseModel):
     created_at: datetime = Field(..., description="Creation timestamp.")
     category: FileCategory = Field(..., description="Category of the file.")
     mime_type: Optional[str] = Field(None, description="MIME type of the file.")
 # [/DEF:StoredFile:Class]
+# [/DEF:backend.src.models.storage:Module]

View File

@@ -1,5 +1,6 @@
 # [DEF:backend.src.models.task:Module]
 #
+# @TIER: TRIVIAL
 # @SEMANTICS: database, task, record, sqlalchemy, sqlite
 # @PURPOSE: Defines the database schema for task execution records.
 # @LAYER: Domain
@@ -8,13 +9,14 @@
 # @INVARIANT: All primary keys are UUID strings.
 # [SECTION: IMPORTS]
-from sqlalchemy import Column, String, DateTime, JSON, ForeignKey
+from sqlalchemy import Column, String, DateTime, JSON, ForeignKey, Text, Integer, Index
 from sqlalchemy.sql import func
 from .mapping import Base
 import uuid
 # [/SECTION]
 # [DEF:TaskRecord:Class]
+# @TIER: TRIVIAL
 # @PURPOSE: Represents a persistent record of a task execution.
 class TaskRecord(Base):
     __tablename__ = "task_records"
@@ -25,11 +27,35 @@ class TaskRecord(Base):
     environment_id = Column(String, ForeignKey("environments.id"), nullable=True)
     started_at = Column(DateTime(timezone=True), nullable=True)
     finished_at = Column(DateTime(timezone=True), nullable=True)
-    logs = Column(JSON, nullable=True)  # Store structured logs as JSON
+    logs = Column(JSON, nullable=True)  # Store structured logs as JSON (legacy, kept for backward compatibility)
     error = Column(String, nullable=True)
     result = Column(JSON, nullable=True)
     created_at = Column(DateTime(timezone=True), server_default=func.now())
     params = Column(JSON, nullable=True)
 # [/DEF:TaskRecord:Class]
+# [DEF:TaskLogRecord:Class]
+# @PURPOSE: Represents a single persistent log entry for a task.
+# @TIER: CRITICAL
+# @RELATION: DEPENDS_ON -> TaskRecord
+# @INVARIANT: Each log entry belongs to exactly one task.
+class TaskLogRecord(Base):
+    __tablename__ = "task_logs"
+    id = Column(Integer, primary_key=True, autoincrement=True)
+    task_id = Column(String, ForeignKey("task_records.id", ondelete="CASCADE"), nullable=False, index=True)
+    timestamp = Column(DateTime(timezone=True), nullable=False, index=True)
+    level = Column(String(16), nullable=False)  # INFO, WARNING, ERROR, DEBUG
+    source = Column(String(64), nullable=False, default="system")  # plugin, superset_api, git, etc.
+    message = Column(Text, nullable=False)
+    metadata_json = Column(Text, nullable=True)  # JSON string for additional metadata
+    # Composite indexes for efficient filtering
+    __table_args__ = (
+        Index('ix_task_logs_task_timestamp', 'task_id', 'timestamp'),
+        Index('ix_task_logs_task_level', 'task_id', 'level'),
+        Index('ix_task_logs_task_source', 'task_id', 'source'),
+    )
+# [/DEF:TaskLogRecord:Class]
 # [/DEF:backend.src.models.task:Module]
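The three composite indexes on `task_logs` exist to serve "logs for task X, filtered by level/source, ordered by time" queries. A small in-memory stand-in for those queries (real code would express them as SQLAlchemy filters on `TaskLogRecord`; the sample rows here are invented for illustration):

```python
from datetime import datetime

# Invented sample rows shaped like task_logs entries.
logs = [
    {"task_id": "t1", "timestamp": datetime(2026, 2, 10, 12, 0),
     "level": "INFO", "source": "plugin", "message": "started"},
    {"task_id": "t1", "timestamp": datetime(2026, 2, 10, 12, 5),
     "level": "ERROR", "source": "superset_api", "message": "export failed"},
    {"task_id": "t2", "timestamp": datetime(2026, 2, 10, 12, 1),
     "level": "INFO", "source": "git", "message": "cloned"},
]

def logs_for_task(task_id, level=None, source=None):
    # Mirrors ix_task_logs_task_timestamp: restrict by task, order by time;
    # the optional filters map to ix_task_logs_task_level / _task_source.
    rows = [r for r in logs
            if r["task_id"] == task_id
            and (level is None or r["level"] == level)
            and (source is None or r["source"] == source)]
    return sorted(rows, key=lambda r: r["timestamp"])
```

Putting `task_id` first in every index matters: all three query shapes start by narrowing to a single task, so the leading column is shared.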

View File

@@ -5,13 +5,14 @@
 # @RELATION: IMPLEMENTS -> PluginBase
 # @RELATION: DEPENDS_ON -> superset_tool.client
 # @RELATION: DEPENDS_ON -> superset_tool.utils
+# @RELATION: USES -> TaskContext
-from typing import Dict, Any
+from typing import Dict, Any, Optional
 from pathlib import Path
 from requests.exceptions import RequestException
 from ..core.plugin_base import PluginBase
-from ..core.logger import belief_scope
+from ..core.logger import belief_scope, logger as app_logger
 from ..core.superset_client import SupersetClient
 from ..core.utils.network import SupersetAPIError
 from ..core.utils.fileio import (
@@ -23,6 +24,7 @@ from ..core.utils.fileio import (
     RetentionPolicy
 )
 from ..dependencies import get_config_manager
+from ..core.task_manager.context import TaskContext
 # [DEF:BackupPlugin:Class]
 # @PURPOSE: Implementation of the backup plugin logic.
@@ -93,7 +95,7 @@ class BackupPlugin(PluginBase):
         with belief_scope("get_schema"):
             config_manager = get_config_manager()
             envs = [e.name for e in config_manager.get_environments()]
-            default_path = config_manager.get_config().settings.storage.root_path
+            config_manager.get_config().settings.storage.root_path
             return {
                 "type": "object",
@@ -110,11 +112,12 @@ class BackupPlugin(PluginBase):
     # [/DEF:get_schema:Function]
     # [DEF:execute:Function]
-    # @PURPOSE: Executes the dashboard backup logic.
+    # @PURPOSE: Executes the dashboard backup logic with TaskContext support.
     # @PARAM: params (Dict[str, Any]) - Backup parameters (env, backup_path).
+    # @PARAM: context (Optional[TaskContext]) - Task context for logging with source attribution.
     # @PRE: Target environment must be configured. params must be a dictionary.
     # @POST: All dashboards are exported and archived.
-    async def execute(self, params: Dict[str, Any]):
+    async def execute(self, params: Dict[str, Any], context: Optional[TaskContext] = None):
         with belief_scope("execute"):
             config_manager = get_config_manager()
             env_id = params.get("environment_id")
@@ -133,8 +136,14 @@ class BackupPlugin(PluginBase):
             # Use 'backups' subfolder within the storage root
             backup_path = Path(storage_settings.root_path) / "backups"
-            from ..core.logger import logger as app_logger
-            app_logger.info(f"[BackupPlugin][Entry] Starting backup for {env}.")
+            # Use TaskContext logger if available, otherwise fall back to app_logger
+            log = context.logger if context else app_logger
+            # Create sub-loggers for different components
+            superset_log = log.with_source("superset_api") if context else log
+            storage_log = log.with_source("storage") if context else log
+            log.info(f"Starting backup for environment: {env}")
             try:
                 config_manager = get_config_manager()
@@ -148,24 +157,30 @@ class BackupPlugin(PluginBase):
                 client = SupersetClient(env_config)
                 dashboard_count, dashboard_meta = client.get_dashboards()
-                app_logger.info(f"[BackupPlugin][Progress] Found {dashboard_count} dashboards to export in {env}.")
+                superset_log.info(f"Found {dashboard_count} dashboards to export")
                 if dashboard_count == 0:
-                    app_logger.info("[BackupPlugin][Exit] No dashboards to back up.")
+                    log.info("No dashboards to back up")
                     return
-                for db in dashboard_meta:
+                total = len(dashboard_meta)
+                for idx, db in enumerate(dashboard_meta, 1):
                     dashboard_id = db.get('id')
                     dashboard_title = db.get('dashboard_title', 'Unknown Dashboard')
                     if not dashboard_id:
                         continue
+                    # Report progress
+                    progress_pct = (idx / total) * 100
+                    log.progress(f"Backing up dashboard: {dashboard_title}", percent=progress_pct)
                     try:
                         dashboard_base_dir_name = sanitize_filename(f"{dashboard_title}")
                         dashboard_dir = backup_path / env.upper() / dashboard_base_dir_name
                         dashboard_dir.mkdir(parents=True, exist_ok=True)
                         zip_content, filename = client.export_dashboard(dashboard_id)
+                        superset_log.debug(f"Exported dashboard: {dashboard_title}")
                         save_and_unpack_dashboard(
                             zip_content=zip_content,
@@ -175,18 +190,19 @@ class BackupPlugin(PluginBase):
                         )
                         archive_exports(str(dashboard_dir), policy=RetentionPolicy())
+                        storage_log.debug(f"Archived dashboard: {dashboard_title}")
                     except (SupersetAPIError, RequestException, IOError, OSError) as db_error:
-                        app_logger.error(f"[BackupPlugin][Failure] Failed to export dashboard {dashboard_title} (ID: {dashboard_id}): {db_error}", exc_info=True)
+                        log.error(f"Failed to export dashboard {dashboard_title} (ID: {dashboard_id}): {db_error}")
                         continue
                 consolidate_archive_folders(backup_path / env.upper())
                 remove_empty_directories(str(backup_path / env.upper()))
-                app_logger.info(f"[BackupPlugin][CoherenceCheck:Passed] Backup logic completed for {env}.")
+                log.info(f"Backup completed successfully for {env}")
             except (RequestException, IOError, KeyError) as e:
-                app_logger.critical(f"[BackupPlugin][Failure] Fatal error during backup for {env}: {e}", exc_info=True)
+                log.error(f"Fatal error during backup for {env}: {e}")
                 raise e
     # [/DEF:execute:Function]
 # [/DEF:BackupPlugin:Class]
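The context-or-fallback logging and percent-based progress reporting introduced in `execute` can be sketched on their own. `RecordingLogger` and `report_backup_progress` are simplified stand-ins invented for this illustration; the real `TaskContext` logger API may differ.

```python
class RecordingLogger:
    """Stand-in logger that records calls instead of writing anywhere."""
    def __init__(self):
        self.lines = []
    def info(self, msg):
        self.lines.append(("INFO", msg))
    def progress(self, msg, percent=None):
        self.lines.append(("PROGRESS", f"{msg} ({percent:.0f}%)"))

def report_backup_progress(dashboard_titles, context=None, app_logger=None):
    # Prefer the task-scoped logger; fall back to the application logger,
    # mirroring `log = context.logger if context else app_logger`.
    log = context.logger if context else app_logger
    total = len(dashboard_titles)
    for idx, title in enumerate(dashboard_titles, 1):
        # 1-based enumerate so the last item reports exactly 100%.
        log.progress(f"Backing up dashboard: {title}", percent=(idx / total) * 100)
    log.info("Backup completed successfully")

fallback = RecordingLogger()
report_backup_progress(["Sales", "Ops"], context=None, app_logger=fallback)
```

The fallback keeps plugins runnable outside the task manager (tests, ad-hoc scripts) while task runs still get per-task, source-attributed logs.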

View File

@@ -3,6 +3,7 @@
 # @PURPOSE: Implements a plugin for system diagnostics and debugging Superset API responses.
 # @LAYER: Plugins
 # @RELATION: Inherits from PluginBase. Uses SupersetClient from core.
+# @RELATION: USES -> TaskContext
 # @CONSTRAINT: Must use belief_scope for logging.
 # [SECTION: IMPORTS]
@@ -10,6 +11,7 @@ from typing import Dict, Any, Optional
 from ..core.plugin_base import PluginBase
 from ..core.superset_client import SupersetClient
 from ..core.logger import logger, belief_scope
+from ..core.task_manager.context import TaskContext
 # [/SECTION]
 # [DEF:DebugPlugin:Class]
@@ -114,20 +116,29 @@
     # [/DEF:get_schema:Function]

     # [DEF:execute:Function]
-    # @PURPOSE: Executes the debug logic.
+    # @PURPOSE: Executes the debug logic with TaskContext support.
     # @PARAM: params (Dict[str, Any]) - Debug parameters.
+    # @PARAM: context (Optional[TaskContext]) - Task context for logging with source attribution.
     # @PRE: action must be provided in params.
     # @POST: Debug action is executed and results returned.
     # @RETURN: Dict[str, Any] - Execution results.
-    async def execute(self, params: Dict[str, Any]) -> Dict[str, Any]:
+    async def execute(self, params: Dict[str, Any], context: Optional[TaskContext] = None) -> Dict[str, Any]:
         with belief_scope("execute"):
             action = params.get("action")
+            # Use TaskContext logger if available, otherwise fall back to app logger
+            log = context.logger if context else logger
+            debug_log = log.with_source("debug") if context else log
+            superset_log = log.with_source("superset_api") if context else log
+            debug_log.info(f"Executing debug action: {action}")
             if action == "test-db-api":
-                return await self._test_db_api(params)
+                return await self._test_db_api(params, superset_log)
             elif action == "get-dataset-structure":
-                return await self._get_dataset_structure(params)
+                return await self._get_dataset_structure(params, superset_log)
             else:
+                debug_log.error(f"Unknown action: {action}")
                 raise ValueError(f"Unknown action: {action}")
     # [/DEF:execute:Function]
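The `log = context.logger if context else logger` fallback used in this hunk (and repeated in the Git and LLM plugins below) can be sketched in isolation. This is a minimal illustration, assuming a hypothetical `TaskContext` whose logger exposes a `with_source` method; the stand-in classes below are not the project's actual API.

```python
import logging

# Hypothetical stand-ins for the project's TaskContext and source-tagged logger.
class SourceLogger:
    def __init__(self, base: logging.Logger, source: str = "app"):
        self._base = base
        self.source = source

    def with_source(self, source: str) -> "SourceLogger":
        # Derive a child logger that tags records with a component name
        return SourceLogger(self._base, source)

    def info(self, msg: str) -> None:
        self._base.info("[%s] %s", self.source, msg)

class TaskContext:
    def __init__(self, logger: SourceLogger):
        self.logger = logger

app_logger = SourceLogger(logging.getLogger("app"))

def resolve_loggers(context):
    # Fall back to the application logger when no task context is supplied
    log = context.logger if context else app_logger
    debug_log = log.with_source("debug") if context else log
    return log, debug_log

_, debug_log = resolve_loggers(TaskContext(SourceLogger(logging.getLogger("task"))))
print(debug_log.source)   # "debug" — source attribution comes from the task context
_, fallback = resolve_loggers(None)
print(fallback.source)    # "app" — plain app logger, no per-component source
```

The point of the pattern is that plugins stay runnable outside the task manager: with no context, every sub-logger degrades to the shared application logger.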
@@ -136,33 +147,37 @@
     # @PRE: source_env and target_env params exist in params.
     # @POST: Returns DB counts for both envs.
     # @PARAM: params (Dict) - Plugin parameters.
+    # @PARAM: log - Logger instance for superset_api source.
     # @RETURN: Dict - Comparison results.
-    async def _test_db_api(self, params: Dict[str, Any]) -> Dict[str, Any]:
+    async def _test_db_api(self, params: Dict[str, Any], log) -> Dict[str, Any]:
         with belief_scope("_test_db_api"):
             source_env_name = params.get("source_env")
             target_env_name = params.get("target_env")
             if not source_env_name or not target_env_name:
                 raise ValueError("source_env and target_env are required for test-db-api")

             from ..dependencies import get_config_manager
             config_manager = get_config_manager()

             results = {}
             for name in [source_env_name, target_env_name]:
+                log.info(f"Testing database API for environment: {name}")
                 env_config = config_manager.get_environment(name)
                 if not env_config:
+                    log.error(f"Environment '{name}' not found.")
                     raise ValueError(f"Environment '{name}' not found.")

                 client = SupersetClient(env_config)
                 client.authenticate()
                 count, dbs = client.get_databases()
+                log.debug(f"Found {count} databases in {name}")
                 results[name] = {
                     "count": count,
                     "databases": dbs
                 }

             return results
     # [/DEF:_test_db_api:Function]
     # [DEF:_get_dataset_structure:Function]
@@ -170,26 +185,31 @@ class DebugPlugin(PluginBase):
     # @PRE: env and dataset_id params exist in params.
     # @POST: Returns dataset JSON structure.
     # @PARAM: params (Dict) - Plugin parameters.
+    # @PARAM: log - Logger instance for superset_api source.
     # @RETURN: Dict - Dataset structure.
-    async def _get_dataset_structure(self, params: Dict[str, Any]) -> Dict[str, Any]:
+    async def _get_dataset_structure(self, params: Dict[str, Any], log) -> Dict[str, Any]:
         with belief_scope("_get_dataset_structure"):
             env_name = params.get("env")
             dataset_id = params.get("dataset_id")
             if not env_name or dataset_id is None:
                 raise ValueError("env and dataset_id are required for get-dataset-structure")
+            log.info(f"Fetching structure for dataset {dataset_id} in {env_name}")

             from ..dependencies import get_config_manager
             config_manager = get_config_manager()
             env_config = config_manager.get_environment(env_name)
             if not env_config:
+                log.error(f"Environment '{env_name}' not found.")
                 raise ValueError(f"Environment '{env_name}' not found.")

             client = SupersetClient(env_config)
             client.authenticate()
             dataset_response = client.get_dataset(dataset_id)
+            log.debug(f"Retrieved dataset structure for {dataset_id}")
             return dataset_response.get('result') or {}
     # [/DEF:_get_dataset_structure:Function]
 # [/DEF:DebugPlugin:Class]


@@ -0,0 +1,66 @@
# [DEF:backend/src/plugins/git/llm_extension:Module]
# @TIER: STANDARD
# @SEMANTICS: git, llm, commit
# @PURPOSE: LLM-based extensions for the Git plugin, specifically for commit message generation.
# @LAYER: Domain
# @RELATION: DEPENDS_ON -> backend.src.plugins.llm_analysis.service.LLMClient

from typing import List
from tenacity import retry, stop_after_attempt, wait_exponential

from ..llm_analysis.service import LLMClient
from ...core.logger import belief_scope, logger

# [DEF:GitLLMExtension:Class]
# @PURPOSE: Provides LLM capabilities to the Git plugin.
class GitLLMExtension:
    def __init__(self, client: LLMClient):
        self.client = client

    # [DEF:suggest_commit_message:Function]
    # @PURPOSE: Generates a suggested commit message based on a diff and history.
    # @PARAM: diff (str) - The git diff of staged changes.
    # @PARAM: history (List[str]) - Recent commit messages for context.
    # @RETURN: str - The suggested commit message.
    @retry(
        stop=stop_after_attempt(2),
        wait=wait_exponential(multiplier=1, min=2, max=10),
        reraise=True
    )
    async def suggest_commit_message(self, diff: str, history: List[str]) -> str:
        with belief_scope("suggest_commit_message"):
            history_text = "\n".join(history)
            prompt = f"""
Generate a concise and professional git commit message based on the following diff and recent history.
Use Conventional Commits format (e.g., feat: ..., fix: ..., docs: ...).

Recent History:
{history_text}

Diff:
{diff}

Commit Message:
"""
            logger.debug(f"[suggest_commit_message] Calling LLM with model: {self.client.default_model}")
            response = await self.client.client.chat.completions.create(
                model=self.client.default_model,
                messages=[{"role": "user", "content": prompt}],
                temperature=0.7
            )
            logger.debug(f"[suggest_commit_message] LLM Response: {response}")

            if not response or not hasattr(response, 'choices') or not response.choices:
                error_info = getattr(response, 'error', 'No choices in response')
                logger.error(f"[suggest_commit_message] Invalid LLM response. Error info: {error_info}")
                # A timeout/provider error could be re-raised here to trigger the retry decorator,
                # but for now we return a safe fallback to avoid a UI crash.
                return "Update dashboard configurations (LLM generation failed)"

            return response.choices[0].message.content.strip()
    # [/DEF:suggest_commit_message:Function]
# [/DEF:GitLLMExtension:Class]
# [/DEF:backend/src/plugins/git/llm_extension:Module]
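The `@retry` decorator on `suggest_commit_message` comes from `tenacity`: `stop_after_attempt(2)` with `wait_exponential` means one retry after a growing delay, and `reraise=True` re-raises the last error afterwards. A minimal pure-Python sketch of that behavior (an illustration only, not the actual tenacity implementation):

```python
import time

def retry(stop_attempts: int = 2, base_delay: float = 0.01):
    # Simplified equivalent of tenacity's stop_after_attempt + wait_exponential:
    # retry the wrapped function, doubling the wait, and re-raise the last error.
    def decorator(fn):
        def wrapper(*args, **kwargs):
            delay = base_delay
            for attempt in range(1, stop_attempts + 1):
                try:
                    return fn(*args, **kwargs)
                except Exception:
                    if attempt == stop_attempts:
                        raise  # equivalent to tenacity's reraise=True
                    time.sleep(delay)
                    delay *= 2
        return wrapper
    return decorator

calls = {"n": 0}

@retry(stop_attempts=2)
def flaky():
    # Fails on the first call, succeeds on the retry
    calls["n"] += 1
    if calls["n"] < 2:
        raise RuntimeError("transient provider error")
    return "feat: add commit message generation"

print(flaky())      # succeeds on the second attempt
print(calls["n"])   # 2
```

Note that because the method above returns a fallback string instead of raising on a malformed LLM response, the retry only fires on exceptions raised by the API call itself.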


@@ -7,6 +7,7 @@
 # @RELATION: USES -> src.services.git_service.GitService
 # @RELATION: USES -> src.core.superset_client.SupersetClient
 # @RELATION: USES -> src.core.config_manager.ConfigManager
+# @RELATION: USES -> TaskContext
 #
 # @INVARIANT: All Git operations must be performed through GitService.
 # @CONSTRAINT: The plugin works only with unpacked Superset YAML exports.
@@ -20,9 +21,10 @@ from pathlib import Path
 from typing import Dict, Any, Optional
 from src.core.plugin_base import PluginBase
 from src.services.git_service import GitService
-from src.core.logger import logger, belief_scope
+from src.core.logger import logger as app_logger, belief_scope
 from src.core.config_manager import ConfigManager
 from src.core.superset_client import SupersetClient
+from src.core.task_manager.context import TaskContext
 # [/SECTION]

 # [DEF:GitPlugin:Class]
@@ -35,7 +37,7 @@ class GitPlugin(PluginBase):
     # @POST: git_service and config_manager are initialized.
     def __init__(self):
         with belief_scope("GitPlugin.__init__"):
-            logger.info("[GitPlugin.__init__][Entry] Initializing GitPlugin.")
+            app_logger.info("Initializing GitPlugin.")
             self.git_service = GitService()

             # Robust config path resolution:
@@ -50,13 +52,13 @@ class GitPlugin(PluginBase):
             try:
                 from src.dependencies import config_manager
                 self.config_manager = config_manager
-                logger.info("[GitPlugin.__init__][Exit] GitPlugin initialized using shared config_manager.")
+                app_logger.info("GitPlugin initialized using shared config_manager.")
                 return
-            except:
+            except Exception:
                 config_path = "config.json"

             self.config_manager = ConfigManager(config_path)
-            logger.info(f"[GitPlugin.__init__][Exit] GitPlugin initialized with {config_path}")
+            app_logger.info(f"GitPlugin initialized with {config_path}")
     # [/DEF:__init__:Function]

     @property
@@ -133,36 +135,44 @@ class GitPlugin(PluginBase):
     # @POST: The plugin is ready to execute tasks.
     async def initialize(self):
         with belief_scope("GitPlugin.initialize"):
-            logger.info("[GitPlugin.initialize][Action] Initializing Git Integration Plugin logic.")
+            app_logger.info("[GitPlugin.initialize][Action] Initializing Git Integration Plugin logic.")

     # [DEF:execute:Function]
-    # @PURPOSE: Main method for executing plugin tasks.
+    # @PURPOSE: Main method for executing plugin tasks, with TaskContext support.
     # @PRE: task_data contains 'operation' and 'dashboard_id'.
     # @POST: Returns the result of the executed operation.
     # @PARAM: task_data (Dict[str, Any]) - Task data.
+    # @PARAM: context (Optional[TaskContext]) - Task context for logging with source attribution.
     # @RETURN: Dict[str, Any] - Status and message.
     # @RELATION: CALLS -> self._handle_sync
     # @RELATION: CALLS -> self._handle_deploy
-    async def execute(self, task_data: Dict[str, Any]) -> Dict[str, Any]:
+    async def execute(self, task_data: Dict[str, Any], context: Optional[TaskContext] = None) -> Dict[str, Any]:
         with belief_scope("GitPlugin.execute"):
             operation = task_data.get("operation")
             dashboard_id = task_data.get("dashboard_id")
-            logger.info(f"[GitPlugin.execute][Entry] Executing operation: {operation} for dashboard {dashboard_id}")
+            # Use TaskContext logger if available, otherwise fall back to app_logger
+            log = context.logger if context else app_logger
+            # Create sub-loggers for different components
+            git_log = log.with_source("git") if context else log
+            superset_log = log.with_source("superset_api") if context else log
+            log.info(f"Executing operation: {operation} for dashboard {dashboard_id}")

             if operation == "sync":
                 source_env_id = task_data.get("source_env_id")
-                result = await self._handle_sync(dashboard_id, source_env_id)
+                result = await self._handle_sync(dashboard_id, source_env_id, log, git_log, superset_log)
             elif operation == "deploy":
                 env_id = task_data.get("environment_id")
-                result = await self._handle_deploy(dashboard_id, env_id)
+                result = await self._handle_deploy(dashboard_id, env_id, log, git_log, superset_log)
             elif operation == "history":
                 result = {"status": "success", "message": "History available via API"}
             else:
-                logger.error(f"[GitPlugin.execute][Coherence:Failed] Unknown operation: {operation}")
+                log.error(f"Unknown operation: {operation}")
                 raise ValueError(f"Unknown operation: {operation}")

-            logger.info(f"[GitPlugin.execute][Exit] Operation {operation} completed.")
+            log.info(f"Operation {operation} completed.")
             return result
     # [/DEF:execute:Function]
@@ -176,13 +186,13 @@ class GitPlugin(PluginBase):
     # @SIDE_EFFECT: Modifies files in the repository's local working directory.
     # @RELATION: CALLS -> src.services.git_service.GitService.get_repo
     # @RELATION: CALLS -> src.core.superset_client.SupersetClient.export_dashboard
-    async def _handle_sync(self, dashboard_id: int, source_env_id: Optional[str] = None) -> Dict[str, str]:
+    async def _handle_sync(self, dashboard_id: int, source_env_id: Optional[str] = None, log=None, git_log=None, superset_log=None) -> Dict[str, str]:
         with belief_scope("GitPlugin._handle_sync"):
             try:
                 # 1. Get the repository
                 repo = self.git_service.get_repo(dashboard_id)
                 repo_path = Path(repo.working_dir)
-                logger.info(f"[_handle_sync][Action] Target repo path: {repo_path}")
+                git_log.info(f"Target repo path: {repo_path}")

                 # 2. Set up the Superset client
                 env = self._get_env(source_env_id)
@@ -190,11 +200,11 @@ class GitPlugin(PluginBase):
                 client.authenticate()

                 # 3. Export the dashboard
-                logger.info(f"[_handle_sync][Action] Exporting dashboard {dashboard_id} from {env.name}")
+                superset_log.info(f"Exporting dashboard {dashboard_id} from {env.name}")
                 zip_bytes, _ = client.export_dashboard(dashboard_id)

                 # 4. Unpack with structure flattening
-                logger.info(f"[_handle_sync][Action] Unpacking export to {repo_path}")
+                git_log.info(f"Unpacking export to {repo_path}")

                 # Folders/files we expect from Superset
                 managed_dirs = ["dashboards", "charts", "datasets", "databases"]
@@ -218,7 +228,7 @@ class GitPlugin(PluginBase):
                     raise ValueError("Export ZIP is empty")

                 root_folder = namelist[0].split('/')[0]
-                logger.info(f"[_handle_sync][Action] Detected root folder in ZIP: {root_folder}")
+                git_log.info(f"Detected root folder in ZIP: {root_folder}")

                 for member in zf.infolist():
                     if member.filename.startswith(root_folder + "/") and len(member.filename) > len(root_folder) + 1:
@@ -236,15 +246,15 @@ class GitPlugin(PluginBase):
                 # 5. Automatically stage changes (not a commit, so the user can review the diff)
                 try:
                     repo.git.add(A=True)
-                    logger.info(f"[_handle_sync][Action] Changes staged in git")
+                    app_logger.info("[_handle_sync][Action] Changes staged in git")
                 except Exception as ge:
-                    logger.warning(f"[_handle_sync][Action] Failed to stage changes: {ge}")
+                    app_logger.warning(f"[_handle_sync][Action] Failed to stage changes: {ge}")

-                logger.info(f"[_handle_sync][Coherence:OK] Dashboard {dashboard_id} synced successfully.")
+                app_logger.info(f"[_handle_sync][Coherence:OK] Dashboard {dashboard_id} synced successfully.")
                 return {"status": "success", "message": "Dashboard synced and flattened in local repository"}
             except Exception as e:
-                logger.error(f"[_handle_sync][Coherence:Failed] Sync failed: {e}")
+                app_logger.error(f"[_handle_sync][Coherence:Failed] Sync failed: {e}")
                 raise
     # [/DEF:_handle_sync:Function]
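The unpack-and-flatten step in `_handle_sync` strips the export's top-level folder so that `dashboards/`, `charts/`, etc. land directly in the repo root. A minimal sketch of that path-flattening logic against an in-memory ZIP (the helper name is illustrative, not the plugin's actual API):

```python
import io
import zipfile

def flatten_members(zip_bytes: bytes) -> dict:
    # Superset exports wrap everything in a root folder such as
    # dashboard_export_20240101T000000/; strip that prefix from every member.
    out = {}
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as zf:
        namelist = zf.namelist()
        if not namelist:
            raise ValueError("Export ZIP is empty")
        root_folder = namelist[0].split('/')[0]
        for member in zf.infolist():
            name = member.filename
            # Same startswith/length check as in the hunk above
            if name.startswith(root_folder + "/") and len(name) > len(root_folder) + 1:
                flat = name[len(root_folder) + 1:]
                if not flat.endswith('/'):
                    out[flat] = zf.read(member).decode()
    return out

# Build a fake export in memory
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("dashboard_export_20240101T000000/dashboards/sales.yaml", "slug: sales")
    zf.writestr("dashboard_export_20240101T000000/metadata.yaml", "version: 1.0.0")

files = flatten_members(buf.getvalue())
print(sorted(files))  # ['dashboards/sales.yaml', 'metadata.yaml']
```

Flattening here matters because the export's root folder name changes on every export; committing it verbatim would make every sync look like a full file move in git.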
@@ -254,10 +264,13 @@ class GitPlugin(PluginBase):
     # @POST: The dashboard is imported into the target Superset.
     # @PARAM: dashboard_id (int) - Dashboard ID.
     # @PARAM: env_id (str) - Target environment ID.
+    # @PARAM: log - Main logger instance.
+    # @PARAM: git_log - Git-specific logger instance.
+    # @PARAM: superset_log - Superset API-specific logger instance.
     # @RETURN: Dict[str, Any] - Deployment result.
     # @SIDE_EFFECT: Creates and deletes a temporary ZIP file.
     # @RELATION: CALLS -> src.core.superset_client.SupersetClient.import_dashboard
-    async def _handle_deploy(self, dashboard_id: int, env_id: str) -> Dict[str, Any]:
+    async def _handle_deploy(self, dashboard_id: int, env_id: str, log=None, git_log=None, superset_log=None) -> Dict[str, Any]:
         with belief_scope("GitPlugin._handle_deploy"):
             try:
                 if not env_id:
@@ -268,7 +281,7 @@ class GitPlugin(PluginBase):
                 repo_path = Path(repo.working_dir)

                 # 2. Pack into a ZIP
-                logger.info(f"[_handle_deploy][Action] Packing repository {repo_path} for deployment.")
+                git_log.info(f"Packing repository {repo_path} for deployment.")
                 zip_buffer = io.BytesIO()

                 # Superset expects a root directory in the ZIP (e.g., dashboard_export_20240101T000000/)
@@ -279,7 +292,8 @@ class GitPlugin(PluginBase):
                     if ".git" in dirs:
                         dirs.remove(".git")
                     for file in files:
-                        if file == ".git" or file.endswith(".zip"): continue
+                        if file == ".git" or file.endswith(".zip"):
+                            continue
                         file_path = Path(root) / file
                         # Prepend the root directory name to the archive path
                         arcname = Path(root_dir_name) / file_path.relative_to(repo_path)
@@ -297,21 +311,21 @@ class GitPlugin(PluginBase):
                 # 4. Import
                 temp_zip_path = repo_path / f"deploy_{dashboard_id}.zip"
-                logger.info(f"[_handle_deploy][Action] Saving temporary zip to {temp_zip_path}")
+                git_log.info(f"Saving temporary zip to {temp_zip_path}")
                 with open(temp_zip_path, "wb") as f:
                     f.write(zip_buffer.getvalue())

                 try:
-                    logger.info(f"[_handle_deploy][Action] Importing dashboard to {env.name}")
+                    app_logger.info(f"[_handle_deploy][Action] Importing dashboard to {env.name}")
                     result = client.import_dashboard(temp_zip_path)
-                    logger.info(f"[_handle_deploy][Coherence:OK] Deployment successful for dashboard {dashboard_id}.")
+                    app_logger.info(f"[_handle_deploy][Coherence:OK] Deployment successful for dashboard {dashboard_id}.")
                     return {"status": "success", "message": f"Dashboard deployed to {env.name}", "details": result}
                 finally:
                     if temp_zip_path.exists():
                         os.remove(temp_zip_path)
             except Exception as e:
-                logger.error(f"[_handle_deploy][Coherence:Failed] Deployment failed: {e}")
+                app_logger.error(f"[_handle_deploy][Coherence:Failed] Deployment failed: {e}")
                 raise
     # [/DEF:_handle_deploy:Function]
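The packing step in `_handle_deploy` is the inverse of the sync flattening: every file is re-prefixed with a single root directory because Superset's import expects one, while `.git` contents and stray ZIPs are excluded. A self-contained sketch of that logic (the `pack_repo` helper is illustrative, not the plugin's actual API):

```python
import io
import tempfile
import zipfile
from pathlib import Path

def pack_repo(repo_path: Path, root_dir_name: str) -> bytes:
    # Prefix every archive path with root_dir_name, skipping .git and *.zip,
    # mirroring the walk in _handle_deploy above.
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        for file_path in sorted(repo_path.rglob("*")):
            if file_path.is_dir():
                continue
            rel = file_path.relative_to(repo_path)
            if ".git" in rel.parts or rel.suffix == ".zip":
                continue
            arcname = Path(root_dir_name) / rel
            zf.write(file_path, arcname.as_posix())
    return buf.getvalue()

with tempfile.TemporaryDirectory() as tmp:
    repo = Path(tmp)
    (repo / "dashboards").mkdir()
    (repo / "dashboards" / "sales.yaml").write_text("slug: sales")
    (repo / ".git").mkdir()
    (repo / ".git" / "HEAD").write_text("ref: refs/heads/main")
    (repo / "old.zip").write_bytes(b"stale")

    data = pack_repo(repo, "dashboard_export_20240101T000000")
    with zipfile.ZipFile(io.BytesIO(data)) as zf:
        print(zf.namelist())  # ['dashboard_export_20240101T000000/dashboards/sales.yaml']
```

Excluding `*.zip` also guards against the temporary `deploy_<id>.zip` written into the repo root being picked up by a later deploy.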
@@ -323,13 +337,13 @@ class GitPlugin(PluginBase):
     # @RETURN: Environment - The environment configuration object.
     def _get_env(self, env_id: Optional[str] = None):
         with belief_scope("GitPlugin._get_env"):
-            logger.info(f"[_get_env][Entry] Fetching environment for ID: {env_id}")
+            app_logger.info(f"[_get_env][Entry] Fetching environment for ID: {env_id}")

             # Priority 1: ConfigManager (config.json)
             if env_id:
                 env = self.config_manager.get_environment(env_id)
                 if env:
-                    logger.info(f"[_get_env][Exit] Found environment by ID in ConfigManager: {env.name}")
+                    app_logger.info(f"[_get_env][Exit] Found environment by ID in ConfigManager: {env.name}")
                     return env

             # Priority 2: Database (DeploymentEnvironment)
@@ -342,12 +356,12 @@ class GitPlugin(PluginBase):
                     db_env = db.query(DeploymentEnvironment).filter(DeploymentEnvironment.id == env_id).first()
                 else:
                     # If no ID, try to find active or any environment in DB
-                    db_env = db.query(DeploymentEnvironment).filter(DeploymentEnvironment.is_active == True).first()
+                    db_env = db.query(DeploymentEnvironment).filter(DeploymentEnvironment.is_active).first()
                     if not db_env:
                         db_env = db.query(DeploymentEnvironment).first()

                 if db_env:
-                    logger.info(f"[_get_env][Exit] Found environment in DB: {db_env.name}")
+                    app_logger.info(f"[_get_env][Exit] Found environment in DB: {db_env.name}")
                     from src.core.config_models import Environment
                     # Use token as password for SupersetClient
                     return Environment(
@@ -369,14 +383,14 @@ class GitPlugin(PluginBase):
                 # but we have other envs, maybe it's one of them?
                 env = next((e for e in envs if e.id == env_id), None)
                 if env:
-                    logger.info(f"[_get_env][Exit] Found environment {env_id} in ConfigManager list")
+                    app_logger.info(f"[_get_env][Exit] Found environment {env_id} in ConfigManager list")
                     return env

             if not env_id:
-                logger.info(f"[_get_env][Exit] Using first environment from ConfigManager: {envs[0].name}")
+                app_logger.info(f"[_get_env][Exit] Using first environment from ConfigManager: {envs[0].name}")
                 return envs[0]

-            logger.error(f"[_get_env][Coherence:Failed] No environments configured (searched config.json and DB). env_id={env_id}")
+            app_logger.error(f"[_get_env][Coherence:Failed] No environments configured (searched config.json and DB). env_id={env_id}")
             raise ValueError("No environments configured. Please add a Superset Environment in Settings.")
     # [/DEF:_get_env:Function]


@@ -0,0 +1,14 @@
# [DEF:backend/src/plugins/llm_analysis/__init__.py:Module]
# @TIER: TRIVIAL
# @PURPOSE: Initialize the LLM Analysis plugin package.
# @LAYER: Domain
"""
LLM Analysis Plugin for automated dashboard validation and dataset documentation.
"""
from .plugin import DashboardValidationPlugin, DocumentationPlugin
__all__ = ['DashboardValidationPlugin', 'DocumentationPlugin']
# [/DEF:backend/src/plugins/llm_analysis/__init__.py:Module]


@@ -0,0 +1,61 @@
# [DEF:backend/src/plugins/llm_analysis/models.py:Module]
# @TIER: STANDARD
# @SEMANTICS: pydantic, models, llm
# @PURPOSE: Define Pydantic models for LLM Analysis plugin.
# @LAYER: Domain
from typing import List, Optional
from pydantic import BaseModel, Field
from datetime import datetime
from enum import Enum
# [DEF:LLMProviderType:Class]
# @PURPOSE: Enum for supported LLM providers.
class LLMProviderType(str, Enum):
    OPENAI = "openai"
    OPENROUTER = "openrouter"
    KILO = "kilo"
# [/DEF:LLMProviderType:Class]

# [DEF:LLMProviderConfig:Class]
# @PURPOSE: Configuration for an LLM provider.
class LLMProviderConfig(BaseModel):
    id: Optional[str] = None
    provider_type: LLMProviderType
    name: str
    base_url: str
    api_key: Optional[str] = None
    default_model: str
    is_active: bool = True
# [/DEF:LLMProviderConfig:Class]

# [DEF:ValidationStatus:Class]
# @PURPOSE: Enum for dashboard validation status.
class ValidationStatus(str, Enum):
    PASS = "PASS"
    WARN = "WARN"
    FAIL = "FAIL"
# [/DEF:ValidationStatus:Class]

# [DEF:DetectedIssue:Class]
# @PURPOSE: Model for a single issue detected during validation.
class DetectedIssue(BaseModel):
    severity: ValidationStatus
    message: str
    location: Optional[str] = None
# [/DEF:DetectedIssue:Class]

# [DEF:ValidationResult:Class]
# @PURPOSE: Model for dashboard validation result.
class ValidationResult(BaseModel):
    id: Optional[str] = None
    dashboard_id: str
    timestamp: datetime = Field(default_factory=datetime.utcnow)
    status: ValidationStatus
    screenshot_path: Optional[str] = None
    issues: List[DetectedIssue]
    summary: str
    raw_response: Optional[str] = None
# [/DEF:ValidationResult:Class]
# [/DEF:backend/src/plugins/llm_analysis/models.py:Module]
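To make the payload shape concrete, here is a stdlib-only approximation of the models above using dataclasses (the real module uses `pydantic.BaseModel` with the same field names; the sample values are invented for illustration):

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime
from enum import Enum
from typing import List, Optional

class ValidationStatus(str, Enum):
    PASS = "PASS"
    WARN = "WARN"
    FAIL = "FAIL"

@dataclass
class DetectedIssue:
    severity: ValidationStatus
    message: str
    location: Optional[str] = None

@dataclass
class ValidationResult:
    dashboard_id: str
    status: ValidationStatus
    issues: List[DetectedIssue]
    summary: str
    timestamp: datetime = field(default_factory=datetime.utcnow)

# Example payload a validation run might produce
result = ValidationResult(
    dashboard_id="42",
    status=ValidationStatus.WARN,
    issues=[DetectedIssue(severity=ValidationStatus.WARN,
                          message="Chart rendered empty",
                          location="row 2, col 1")],
    summary="1 warning detected",
)
payload = asdict(result)
print(payload["status"] == "WARN")  # True — str-Enum members compare equal to their value
print(len(payload["issues"]))       # 1
```

Because the enums subclass `str`, serialized payloads compare and JSON-encode as plain strings, which is what the frontend consumes.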


@@ -0,0 +1,391 @@
# [DEF:backend/src/plugins/llm_analysis/plugin.py:Module]
# @TIER: STANDARD
# @SEMANTICS: plugin, llm, analysis, documentation
# @PURPOSE: Implements DashboardValidationPlugin and DocumentationPlugin.
# @LAYER: Domain
# @RELATION: INHERITS -> backend.src.core.plugin_base.PluginBase
# @RELATION: CALLS -> backend.src.plugins.llm_analysis.service.ScreenshotService
# @RELATION: CALLS -> backend.src.plugins.llm_analysis.service.LLMClient
# @RELATION: CALLS -> backend.src.services.llm_provider.LLMProviderService
# @RELATION: USES -> TaskContext
# @INVARIANT: All LLM interactions must be executed as asynchronous tasks.
from typing import Dict, Any, Optional
import os
import json
from datetime import datetime, timedelta
from ...core.plugin_base import PluginBase
from ...core.logger import belief_scope, logger
from ...core.database import SessionLocal
from ...services.llm_provider import LLMProviderService
from ...core.superset_client import SupersetClient
from .service import ScreenshotService, LLMClient
from .models import LLMProviderType, ValidationStatus, ValidationResult, DetectedIssue
from ...models.llm import ValidationRecord
from ...core.task_manager.context import TaskContext
# [DEF:DashboardValidationPlugin:Class]
# @PURPOSE: Plugin for automated dashboard health analysis using LLMs.
# @RELATION: IMPLEMENTS -> backend.src.core.plugin_base.PluginBase
class DashboardValidationPlugin(PluginBase):
@property
def id(self) -> str:
return "llm_dashboard_validation"
@property
def name(self) -> str:
return "Dashboard LLM Validation"
@property
def description(self) -> str:
return "Automated dashboard health analysis using multimodal LLMs."
@property
def version(self) -> str:
return "1.0.0"
def get_schema(self) -> Dict[str, Any]:
return {
"type": "object",
"properties": {
"dashboard_id": {"type": "string", "title": "Dashboard ID"},
"environment_id": {"type": "string", "title": "Environment ID"},
"provider_id": {"type": "string", "title": "LLM Provider ID"}
},
"required": ["dashboard_id", "environment_id", "provider_id"]
}
# [DEF:DashboardValidationPlugin.execute:Function]
# @PURPOSE: Executes the dashboard validation task with TaskContext support.
# @PARAM: params (Dict[str, Any]) - Validation parameters.
# @PARAM: context (Optional[TaskContext]) - Task context for logging with source attribution.
# @PRE: params contains dashboard_id, environment_id, and provider_id.
# @POST: Returns a dictionary with validation results and persists them to the database.
# @SIDE_EFFECT: Captures a screenshot, calls LLM API, and writes to the database.
async def execute(self, params: Dict[str, Any], context: Optional[TaskContext] = None):
with belief_scope("execute", f"plugin_id={self.id}"):
# Use TaskContext logger if available, otherwise fall back to app logger
log = context.logger if context else logger
# Create sub-loggers for different components
llm_log = log.with_source("llm") if context else log
screenshot_log = log.with_source("screenshot") if context else log
superset_log = log.with_source("superset_api") if context else log
log.info(f"Executing {self.name} with params: {params}")
dashboard_id = params.get("dashboard_id")
env_id = params.get("environment_id")
provider_id = params.get("provider_id")
db = SessionLocal()
try:
# 1. Get Environment
from ...dependencies import get_config_manager
config_mgr = get_config_manager()
env = config_mgr.get_environment(env_id)
if not env:
log.error(f"Environment {env_id} not found")
raise ValueError(f"Environment {env_id} not found")
# 2. Get LLM Provider
llm_service = LLMProviderService(db)
db_provider = llm_service.get_provider(provider_id)
if not db_provider:
log.error(f"LLM Provider {provider_id} not found")
raise ValueError(f"LLM Provider {provider_id} not found")
llm_log.debug("Retrieved provider config:")
llm_log.debug(f" Provider ID: {db_provider.id}")
llm_log.debug(f" Provider Name: {db_provider.name}")
llm_log.debug(f" Provider Type: {db_provider.provider_type}")
llm_log.debug(f" Base URL: {db_provider.base_url}")
llm_log.debug(f" Default Model: {db_provider.default_model}")
llm_log.debug(f" Is Active: {db_provider.is_active}")
api_key = llm_service.get_decrypted_api_key(provider_id)
llm_log.debug(f"API Key decrypted (first 8 chars): {api_key[:8] if api_key and len(api_key) > 8 else 'EMPTY_OR_NONE'}...")
# Check if API key was successfully decrypted
if not api_key:
raise ValueError(
f"Failed to decrypt API key for provider {provider_id}. "
f"The provider may have been encrypted with a different encryption key. "
f"Please update the provider with a new API key through the UI."
)
# 3. Capture Screenshot
screenshot_service = ScreenshotService(env)
storage_root = config_mgr.get_config().settings.storage.root_path
screenshots_dir = os.path.join(storage_root, "screenshots")
os.makedirs(screenshots_dir, exist_ok=True)
filename = f"{dashboard_id}_{datetime.now().strftime('%Y%m%d_%H%M%S')}.png"
screenshot_path = os.path.join(screenshots_dir, filename)
screenshot_log.info(f"Capturing screenshot for dashboard {dashboard_id}")
await screenshot_service.capture_dashboard(dashboard_id, screenshot_path)
screenshot_log.debug(f"Screenshot saved to: {screenshot_path}")
# 4. Fetch Logs (from Environment /api/v1/log/)
logs = []
try:
client = SupersetClient(env)
# Calculate time window (last 24 hours)
start_time = (datetime.now() - timedelta(hours=24)).isoformat()
# Construct filter for logs
# Note: keep only log entries whose dashboard_id matches the target dashboard
query_params = {
"filters": [
{"col": "dashboard_id", "opr": "eq", "value": dashboard_id},
{"col": "dttm", "opr": "gt", "value": start_time}
],
"order_column": "dttm",
"order_direction": "desc",
"page": 0,
"page_size": 100
}
superset_log.debug(f"Fetching logs for dashboard {dashboard_id}")
response = client.network.request(
method="GET",
endpoint="/log/",
params={"q": json.dumps(query_params)}
)
if isinstance(response, dict) and "result" in response:
for item in response["result"]:
action = item.get("action", "unknown")
dttm = item.get("dttm", "")
details = item.get("json", "")
logs.append(f"[{dttm}] {action}: {details}")
if not logs:
logs = ["No recent logs found for this dashboard."]
superset_log.debug("No recent logs found for this dashboard")
except Exception as e:
superset_log.warning(f"Failed to fetch logs from environment: {e}")
logs = [f"Error fetching remote logs: {str(e)}"]
# 5. Analyze with LLM
llm_client = LLMClient(
provider_type=LLMProviderType(db_provider.provider_type),
api_key=api_key,
base_url=db_provider.base_url,
default_model=db_provider.default_model
)
llm_log.info(f"Analyzing dashboard {dashboard_id} with LLM")
analysis = await llm_client.analyze_dashboard(screenshot_path, logs)
# Log analysis summary to task logs for better visibility
llm_log.info(f"[ANALYSIS_SUMMARY] Status: {analysis['status']}")
llm_log.info(f"[ANALYSIS_SUMMARY] Summary: {analysis['summary']}")
if analysis.get("issues"):
for i, issue in enumerate(analysis["issues"]):
llm_log.info(f"[ANALYSIS_ISSUE][{i+1}] {issue.get('severity')}: {issue.get('message')} (Location: {issue.get('location', 'N/A')})")
# 6. Persist Result
validation_result = ValidationResult(
dashboard_id=dashboard_id,
status=ValidationStatus(analysis["status"]),
summary=analysis["summary"],
issues=[DetectedIssue(**issue) for issue in analysis["issues"]],
screenshot_path=screenshot_path,
raw_response=str(analysis)
)
db_record = ValidationRecord(
dashboard_id=validation_result.dashboard_id,
status=validation_result.status.value,
summary=validation_result.summary,
issues=[issue.dict() for issue in validation_result.issues],
screenshot_path=validation_result.screenshot_path,
raw_response=validation_result.raw_response
)
db.add(db_record)
db.commit()
# 7. Notification on failure (US1 / FR-015)
if validation_result.status == ValidationStatus.FAIL:
log.warning(f"Dashboard {dashboard_id} validation FAILED. Summary: {validation_result.summary}")
# Placeholder for Email/Pulse notification dispatch
# In a real implementation, we would call a NotificationService here
# with a payload containing the summary and a link to the report.
# Final log to ensure all analysis is visible in task logs
log.info(f"Validation completed for dashboard {dashboard_id}. Status: {validation_result.status.value}")
return validation_result.dict()
finally:
db.close()
# [/DEF:DashboardValidationPlugin.execute:Function]
# [/DEF:DashboardValidationPlugin:Class]
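The log-fetch step (4) above serializes a filter object into the `q` parameter of the environment's `/log/` list endpoint. A minimal, self-contained sketch of that query construction, with column names and paging values copied from the plugin (note: some Superset versions expect rison rather than JSON in `q`, so the JSON serialization is an assumption carried over from the code above):

```python
import json
from datetime import datetime, timedelta

def build_log_query(dashboard_id: str, hours: int = 24, page_size: int = 100) -> dict:
    """Build the request params for Superset's /log/ list endpoint."""
    start_time = (datetime.now() - timedelta(hours=hours)).isoformat()
    query = {
        "filters": [
            {"col": "dashboard_id", "opr": "eq", "value": dashboard_id},
            {"col": "dttm", "opr": "gt", "value": start_time},
        ],
        "order_column": "dttm",
        "order_direction": "desc",
        "page": 0,
        "page_size": page_size,
    }
    # The list API expects the whole query serialized into a single "q" parameter
    return {"q": json.dumps(query)}
```

The plugin passes the resulting dict as `params` to `client.network.request(...)`; the HTTP call itself is assumed to go through `SupersetClient`.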
# [DEF:DocumentationPlugin:Class]
# @PURPOSE: Plugin for automated dataset documentation using LLMs.
# @RELATION: IMPLEMENTS -> backend.src.core.plugin_base.PluginBase
class DocumentationPlugin(PluginBase):
@property
def id(self) -> str:
return "llm_documentation"
@property
def name(self) -> str:
return "Dataset LLM Documentation"
@property
def description(self) -> str:
return "Automated dataset and column documentation using LLMs."
@property
def version(self) -> str:
return "1.0.0"
def get_schema(self) -> Dict[str, Any]:
return {
"type": "object",
"properties": {
"dataset_id": {"type": "string", "title": "Dataset ID"},
"environment_id": {"type": "string", "title": "Environment ID"},
"provider_id": {"type": "string", "title": "LLM Provider ID"}
},
"required": ["dataset_id", "environment_id", "provider_id"]
}
# [DEF:DocumentationPlugin.execute:Function]
# @PURPOSE: Executes the dataset documentation task with TaskContext support.
# @PARAM: params (Dict[str, Any]) - Documentation parameters.
# @PARAM: context (Optional[TaskContext]) - Task context for logging with source attribution.
# @PRE: params contains dataset_id, environment_id, and provider_id.
# @POST: Returns generated documentation and updates the dataset in Superset.
# @SIDE_EFFECT: Calls LLM API and updates dataset metadata in Superset.
async def execute(self, params: Dict[str, Any], context: Optional[TaskContext] = None):
with belief_scope("execute", f"plugin_id={self.id}"):
# Use TaskContext logger if available, otherwise fall back to app logger
log = context.logger if context else logger
# Create sub-loggers for different components
llm_log = log.with_source("llm") if context else log
superset_log = log.with_source("superset_api") if context else log
log.info(f"Executing {self.name} with params: {params}")
dataset_id = params.get("dataset_id")
env_id = params.get("environment_id")
provider_id = params.get("provider_id")
db = SessionLocal()
try:
# 1. Get Environment
from ...dependencies import get_config_manager
config_mgr = get_config_manager()
env = config_mgr.get_environment(env_id)
if not env:
log.error(f"Environment {env_id} not found")
raise ValueError(f"Environment {env_id} not found")
# 2. Get LLM Provider
llm_service = LLMProviderService(db)
db_provider = llm_service.get_provider(provider_id)
if not db_provider:
log.error(f"LLM Provider {provider_id} not found")
raise ValueError(f"LLM Provider {provider_id} not found")
llm_log.debug("Retrieved provider config:")
llm_log.debug(f" Provider ID: {db_provider.id}")
llm_log.debug(f" Provider Name: {db_provider.name}")
llm_log.debug(f" Provider Type: {db_provider.provider_type}")
llm_log.debug(f" Base URL: {db_provider.base_url}")
llm_log.debug(f" Default Model: {db_provider.default_model}")
api_key = llm_service.get_decrypted_api_key(provider_id)
llm_log.debug(f"API Key decrypted (first 8 chars): {api_key[:8] if api_key and len(api_key) > 8 else 'EMPTY_OR_NONE'}...")
# Check if API key was successfully decrypted
if not api_key:
raise ValueError(
f"Failed to decrypt API key for provider {provider_id}. "
f"The provider may have been encrypted with a different encryption key. "
f"Please update the provider with a new API key through the UI."
)
# 3. Fetch Metadata (US2 / T024)
from ...core.superset_client import SupersetClient
client = SupersetClient(env)
superset_log.debug(f"Fetching dataset {dataset_id}")
dataset = client.get_dataset(int(dataset_id))
# Extract columns and existing descriptions
columns_data = []
for col in dataset.get("columns", []):
columns_data.append({
"name": col.get("column_name"),
"type": col.get("type"),
"description": col.get("description")
})
superset_log.debug(f"Extracted {len(columns_data)} columns from dataset")
# 4. Construct Prompt & Analyze (US2 / T025)
llm_client = LLMClient(
provider_type=LLMProviderType(db_provider.provider_type),
api_key=api_key,
base_url=db_provider.base_url,
default_model=db_provider.default_model
)
prompt = f"""
Generate professional documentation for the following dataset and its columns.
Dataset: {dataset.get('table_name')}
Columns: {columns_data}
Provide the documentation in JSON format:
{{
"dataset_description": "General description of the dataset",
"column_descriptions": [
{{
"name": "column_name",
"description": "Generated description"
}}
]
}}
"""
# Using a generic chat completion for text-only US2
llm_log.info(f"Generating documentation for dataset {dataset_id}")
doc_result = await llm_client.get_json_completion([{"role": "user", "content": prompt}])
# 5. Update Metadata (US2 / T026)
update_payload = {
"description": doc_result["dataset_description"],
"columns": []
}
# Map generated descriptions back to column IDs
for col_doc in doc_result["column_descriptions"]:
for col in dataset.get("columns", []):
if col.get("column_name") == col_doc["name"]:
update_payload["columns"].append({
"id": col.get("id"),
"description": col_doc["description"]
})
superset_log.info(f"Updating dataset {dataset_id} with generated documentation")
client.update_dataset(int(dataset_id), update_payload)
log.info(f"Documentation completed for dataset {dataset_id}")
return doc_result
finally:
db.close()
# [/DEF:DocumentationPlugin.execute:Function]
# [/DEF:DocumentationPlugin:Class]
# [/DEF:backend/src/plugins/llm_analysis/plugin.py:Module]
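Step 5 of `DocumentationPlugin.execute` maps the LLM-generated column descriptions back onto Superset column IDs by name. A self-contained sketch of that mapping (the dataset and LLM payload shapes mirror the code above; the sample names are illustrative, and a dict index replaces the plugin's nested-loop scan):

```python
def build_update_payload(dataset: dict, doc_result: dict) -> dict:
    """Map generated descriptions back to column IDs for the dataset update call."""
    payload = {"description": doc_result["dataset_description"], "columns": []}
    # Index columns by name instead of scanning the column list per description
    by_name = {c.get("column_name"): c for c in dataset.get("columns", [])}
    for col_doc in doc_result["column_descriptions"]:
        col = by_name.get(col_doc["name"])
        if col is not None:
            payload["columns"].append({"id": col.get("id"), "description": col_doc["description"]})
    return payload

dataset = {"columns": [{"id": 7, "column_name": "revenue"}, {"id": 8, "column_name": "region"}]}
docs = {
    "dataset_description": "Sales facts",
    "column_descriptions": [{"name": "revenue", "description": "Gross revenue in USD"}],
}
payload = build_update_payload(dataset, docs)
# payload["columns"] -> [{"id": 7, "description": "Gross revenue in USD"}]
```

Columns the model did not document (or names it invented) are simply skipped, matching the plugin's behavior.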


@@ -0,0 +1,62 @@
# [DEF:backend/src/plugins/llm_analysis/scheduler.py:Module]
# @TIER: STANDARD
# @SEMANTICS: scheduler, task, automation
# @PURPOSE: Provides helper functions to schedule LLM-based validation tasks.
# @LAYER: Domain
# @RELATION: DEPENDS_ON -> backend.src.core.scheduler
from typing import Dict, Any
from ...dependencies import get_task_manager, get_scheduler_service
from ...core.logger import belief_scope, logger
# [DEF:schedule_dashboard_validation:Function]
# @PURPOSE: Schedules a recurring dashboard validation task.
# @PARAM: dashboard_id (str) - ID of the dashboard to validate.
# @PARAM: cron_expression (str) - Standard cron expression for scheduling.
# @PARAM: params (Dict[str, Any]) - Task parameters (environment_id, provider_id).
# @SIDE_EFFECT: Adds a job to the scheduler service.
def schedule_dashboard_validation(dashboard_id: str, cron_expression: str, params: Dict[str, Any]):
with belief_scope("schedule_dashboard_validation", f"dashboard_id={dashboard_id}"):
scheduler = get_scheduler_service()
task_manager = get_task_manager()
job_id = f"llm_val_{dashboard_id}"
async def job_func():
await task_manager.create_task(
plugin_id="llm_dashboard_validation",
params={
"dashboard_id": dashboard_id,
**params
}
)
scheduler.add_job(
job_func,
"cron",
id=job_id,
replace_existing=True,
**_parse_cron(cron_expression)
)
logger.info(f"Scheduled validation for dashboard {dashboard_id} with cron {cron_expression}")
# [/DEF:schedule_dashboard_validation:Function]
# [DEF:_parse_cron:Function]
# @PURPOSE: Basic cron parser placeholder.
# @PARAM: cron (str) - Cron expression.
# @RETURN: Dict[str, str] - Parsed cron parts.
def _parse_cron(cron: str) -> Dict[str, str]:
# Basic cron parser placeholder; expects the standard 5-field form
parts = cron.split()
if len(parts) != 5:
logger.warning(f"Invalid cron expression '{cron}': expected 5 fields, got {len(parts)}")
return {}
return {
"minute": parts[0],
"hour": parts[1],
"day": parts[2],
"month": parts[3],
"day_of_week": parts[4]
}
# [/DEF:_parse_cron:Function]
# [/DEF:backend/src/plugins/llm_analysis/scheduler.py:Module]
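`_parse_cron` maps a standard 5-field cron expression onto APScheduler `CronTrigger` keyword arguments. A sketch of the same mapping (mirrors the helper above; an expression without exactly 5 fields yields an empty dict, which leaves the trigger at library defaults):

```python
def parse_cron(cron: str) -> dict:
    """Split a 5-field cron string into CronTrigger kwargs (minute..day_of_week)."""
    parts = cron.split()
    if len(parts) != 5:
        return {}
    keys = ("minute", "hour", "day", "month", "day_of_week")
    return dict(zip(keys, parts))

# Run at 06:00 on weekdays
print(parse_cron("0 6 * * 1-5"))
# -> {'minute': '0', 'hour': '6', 'day': '*', 'month': '*', 'day_of_week': '1-5'}
```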


@@ -0,0 +1,632 @@
# [DEF:backend/src/plugins/llm_analysis/service.py:Module]
# @TIER: STANDARD
# @SEMANTICS: service, llm, screenshot, playwright, openai
# @PURPOSE: Services for LLM interaction and dashboard screenshots.
# @LAYER: Domain
# @RELATION: DEPENDS_ON -> playwright
# @RELATION: DEPENDS_ON -> openai
# @RELATION: DEPENDS_ON -> tenacity
# @INVARIANT: Screenshots must be 1920px width and capture full page height.
import asyncio
import base64
import json
import io
from typing import List, Dict, Any
from PIL import Image
from playwright.async_api import async_playwright
from openai import AsyncOpenAI, RateLimitError, AuthenticationError as OpenAIAuthenticationError
from tenacity import retry, stop_after_attempt, wait_exponential, retry_if_exception
from .models import LLMProviderType
from ...core.logger import belief_scope, logger
from ...core.config_models import Environment
# [DEF:ScreenshotService:Class]
# @PURPOSE: Handles capturing screenshots of Superset dashboards.
class ScreenshotService:
# [DEF:ScreenshotService.__init__:Function]
# @PURPOSE: Initializes the ScreenshotService with environment configuration.
# @PRE: env is a valid Environment object.
def __init__(self, env: Environment):
self.env = env
# [/DEF:ScreenshotService.__init__:Function]
# [DEF:ScreenshotService.capture_dashboard:Function]
# @PURPOSE: Captures a full-page screenshot of a dashboard using Playwright and CDP.
# @PRE: dashboard_id is a valid string, output_path is a writable path.
# @POST: Returns True if screenshot is saved successfully.
# @SIDE_EFFECT: Launches a browser, performs UI login, switches tabs, and writes a PNG file.
# @UX_STATE: [Navigating] -> Loading dashboard UI
# @UX_STATE: [TabSwitching] -> Iterating through dashboard tabs to trigger lazy loading
# @UX_STATE: [CalculatingHeight] -> Determining dashboard dimensions
# @UX_STATE: [Capturing] -> Executing CDP screenshot
async def capture_dashboard(self, dashboard_id: str, output_path: str) -> bool:
with belief_scope("capture_dashboard", f"dashboard_id={dashboard_id}"):
logger.info(f"Capturing screenshot for dashboard {dashboard_id}")
async with async_playwright() as p:
browser = await p.chromium.launch(
headless=True,
args=[
"--disable-blink-features=AutomationControlled",
"--disable-infobars",
"--no-sandbox"
]
)
# Set a realistic user agent to avoid 403 Forbidden from OpenResty/WAF
user_agent = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36"
# Construct base UI URL from environment (strip /api/v1 suffix)
base_ui_url = self.env.url.rstrip("/")
if base_ui_url.endswith("/api/v1"):
base_ui_url = base_ui_url[:-len("/api/v1")]
# Create browser context with realistic headers
context = await browser.new_context(
viewport={'width': 1280, 'height': 720},
user_agent=user_agent,
extra_http_headers={
"Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7",
"Accept-Language": "ru-RU,ru;q=0.9,en-US;q=0.8,en;q=0.7",
"Upgrade-Insecure-Requests": "1",
"Sec-Fetch-Dest": "document",
"Sec-Fetch-Mode": "navigate",
"Sec-Fetch-Site": "none",
"Sec-Fetch-User": "?1"
}
)
logger.info("Browser context created successfully")
page = await context.new_page()
# Bypass navigator.webdriver detection
await page.add_init_script("delete Object.getPrototypeOf(navigator).webdriver")
# 1. Navigate to login page and authenticate
login_url = f"{base_ui_url.rstrip('/')}/login/"
logger.info(f"[DEBUG] Navigating to login page: {login_url}")
response = await page.goto(login_url, wait_until="networkidle", timeout=60000)
if response:
logger.info(f"[DEBUG] Login page response status: {response.status}")
# Wait for login form to be ready
await page.wait_for_load_state("domcontentloaded")
# More exhaustive list of selectors for various Superset versions/themes
selectors = {
"username": ['input[name="username"]', 'input#username', 'input[placeholder*="Username"]', 'input[type="text"]'],
"password": ['input[name="password"]', 'input#password', 'input[placeholder*="Password"]', 'input[type="password"]'],
"submit": ['button[type="submit"]', 'button#submit', '.btn-primary', 'input[type="submit"]']
}
logger.info("[DEBUG] Attempting to find login form elements...")
try:
# Find and fill username
u_selector = None
for s in selectors["username"]:
count = await page.locator(s).count()
logger.info(f"[DEBUG] Selector '{s}': {count} elements found")
if count > 0:
u_selector = s
break
if not u_selector:
# Log all input fields on the page for debugging
all_inputs = await page.locator('input').all()
logger.info(f"[DEBUG] Found {len(all_inputs)} input fields on page")
for i, inp in enumerate(all_inputs[:5]): # Log first 5
inp_type = await inp.get_attribute('type')
inp_name = await inp.get_attribute('name')
inp_id = await inp.get_attribute('id')
logger.info(f"[DEBUG] Input {i}: type={inp_type}, name={inp_name}, id={inp_id}")
raise RuntimeError("Could not find username input field on login page")
logger.info(f"[DEBUG] Filling username field with selector: {u_selector}")
await page.fill(u_selector, self.env.username)
# Find and fill password
p_selector = None
for s in selectors["password"]:
if await page.locator(s).count() > 0:
p_selector = s
break
if not p_selector:
raise RuntimeError("Could not find password input field on login page")
logger.info(f"[DEBUG] Filling password field with selector: {p_selector}")
await page.fill(p_selector, self.env.password)
# Click submit
s_selector = selectors["submit"][0]
for s in selectors["submit"]:
if await page.locator(s).count() > 0:
s_selector = s
break
logger.info(f"[DEBUG] Clicking submit button with selector: {s_selector}")
await page.click(s_selector)
# Wait for navigation after login
await page.wait_for_load_state("networkidle", timeout=30000)
# Check if login was successful
if "/login" in page.url:
# Check for error messages on page
error_msg = await page.locator(".alert-danger, .error-message").text_content() if await page.locator(".alert-danger, .error-message").count() > 0 else "Unknown error"
logger.error(f"[DEBUG] Login failed. Still on login page. Error: {error_msg}")
debug_path = output_path.replace(".png", "_debug_failed_login.png")
await page.screenshot(path=debug_path)
raise RuntimeError(f"Login failed: {error_msg}. Debug screenshot saved to {debug_path}")
logger.info(f"[DEBUG] Login successful. Current URL: {page.url}")
# Check cookies after successful login
page_cookies = await context.cookies()
logger.info(f"[DEBUG] Cookies after login: {len(page_cookies)}")
for c in page_cookies:
logger.info(f"[DEBUG] Cookie: name={c['name']}, domain={c['domain']}, value={c.get('value', '')[:20]}...")
except Exception as e:
page_title = await page.title()
logger.error(f"UI Login failed. Page title: {page_title}, URL: {page.url}, Error: {str(e)}")
debug_path = output_path.replace(".png", "_debug_failed_login.png")
await page.screenshot(path=debug_path)
raise RuntimeError(f"Login failed: {str(e)}. Debug screenshot saved to {debug_path}")
# 2. Navigate to dashboard
# @UX_STATE: [Navigating] -> Loading dashboard UI
dashboard_url = f"{base_ui_url.rstrip('/')}/superset/dashboard/{dashboard_id}/?standalone=true"
logger.info(f"[DEBUG] Navigating to dashboard: {dashboard_url}")
# Use networkidle to ensure all initial assets are loaded
response = await page.goto(dashboard_url, wait_until="networkidle", timeout=60000)
if response:
logger.info(f"[DEBUG] Dashboard navigation response status: {response.status}, URL: {response.url}")
try:
# Wait for the dashboard grid to be present
await page.wait_for_selector('.dashboard-component, .dashboard-header, [data-test="dashboard-grid"]', timeout=30000)
logger.info("[DEBUG] Dashboard container loaded")
# Wait for charts to finish loading (Superset uses loading spinners/skeletons)
# We wait until loading indicators disappear or a timeout occurs
try:
# Wait for loading indicators to disappear
await page.wait_for_selector('.loading, .ant-skeleton, .spinner', state="hidden", timeout=60000)
logger.info("[DEBUG] Loading indicators hidden")
except Exception:
logger.warning("[DEBUG] Timeout waiting for loading indicators to hide")
# Wait for charts to actually render their content (e.g., ECharts, NVD3)
# We look for common chart containers that should have content
try:
await page.wait_for_selector('.chart-container canvas, .slice_container svg, .superset-chart-canvas, .grid-content .chart-container', timeout=60000)
logger.info("[DEBUG] Chart content detected")
except Exception:
logger.warning("[DEBUG] Timeout waiting for chart content")
# Additional check: wait for all chart containers to have non-empty content
logger.info("[DEBUG] Waiting for all charts to have rendered content...")
await page.wait_for_function("""() => {
const charts = document.querySelectorAll('.chart-container, .slice_container');
if (charts.length === 0) return true; // No charts to wait for
// Check if all charts have rendered content (canvas, svg, or non-empty div)
return Array.from(charts).every(chart => {
const hasCanvas = chart.querySelector('canvas') !== null;
const hasSvg = chart.querySelector('svg') !== null;
const hasContent = chart.innerText.trim().length > 0 || chart.children.length > 0;
return hasCanvas || hasSvg || hasContent;
});
}""", timeout=60000)
logger.info("[DEBUG] All charts have rendered content")
# Scroll to bottom and back to top to trigger lazy loading of all charts
logger.info("[DEBUG] Scrolling to trigger lazy loading...")
await page.evaluate("""async () => {
const delay = ms => new Promise(resolve => setTimeout(resolve, ms));
for (let i = 0; i < document.body.scrollHeight; i += 500) {
window.scrollTo(0, i);
await delay(100);
}
window.scrollTo(0, 0);
await delay(500);
}""")
except Exception as e:
logger.warning(f"[DEBUG] Dashboard content wait failed: {e}, proceeding anyway after delay")
# Final stabilization delay - increased for complex dashboards
logger.info("[DEBUG] Final stabilization delay...")
await asyncio.sleep(15)
# Logic to handle tabs and full-page capture
try:
# 1. Handle Tabs (Recursive switching)
# @UX_STATE: [TabSwitching] -> Iterating through dashboard tabs to trigger lazy loading
processed_tabs = set()
async def switch_tabs(depth=0):
if depth > 3:
return # Limit recursion depth
tab_selectors = [
'.ant-tabs-nav-list .ant-tabs-tab',
'.dashboard-component-tabs .ant-tabs-tab',
'[data-test="dashboard-component-tabs"] .ant-tabs-tab'
]
found_tabs = []
for selector in tab_selectors:
found_tabs = await page.locator(selector).all()
if found_tabs:
break
if found_tabs:
logger.info(f"[DEBUG][TabSwitching] Found {len(found_tabs)} tabs at depth {depth}")
for i, tab in enumerate(found_tabs):
try:
tab_text = (await tab.inner_text()).strip()
tab_id = f"{depth}_{i}_{tab_text}"
if tab_id in processed_tabs:
continue
if await tab.is_visible():
logger.info(f"[DEBUG][TabSwitching] Switching to tab: {tab_text}")
processed_tabs.add(tab_id)
is_active = "ant-tabs-tab-active" in (await tab.get_attribute("class") or "")
if not is_active:
await tab.click()
await asyncio.sleep(2) # Wait for content to render
await switch_tabs(depth + 1)
except Exception as tab_e:
logger.warning(f"[DEBUG][TabSwitching] Failed to process tab {i}: {tab_e}")
try:
first_tab = found_tabs[0]
if "ant-tabs-tab-active" not in (await first_tab.get_attribute("class") or ""):
await first_tab.click()
await asyncio.sleep(1)
except Exception:
pass
await switch_tabs()
# 2. Calculate full height for screenshot
# @UX_STATE: [CalculatingHeight] -> Determining dashboard dimensions
full_height = await page.evaluate("""() => {
const body = document.body;
const html = document.documentElement;
const dashboardContent = document.querySelector('.dashboard-content');
return Math.max(
body.scrollHeight, body.offsetHeight,
html.clientHeight, html.scrollHeight, html.offsetHeight,
dashboardContent ? dashboardContent.scrollHeight + 100 : 0
);
}""")
logger.info(f"[DEBUG] Calculated full height: {full_height}")
# DIAGNOSTIC: Count chart elements before resize
chart_count_before = await page.evaluate("""() => {
const isVisible = el => el.offsetParent !== null;
const charts = Array.from(document.querySelectorAll('.chart-container, .slice_container'));
return {
chartContainers: charts.length,
canvasElements: document.querySelectorAll('canvas').length,
svgElements: document.querySelectorAll('.chart-container svg, .slice_container svg').length,
// ':visible' is jQuery-only and throws inside querySelectorAll, so filter manually
visibleCharts: charts.filter(isVisible).length
};
}""")
logger.info(f"[DIAGNOSTIC] Chart elements BEFORE viewport resize: {chart_count_before}")
# DIAGNOSTIC: Capture pre-resize screenshot for comparison
pre_resize_path = output_path.replace(".png", "_preresize.png")
try:
await page.screenshot(path=pre_resize_path, full_page=False, timeout=10000)
import os
pre_resize_size = os.path.getsize(pre_resize_path) if os.path.exists(pre_resize_path) else 0
logger.info(f"[DIAGNOSTIC] Pre-resize screenshot saved: {pre_resize_path} ({pre_resize_size} bytes)")
except Exception as pre_e:
logger.warning(f"[DIAGNOSTIC] Failed to capture pre-resize screenshot: {pre_e}")
logger.info(f"[DIAGNOSTIC] Resizing viewport from current to 1920x{int(full_height)}")
await page.set_viewport_size({"width": 1920, "height": int(full_height)})
# DIAGNOSTIC: Increased wait time and log timing
logger.info("[DIAGNOSTIC] Waiting 10 seconds after viewport resize for re-render...")
await asyncio.sleep(10)
logger.info("[DIAGNOSTIC] Wait completed")
# DIAGNOSTIC: Count chart elements after resize and wait
chart_count_after = await page.evaluate("""() => {
const isVisible = el => el.offsetParent !== null;
const charts = Array.from(document.querySelectorAll('.chart-container, .slice_container'));
return {
chartContainers: charts.length,
canvasElements: document.querySelectorAll('canvas').length,
svgElements: document.querySelectorAll('.chart-container svg, .slice_container svg').length,
// ':visible' is jQuery-only and throws inside querySelectorAll, so filter manually
visibleCharts: charts.filter(isVisible).length
};
}""")
logger.info(f"[DIAGNOSTIC] Chart elements AFTER viewport resize + wait: {chart_count_after}")
# DIAGNOSTIC: Check if any charts have error states
chart_errors = await page.evaluate("""() => {
const errors = [];
document.querySelectorAll('.chart-container, .slice_container').forEach((chart, i) => {
const errorEl = chart.querySelector('.error, .alert-danger, .ant-alert-error');
if (errorEl) {
errors.push({index: i, text: errorEl.innerText.substring(0, 100)});
}
});
return errors;
}""")
if chart_errors:
logger.warning(f"[DIAGNOSTIC] Charts with error states detected: {chart_errors}")
else:
logger.info("[DIAGNOSTIC] No chart error states detected")
# 3. Take screenshot using CDP to bypass Playwright's font loading wait
# @UX_STATE: [Capturing] -> Executing CDP screenshot
logger.info("[DEBUG] Attempting full-page screenshot via CDP...")
cdp = await page.context.new_cdp_session(page)
screenshot_data = await cdp.send("Page.captureScreenshot", {
"format": "png",
"fromSurface": True,
"captureBeyondViewport": True
})
image_data = base64.b64decode(screenshot_data["data"])
with open(output_path, 'wb') as f:
f.write(image_data)
# DIAGNOSTIC: Verify screenshot file
import os
final_size = os.path.getsize(output_path) if os.path.exists(output_path) else 0
logger.info(f"[DIAGNOSTIC] Final screenshot saved: {output_path}")
logger.info(f"[DIAGNOSTIC] Final screenshot size: {final_size} bytes ({final_size / 1024:.2f} KB)")
# DIAGNOSTIC: Get image dimensions
try:
with Image.open(output_path) as final_img:
logger.info(f"[DIAGNOSTIC] Final screenshot dimensions: {final_img.width}x{final_img.height}")
except Exception as img_err:
logger.warning(f"[DIAGNOSTIC] Could not read final image dimensions: {img_err}")
logger.info(f"Full-page screenshot saved to {output_path} (via CDP)")
except Exception as e:
logger.error(f"[DEBUG] Full-page/Tab capture failed: {e}")
try:
await page.screenshot(path=output_path, full_page=True, timeout=10000)
except Exception as e2:
logger.error(f"[DEBUG] Fallback screenshot also failed: {e2}")
await page.screenshot(path=output_path, timeout=5000)
await browser.close()
return True
# [/DEF:ScreenshotService.capture_dashboard:Function]
# [/DEF:ScreenshotService:Class]
# [DEF:LLMClient:Class]
# @PURPOSE: Wrapper for LLM provider APIs.
class LLMClient:
# [DEF:LLMClient.__init__:Function]
# @PURPOSE: Initializes the LLMClient with provider settings.
# @PRE: api_key, base_url, and default_model are non-empty strings.
def __init__(self, provider_type: LLMProviderType, api_key: str, base_url: str, default_model: str):
self.provider_type = provider_type
self.api_key = api_key
self.base_url = base_url
self.default_model = default_model
# DEBUG: Log initialization parameters (without exposing full API key)
logger.info("[LLMClient.__init__] Initializing LLM client:")
logger.info(f"[LLMClient.__init__] Provider Type: {provider_type}")
logger.info(f"[LLMClient.__init__] Base URL: {base_url}")
logger.info(f"[LLMClient.__init__] Default Model: {default_model}")
logger.info(f"[LLMClient.__init__] API Key (first 8 chars): {api_key[:8] if api_key and len(api_key) > 8 else 'EMPTY_OR_NONE'}...")
logger.info(f"[LLMClient.__init__] API Key Length: {len(api_key) if api_key else 0}")
self.client = AsyncOpenAI(api_key=api_key, base_url=base_url)
# [/DEF:LLMClient.__init__:Function]
# [DEF:LLMClient.get_json_completion:Function]
# @PURPOSE: Helper to handle LLM calls with JSON mode and fallback parsing.
# @PRE: messages is a list of valid message dictionaries.
# @POST: Returns a parsed JSON dictionary.
# @SIDE_EFFECT: Calls external LLM API.
def _should_retry(exception: Exception) -> bool:
"""Custom retry predicate that excludes authentication errors."""
# Never retry on authentication errors: bad credentials will not fix themselves
if isinstance(exception, OpenAIAuthenticationError):
return False
# Everything else (rate limits, transient network/server errors) is retryable
return True
@retry(
stop=stop_after_attempt(5),
wait=wait_exponential(multiplier=2, min=5, max=60),
retry=retry_if_exception(_should_retry),
reraise=True
)
async def get_json_completion(self, messages: List[Dict[str, Any]]) -> Dict[str, Any]:
with belief_scope("get_json_completion"):
response = None
try:
try:
logger.info(f"[get_json_completion] Attempting LLM call with JSON mode for model: {self.default_model}")
logger.info(f"[get_json_completion] Base URL being used: {self.base_url}")
logger.info(f"[get_json_completion] Number of messages: {len(messages)}")
logger.info(f"[get_json_completion] API Key present: {bool(self.api_key and len(self.api_key) > 0)}")
response = await self.client.chat.completions.create(
model=self.default_model,
messages=messages,
response_format={"type": "json_object"}
)
except Exception as e:
if "JSON mode is not enabled" in str(e) or "400" in str(e):
logger.warning(f"[get_json_completion] JSON mode failed or not supported: {str(e)}. Falling back to plain text response.")
response = await self.client.chat.completions.create(
model=self.default_model,
messages=messages
)
else:
raise e
logger.debug(f"[get_json_completion] LLM Response: {response}")
except OpenAIAuthenticationError as e:
logger.error(f"[get_json_completion] Authentication error: {str(e)}")
# Do not retry on auth errors - re-raise to stop retry
raise
except RateLimitError as e:
logger.warning(f"[get_json_completion] Rate limit hit: {str(e)}")
# Extract retry_delay from error metadata if available
retry_delay = 5.0 # Default fallback
try:
# Based on logs, the raw response is in e.body or e.response.json()
# The logs show 'metadata': {'raw': '...'} which suggests a proxy or specific client wrapper
# Let's try to find the 'retryDelay' in the error message or response
import re
# Try to find "retryDelay": "XXs" in the string representation of the error
error_str = str(e)
match = re.search(r'"retryDelay":\s*"(\d+)s"', error_str)
if match:
retry_delay = float(match.group(1))
else:
# Try to parse from response if it's a standard OpenAI-like error with body
if hasattr(e, 'body') and isinstance(e.body, dict):
# Some providers put it in details
details = e.body.get('error', {}).get('details', [])
for detail in details:
if detail.get('@type') == 'type.googleapis.com/google.rpc.RetryInfo':
delay_str = detail.get('retryDelay', '5s')
retry_delay = float(delay_str.rstrip('s'))
break
except Exception as parse_e:
logger.debug(f"[get_json_completion] Failed to parse retry delay: {parse_e}")
# Add a small safety margin (0.5s) as requested
wait_time = retry_delay + 0.5
logger.info(f"[get_json_completion] Waiting for {wait_time}s before retry...")
await asyncio.sleep(wait_time)
raise
except Exception as e:
logger.error(f"[get_json_completion] LLM call failed: {str(e)}")
raise
if not response or not hasattr(response, 'choices') or not response.choices:
raise RuntimeError(f"Invalid LLM response: {response}")
content = response.choices[0].message.content
logger.debug(f"[get_json_completion] Raw content to parse: {content}")
try:
return json.loads(content)
except json.JSONDecodeError:
logger.warning("[get_json_completion] Failed to parse JSON directly, attempting to extract from code blocks")
if "```json" in content:
json_str = content.split("```json")[1].split("```")[0].strip()
return json.loads(json_str)
elif "```" in content:
json_str = content.split("```")[1].split("```")[0].strip()
return json.loads(json_str)
else:
raise
# [/DEF:LLMClient.get_json_completion:Function]
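The retryDelay extraction inside the rate-limit handler above can be exercised in isolation. A minimal sketch — the JSON-ish error payload below is a hypothetical sample modeled on the Google-style `RetryInfo` format the comments describe, not an actual captured response:

```python
import re

def parse_retry_delay(error_str: str, default: float = 5.0) -> float:
    """Extract a Google-style '"retryDelay": "17s"' hint from an error string.

    Falls back to `default` seconds when no hint is present, mirroring the
    handler's behavior in get_json_completion.
    """
    match = re.search(r'"retryDelay":\s*"(\d+)s"', error_str)
    return float(match.group(1)) if match else default

# Hypothetical rate-limit payload shaped like google.rpc.RetryInfo details
sample = '{"error": {"code": 429, "details": [{"retryDelay": "17s"}]}}'
print(parse_retry_delay(sample))          # → 17.0
print(parse_retry_delay("no hint here"))  # → 5.0
```

The 0.5 s safety margin from the handler is added by the caller, not here, so the parser stays a pure function that is easy to unit-test.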
# [DEF:LLMClient.analyze_dashboard:Function]
# @PURPOSE: Sends dashboard data (screenshot + logs) to LLM for health analysis.
# @PRE: screenshot_path exists, logs is a list of strings.
# @POST: Returns a structured analysis dictionary (status, summary, issues).
# @SIDE_EFFECT: Reads screenshot file and calls external LLM API.
async def analyze_dashboard(self, screenshot_path: str, logs: List[str]) -> Dict[str, Any]:
with belief_scope("analyze_dashboard"):
# Optimize image to reduce token count (US1 / T023)
# Gemini/Gemma models have limits on input tokens, and large images contribute significantly.
try:
with Image.open(screenshot_path) as img:
# Convert to RGB if necessary
if img.mode in ("RGBA", "P"):
img = img.convert("RGB")
# Resize if too large (max 1024px width while maintaining aspect ratio)
# We reduce width further to 1024px to stay within token limits for long dashboards
max_width = 1024
if img.width > max_width or img.height > 2048:
# Calculate scaling factor to fit within 1024x2048
orig_width, orig_height = img.width, img.height
scale = min(max_width / orig_width, 2048 / orig_height)
if scale < 1.0:
new_width = int(orig_width * scale)
new_height = int(orig_height * scale)
img = img.resize((new_width, new_height), Image.Resampling.LANCZOS)
logger.info(f"[analyze_dashboard] Resized image from {orig_width}x{orig_height} to {new_width}x{new_height}")
# Compress and convert to base64
buffer = io.BytesIO()
# Lower quality to 60% to further reduce payload size
img.save(buffer, format="JPEG", quality=60, optimize=True)
base_64_image = base64.b64encode(buffer.getvalue()).decode('utf-8')
logger.info(f"[analyze_dashboard] Optimized image size: {len(buffer.getvalue()) / 1024:.2f} KB")
except Exception as img_e:
logger.warning(f"[analyze_dashboard] Image optimization failed: {img_e}. Using raw image.")
with open(screenshot_path, "rb") as image_file:
base_64_image = base64.b64encode(image_file.read()).decode('utf-8')
log_text = "\n".join(logs)
prompt = f"""
Analyze the attached dashboard screenshot and the following execution logs for health and visual issues.
Logs:
{log_text}
Provide the analysis in JSON format with the following structure:
{{
"status": "PASS" | "WARN" | "FAIL",
"summary": "Short summary of findings",
"issues": [
{{
"severity": "WARN" | "FAIL",
"message": "Description of the issue",
"location": "Optional location info (e.g. chart name)"
}}
]
}}
"""
messages = [
{
"role": "user",
"content": [
{"type": "text", "text": prompt},
{
"type": "image_url",
"image_url": {
"url": f"data:image/jpeg;base64,{base_64_image}"
}
}
]
}
]
try:
return await self.get_json_completion(messages)
except Exception as e:
logger.error(f"[analyze_dashboard] Failed to get analysis: {str(e)}")
return {
"status": "FAIL",
"summary": f"Failed to get response from LLM: {str(e)}",
"issues": [{"severity": "FAIL", "message": "LLM provider returned empty or invalid response"}]
}
# [/DEF:LLMClient.analyze_dashboard:Function]
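The scaling rule used in the image-optimization step — fit within a 1024×2048 box while preserving aspect ratio, never upscaling — can be checked as arithmetic on its own, without Pillow:

```python
def fit_within(width: int, height: int, max_w: int = 1024, max_h: int = 2048) -> tuple:
    """Return (new_width, new_height) scaled to fit the box; never upscales."""
    scale = min(max_w / width, max_h / height)
    if scale >= 1.0:
        return width, height  # already within limits, leave untouched
    return int(width * scale), int(height * scale)

print(fit_within(2048, 1024))  # → (1024, 512) — width-bound
print(fit_within(800, 600))    # → (800, 600) — no change
print(fit_within(1000, 4096))  # → (500, 2048) — height-bound, as with long dashboards
```

Taking `min` of the two per-axis ratios guarantees both constraints hold simultaneously; a tall dashboard screenshot is bounded by the 2048 px height rather than the 1024 px width.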
# [/DEF:LLMClient:Class]
# [/DEF:backend/src/plugins/llm_analysis/service.py:Module]

View File

@@ -3,6 +3,7 @@
 # @PURPOSE: Implements a plugin for mapping dataset columns using external database connections or Excel files.
 # @LAYER: Plugins
 # @RELATION: Inherits from PluginBase. Uses DatasetMapper from superset_tool.
+# @RELATION: USES -> TaskContext
 # @CONSTRAINT: Must use belief_scope for logging.
 # [SECTION: IMPORTS]
@@ -13,6 +14,7 @@ from ..core.logger import logger, belief_scope
 from ..core.database import SessionLocal
 from ..models.connection import ConnectionConfig
 from ..core.utils.dataset_mapper import DatasetMapper
+from ..core.task_manager.context import TaskContext
 # [/SECTION]
 # [DEF:MapperPlugin:Class]
@@ -128,19 +130,27 @@ class MapperPlugin(PluginBase):
 # [/DEF:get_schema:Function]
 # [DEF:execute:Function]
-# @PURPOSE: Executes the dataset mapping logic.
+# @PURPOSE: Executes the dataset mapping logic with TaskContext support.
 # @PARAM: params (Dict[str, Any]) - Mapping parameters.
+# @PARAM: context (Optional[TaskContext]) - Task context for logging with source attribution.
 # @PRE: Params contain valid 'env', 'dataset_id', and 'source'. params must be a dictionary.
 # @POST: Updates the dataset in Superset.
 # @RETURN: Dict[str, Any] - Execution status.
-async def execute(self, params: Dict[str, Any]) -> Dict[str, Any]:
+async def execute(self, params: Dict[str, Any], context: Optional[TaskContext] = None) -> Dict[str, Any]:
 with belief_scope("execute"):
 env_name = params.get("env")
 dataset_id = params.get("dataset_id")
 source = params.get("source")
+# Use TaskContext logger if available, otherwise fall back to app logger
+log = context.logger if context else logger
+# Create sub-loggers for different components
+superset_log = log.with_source("superset_api") if context else log
+db_log = log.with_source("postgres") if context else log
 if not env_name or dataset_id is None or not source:
-logger.error("[MapperPlugin.execute][State] Missing required parameters.")
+log.error("Missing required parameters: env, dataset_id, source")
 raise ValueError("Missing required parameters: env, dataset_id, source")
 # Get config and initialize client
@@ -148,7 +158,7 @@ class MapperPlugin(PluginBase):
 config_manager = get_config_manager()
 env_config = config_manager.get_environment(env_name)
 if not env_config:
-logger.error(f"[MapperPlugin.execute][State] Environment '{env_name}' not found.")
+log.error(f"Environment '{env_name}' not found in configuration.")
 raise ValueError(f"Environment '{env_name}' not found in configuration.")
 client = SupersetClient(env_config)
@@ -158,7 +168,7 @@ class MapperPlugin(PluginBase):
 if source == "postgres":
 connection_id = params.get("connection_id")
 if not connection_id:
-logger.error("[MapperPlugin.execute][State] connection_id is required for postgres source.")
+log.error("connection_id is required for postgres source.")
 raise ValueError("connection_id is required for postgres source.")
 # Load connection from DB
@@ -166,7 +176,7 @@ class MapperPlugin(PluginBase):
 try:
 conn_config = db.query(ConnectionConfig).filter(ConnectionConfig.id == connection_id).first()
 if not conn_config:
-logger.error(f"[MapperPlugin.execute][State] Connection {connection_id} not found.")
+db_log.error(f"Connection {connection_id} not found.")
 raise ValueError(f"Connection {connection_id} not found.")
 postgres_config = {
@@ -176,10 +186,11 @@ class MapperPlugin(PluginBase):
 'host': conn_config.host,
 'port': str(conn_config.port) if conn_config.port else '5432'
 }
+db_log.debug(f"Loaded connection config for {conn_config.host}:{conn_config.port}/{conn_config.database}")
 finally:
 db.close()
-logger.info(f"[MapperPlugin.execute][Action] Starting mapping for dataset {dataset_id} in {env_name}")
+log.info(f"Starting mapping for dataset {dataset_id} in {env_name}")
 mapper = DatasetMapper()
@@ -193,10 +204,10 @@ class MapperPlugin(PluginBase):
 table_name=params.get("table_name"),
 table_schema=params.get("table_schema") or "public"
 )
-logger.info(f"[MapperPlugin.execute][Success] Mapping completed for dataset {dataset_id}")
+superset_log.info(f"Mapping completed for dataset {dataset_id}")
 return {"status": "success", "dataset_id": dataset_id}
 except Exception as e:
-logger.error(f"[MapperPlugin.execute][Failure] Mapping failed: {e}")
+log.error(f"Mapping failed: {e}")
 raise
 # [/DEF:execute:Function]

View File

@@ -5,20 +5,20 @@
 # @RELATION: IMPLEMENTS -> PluginBase
 # @RELATION: DEPENDS_ON -> superset_tool.client
 # @RELATION: DEPENDS_ON -> superset_tool.utils
+# @RELATION: USES -> TaskContext
-from typing import Dict, Any, List
+from typing import Dict, Any, Optional
-from pathlib import Path
-import zipfile
 import re
 from ..core.plugin_base import PluginBase
-from ..core.logger import belief_scope
+from ..core.logger import belief_scope, logger as app_logger
 from ..core.superset_client import SupersetClient
-from ..core.utils.fileio import create_temp_file, update_yamls, create_dashboard_export
+from ..core.utils.fileio import create_temp_file
 from ..dependencies import get_config_manager
 from ..core.migration_engine import MigrationEngine
 from ..core.database import SessionLocal
 from ..models.mapping import DatabaseMapping, Environment
+from ..core.task_manager.context import TaskContext
 # [DEF:MigrationPlugin:Class]
 # @PURPOSE: Implementation of the migration plugin logic.
@@ -132,11 +132,12 @@
 # [/DEF:get_schema:Function]
 # [DEF:execute:Function]
-# @PURPOSE: Executes the dashboard migration logic.
+# @PURPOSE: Executes the dashboard migration logic with TaskContext support.
 # @PARAM: params (Dict[str, Any]) - Migration parameters.
+# @PARAM: context (Optional[TaskContext]) - Task context for logging with source attribution.
 # @PRE: Source and target environments must be configured.
 # @POST: Selected dashboards are migrated.
-async def execute(self, params: Dict[str, Any]):
+async def execute(self, params: Dict[str, Any], context: Optional[TaskContext] = None):
 with belief_scope("MigrationPlugin.execute"):
 source_env_id = params.get("source_env_id")
 target_env_id = params.get("target_env_id")
@@ -148,8 +149,8 @@
 dashboard_regex = params.get("dashboard_regex")
 replace_db_config = params.get("replace_db_config", False)
-from_db_id = params.get("from_db_id")
+params.get("from_db_id")
-to_db_id = params.get("to_db_id")
+params.get("to_db_id")
 # [DEF:MigrationPlugin.execute:Action]
 # @PURPOSE: Execute the migration logic with proper task logging.
@@ -157,74 +158,15 @@
 from ..dependencies import get_task_manager
 tm = get_task_manager()
-class TaskLoggerProxy:
+# Use TaskContext logger if available, otherwise fall back to app_logger
-# [DEF:__init__:Function]
+log = context.logger if context else app_logger
-# @PURPOSE: Initializes the proxy logger.
-# @PRE: None.
+# Create sub-loggers for different components
-# @POST: Instance is initialized.
+superset_log = log.with_source("superset_api") if context else log
-def __init__(self):
+migration_log = log.with_source("migration") if context else log
-with belief_scope("__init__"):
-# Initialize parent with dummy values since we override methods
+log.info("Starting migration task.")
-pass
+log.debug(f"Params: {params}")
-# [/DEF:__init__:Function]
-# [DEF:debug:Function]
-# @PURPOSE: Logs a debug message to the task manager.
-# @PRE: msg is a string.
-# @POST: Log is added to task manager if task_id exists.
-def debug(self, msg, *args, extra=None, **kwargs):
-with belief_scope("debug"):
-if task_id: tm._add_log(task_id, "DEBUG", msg, extra or {})
-# [/DEF:debug:Function]
-# [DEF:info:Function]
-# @PURPOSE: Logs an info message to the task manager.
-# @PRE: msg is a string.
-# @POST: Log is added to task manager if task_id exists.
-def info(self, msg, *args, extra=None, **kwargs):
-with belief_scope("info"):
-if task_id: tm._add_log(task_id, "INFO", msg, extra or {})
-# [/DEF:info:Function]
-# [DEF:warning:Function]
-# @PURPOSE: Logs a warning message to the task manager.
-# @PRE: msg is a string.
-# @POST: Log is added to task manager if task_id exists.
-def warning(self, msg, *args, extra=None, **kwargs):
-with belief_scope("warning"):
-if task_id: tm._add_log(task_id, "WARNING", msg, extra or {})
-# [/DEF:warning:Function]
-# [DEF:error:Function]
-# @PURPOSE: Logs an error message to the task manager.
-# @PRE: msg is a string.
-# @POST: Log is added to task manager if task_id exists.
-def error(self, msg, *args, extra=None, **kwargs):
-with belief_scope("error"):
-if task_id: tm._add_log(task_id, "ERROR", msg, extra or {})
-# [/DEF:error:Function]
-# [DEF:critical:Function]
-# @PURPOSE: Logs a critical message to the task manager.
-# @PRE: msg is a string.
-# @POST: Log is added to task manager if task_id exists.
-def critical(self, msg, *args, extra=None, **kwargs):
-with belief_scope("critical"):
-if task_id: tm._add_log(task_id, "ERROR", msg, extra or {})
-# [/DEF:critical:Function]
-# [DEF:exception:Function]
-# @PURPOSE: Logs an exception message to the task manager.
-# @PRE: msg is a string.
-# @POST: Log is added to task manager if task_id exists.
-def exception(self, msg, *args, **kwargs):
-with belief_scope("exception"):
-if task_id: tm._add_log(task_id, "ERROR", msg, {"exception": True})
-# [/DEF:exception:Function]
-logger = TaskLoggerProxy()
-logger.info(f"[MigrationPlugin][Entry] Starting migration task.")
-logger.info(f"[MigrationPlugin][Action] Params: {params}")
 try:
 with belief_scope("execute"):
@@ -251,7 +193,7 @@
 from_env_name = src_env.name
 to_env_name = tgt_env.name
-logger.info(f"[MigrationPlugin][State] Resolved environments: {from_env_name} -> {to_env_name}")
+log.info(f"Resolved environments: {from_env_name} -> {to_env_name}")
 from_c = SupersetClient(src_env)
 to_c = SupersetClient(tgt_env)
@@ -270,11 +212,11 @@
 d for d in all_dashboards if re.search(regex_str, d["dashboard_title"], re.IGNORECASE)
 ]
 else:
-logger.warning("[MigrationPlugin][State] No selection criteria provided (selected_ids or dashboard_regex).")
+log.warning("No selection criteria provided (selected_ids or dashboard_regex).")
 return
 if not dashboards_to_migrate:
-logger.warning("[MigrationPlugin][State] No dashboards found matching criteria.")
+log.warning("No dashboards found matching criteria.")
 return
 # Fetch mappings from database
@@ -292,7 +234,7 @@
 DatabaseMapping.target_env_id == tgt_env.id
 ).all()
 db_mapping = {m.source_db_uuid: m.target_db_uuid for m in mappings}
-logger.info(f"[MigrationPlugin][State] Loaded {len(db_mapping)} database mappings.")
+log.info(f"Loaded {len(db_mapping)} database mappings.")
 finally:
 db.close()
@@ -311,7 +253,7 @@
 if not success and replace_db_config:
 # Signal missing mapping and wait (only if we care about mappings)
 if task_id:
-logger.info(f"[MigrationPlugin][Action] Pausing for missing mapping in task {task_id}")
+log.info(f"Pausing for missing mapping in task {task_id}")
 # In a real scenario, we'd pass the missing DB info to the frontend
 # For this task, we'll just simulate the wait
 await tm.wait_for_resolution(task_id)
@@ -333,9 +275,9 @@
 if success:
 to_c.import_dashboard(file_name=tmp_new_zip, dash_id=dash_id, dash_slug=dash_slug)
 else:
-logger.error(f"[MigrationPlugin][Failure] Failed to transform ZIP for dashboard {title}")
+migration_log.error(f"Failed to transform ZIP for dashboard {title}")
-logger.info(f"[MigrationPlugin][Success] Dashboard {title} imported.")
+superset_log.info(f"Dashboard {title} imported.")
 except Exception as exc:
 # Check for password error
 error_msg = str(exc)
@@ -357,7 +299,7 @@
 if match_alt:
 db_name = match_alt.group(1)
-logger.warning(f"[MigrationPlugin][Action] Detected missing password for database: {db_name}")
+app_logger.warning(f"[MigrationPlugin][Action] Detected missing password for database: {db_name}")
 if task_id:
 input_request = {
@@ -376,19 +318,19 @@
 # Retry import with password
 if passwords:
-logger.info(f"[MigrationPlugin][Action] Retrying import for {title} with provided passwords.")
+app_logger.info(f"[MigrationPlugin][Action] Retrying import for {title} with provided passwords.")
 to_c.import_dashboard(file_name=tmp_new_zip, dash_id=dash_id, dash_slug=dash_slug, passwords=passwords)
-logger.info(f"[MigrationPlugin][Success] Dashboard {title} imported after password injection.")
+app_logger.info(f"[MigrationPlugin][Success] Dashboard {title} imported after password injection.")
 # Clear passwords from params after use for security
 if "passwords" in task.params:
 del task.params["passwords"]
 continue
-logger.error(f"[MigrationPlugin][Failure] Failed to migrate dashboard {title}: {exc}", exc_info=True)
+app_logger.error(f"[MigrationPlugin][Failure] Failed to migrate dashboard {title}: {exc}", exc_info=True)
-logger.info("[MigrationPlugin][Exit] Migration finished.")
+app_logger.info("[MigrationPlugin][Exit] Migration finished.")
 except Exception as e:
-logger.critical(f"[MigrationPlugin][Failure] Fatal error during migration: {e}", exc_info=True)
+app_logger.critical(f"[MigrationPlugin][Failure] Fatal error during migration: {e}", exc_info=True)
 raise e
 # [/DEF:MigrationPlugin.execute:Action]
 # [/DEF:execute:Function]

View File

@@ -3,14 +3,16 @@
 # @PURPOSE: Implements a plugin for searching text patterns across all datasets in a specific Superset environment.
 # @LAYER: Plugins
 # @RELATION: Inherits from PluginBase. Uses SupersetClient from core.
+# @RELATION: USES -> TaskContext
 # @CONSTRAINT: Must use belief_scope for logging.
 # [SECTION: IMPORTS]
 import re
-from typing import Dict, Any, List, Optional
+from typing import Dict, Any, Optional
 from ..core.plugin_base import PluginBase
 from ..core.superset_client import SupersetClient
 from ..core.logger import logger, belief_scope
+from ..core.task_manager.context import TaskContext
 # [/SECTION]
 # [DEF:SearchPlugin:Class]
@@ -99,18 +101,26 @@
 # [/DEF:get_schema:Function]
 # [DEF:execute:Function]
-# @PURPOSE: Executes the dataset search logic.
+# @PURPOSE: Executes the dataset search logic with TaskContext support.
 # @PARAM: params (Dict[str, Any]) - Search parameters.
+# @PARAM: context (Optional[TaskContext]) - Task context for logging with source attribution.
 # @PRE: Params contain valid 'env' and 'query'.
 # @POST: Returns a dictionary with count and results list.
 # @RETURN: Dict[str, Any] - Search results.
-async def execute(self, params: Dict[str, Any]) -> Dict[str, Any]:
+async def execute(self, params: Dict[str, Any], context: Optional[TaskContext] = None) -> Dict[str, Any]:
 with belief_scope("SearchPlugin.execute", f"params={params}"):
 env_name = params.get("env")
 search_query = params.get("query")
+# Use TaskContext logger if available, otherwise fall back to app logger
+log = context.logger if context else logger
+# Create sub-loggers for different components
+log.with_source("superset_api") if context else log
+search_log = log.with_source("search") if context else log
 if not env_name or not search_query:
-logger.error("[SearchPlugin.execute][State] Missing required parameters.")
+log.error("Missing required parameters: env, query")
 raise ValueError("Missing required parameters: env, query")
 # Get config and initialize client
@@ -118,20 +128,20 @@
 config_manager = get_config_manager()
 env_config = config_manager.get_environment(env_name)
 if not env_config:
-logger.error(f"[SearchPlugin.execute][State] Environment '{env_name}' not found.")
+log.error(f"Environment '{env_name}' not found in configuration.")
 raise ValueError(f"Environment '{env_name}' not found in configuration.")
 client = SupersetClient(env_config)
 client.authenticate()
-logger.info(f"[SearchPlugin.execute][Action] Searching for pattern: '{search_query}' in environment: {env_name}")
+log.info(f"Searching for pattern: '{search_query}' in environment: {env_name}")
 try:
 # Ported logic from search_script.py
 _, datasets = client.get_datasets(query={"columns": ["id", "table_name", "sql", "database", "columns"]})
 if not datasets:
-logger.warning("[SearchPlugin.execute][State] No datasets found.")
+search_log.warning("No datasets found.")
 return {"count": 0, "results": []}
 pattern = re.compile(search_query, re.IGNORECASE)
@@ -155,17 +165,17 @@
 "full_value": value_str
 })
-logger.info(f"[SearchPlugin.execute][Success] Found matches in {len(results)} locations.")
+search_log.info(f"Found matches in {len(results)} locations.")
 return {
 "count": len(results),
 "results": results
 }
 except re.error as e:
-logger.error(f"[SearchPlugin.execute][Failure] Invalid regex pattern: {e}")
+search_log.error(f"Invalid regex pattern: {e}")
 raise ValueError(f"Invalid regex pattern: {e}")
 except Exception as e:
-logger.error(f"[SearchPlugin.execute][Failure] Error during search: {e}")
+log.error(f"Error during search: {e}")
 raise
 # [/DEF:execute:Function]
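The search logic ported from search_script.py boils down to compiling one case-insensitive pattern and scanning each dataset's text fields. A minimal sketch over stand-in data — the exact set of fields scanned by the real plugin is assumed here:

```python
import re

def search_datasets(datasets: list, query: str) -> dict:
    """Scan dataset 'table_name' and 'sql' fields for a regex, case-insensitively.

    Raises re.error for an invalid pattern, mirroring the plugin's ValueError path.
    """
    pattern = re.compile(query, re.IGNORECASE)
    results = []
    for ds in datasets:
        for field in ("table_name", "sql"):
            value = ds.get(field) or ""  # tolerate None, as virtual datasets may lack sql
            if pattern.search(value):
                results.append({"id": ds["id"], "field": field, "full_value": value})
    return {"count": len(results), "results": results}

datasets = [
    {"id": 1, "table_name": "sales_fact", "sql": "SELECT * FROM Sales_Fact"},
    {"id": 2, "table_name": "users", "sql": None},
]
print(search_datasets(datasets, r"sales")["count"])  # → 2 (table_name and sql of dataset 1)
```

Compiling the pattern once before the loop is what makes the `re.error` branch in the plugin reachable before any dataset is touched, so an invalid regex fails fast.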

View File

@@ -5,6 +5,7 @@
 # @LAYER: App
 # @RELATION: IMPLEMENTS -> PluginBase
 # @RELATION: DEPENDS_ON -> backend.src.models.storage
+# @RELATION: USES -> TaskContext
 #
 # @INVARIANT: All file operations must be restricted to the configured storage root.
@@ -18,8 +19,9 @@ from fastapi import UploadFile
 from ...core.plugin_base import PluginBase
 from ...core.logger import belief_scope, logger
-from ...models.storage import StoredFile, FileCategory, StorageConfig
+from ...models.storage import StoredFile, FileCategory
 from ...dependencies import get_config_manager
+from ...core.task_manager.context import TaskContext
 # [/SECTION]
 # [DEF:StoragePlugin:Class]
@@ -112,12 +114,21 @@
 # [/DEF:get_schema:Function]
 # [DEF:execute:Function]
-# @PURPOSE: Executes storage-related tasks (placeholder for PluginBase compliance).
+# @PURPOSE: Executes storage-related tasks with TaskContext support.
+# @PARAM: params (Dict[str, Any]) - Storage parameters.
+# @PARAM: context (Optional[TaskContext]) - Task context for logging with source attribution.
 # @PRE: params must match the plugin schema.
 # @POST: Task is executed and logged.
-async def execute(self, params: Dict[str, Any]):
+async def execute(self, params: Dict[str, Any], context: Optional[TaskContext] = None):
 with belief_scope("StoragePlugin:execute"):
-logger.info(f"[StoragePlugin][Action] Executing with params: {params}")
+# Use TaskContext logger if available, otherwise fall back to app logger
+log = context.logger if context else logger
+# Create sub-loggers for different components
+storage_log = log.with_source("storage") if context else log
+log.with_source("filesystem") if context else log
+storage_log.info(f"Executing with params: {params}")
 # [/DEF:execute:Function]
 # [DEF:get_storage_root:Function]
# [DEF:get_storage_root:Function] # [DEF:get_storage_root:Function]

View File

@@ -10,7 +10,7 @@
 # [SECTION: IMPORTS]
 from typing import List, Optional
-from pydantic import BaseModel, EmailStr, Field
+from pydantic import BaseModel, EmailStr
 from datetime import datetime
 # [/SECTION]

View File

@@ -1,5 +1,6 @@
 # [DEF:backend.src.scripts.create_admin:Module]
 #
+# @TIER: STANDARD
 # @SEMANTICS: admin, setup, user, auth, cli
 # @PURPOSE: CLI tool for creating the initial admin user.
 # @LAYER: Scripts
@@ -19,7 +20,7 @@ sys.path.append(str(Path(__file__).parent.parent.parent))
 from src.core.database import AuthSessionLocal, init_db
 from src.core.auth.security import get_password_hash
-from src.models.auth import User, Role, Permission
+from src.models.auth import User, Role
 from src.core.logger import logger, belief_scope
 # [/SECTION]

View File

@@ -9,13 +9,12 @@
 # [SECTION: IMPORTS]
 import sys
-import os
 from pathlib import Path
 # Add src to path
 sys.path.append(str(Path(__file__).parent.parent.parent))
-from src.core.database import init_db, auth_engine
+from src.core.database import init_db
 from src.core.logger import logger, belief_scope
 from src.scripts.seed_permissions import seed_permissions
 # [/SECTION]

View File

@@ -0,0 +1,18 @@
+# [DEF:backend.src.services:Module]
+# @TIER: STANDARD
+# @SEMANTICS: services, package, init
+# @PURPOSE: Package initialization for services module
+# @LAYER: Core
+# @RELATION: EXPORTS -> resource_service, mapping_service
+# @NOTE: Only export services that don't cause circular imports
+# @NOTE: GitService, AuthService, LLMProviderService have circular import issues - import directly when needed
+# Only export services that don't cause circular imports
+from .mapping_service import MappingService
+from .resource_service import ResourceService
+__all__ = [
+    'MappingService',
+    'ResourceService',
+]
+# [/DEF:backend.src.services:Module]
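The note about importing GitService, AuthService, and LLMProviderService "directly when needed" points at the standard deferred-import pattern for breaking cycles: resolve the module inside the function that needs it, after all top-level imports have finished. A hedged sketch using `importlib` against a stdlib module, since the real service modules are not importable here:

```python
import importlib

def get_service(module_name: str, class_name: str):
    """Deferred import: resolve the class only when first called, not at package
    import time. By then every module in the cycle has finished initializing,
    so the lookup that would have deadlocked at top level succeeds."""
    module = importlib.import_module(module_name)
    return getattr(module, class_name)

# Works against any importable module; a stdlib stand-in for e.g. GitService:
Counter = get_service("collections", "Counter")
print(Counter("aab"))  # → Counter({'a': 2, 'b': 1})
```

In the services package itself the same effect is usually achieved with a plain `from .git_service import GitService` placed inside the consuming function body, which is what the module note prescribes.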

View File

@@ -10,11 +10,11 @@
 # @INVARIANT: Authentication must verify both credentials and account status.
 # [SECTION: IMPORTS]
-from typing import Optional, Dict, Any, List
+from typing import Dict, Any
 from sqlalchemy.orm import Session
 from ..models.auth import User, Role
 from ..core.auth.repository import AuthRepository
-from ..core.auth.security import verify_password, get_password_hash
+from ..core.auth.security import verify_password
 from ..core.auth.jwt import create_access_token
 from ..core.logger import belief_scope
 # [/SECTION]

View File

@@ -10,11 +10,10 @@
 # @INVARIANT: All Git operations must be performed on a valid local directory.
 import os
-import shutil
 import httpx
-from git import Repo, RemoteProgress
+from git import Repo
 from fastapi import HTTPException
-from typing import List, Optional
+from typing import List
 from datetime import datetime
 from src.core.logger import logger, belief_scope
 from src.models.git import GitProvider
@@ -167,7 +166,7 @@ class GitService:
 # Handle empty repository case (no commits)
 if not repo.heads and not repo.remotes:
-logger.warning(f"[create_branch][Action] Repository is empty. Creating initial commit to enable branching.")
+logger.warning("[create_branch][Action] Repository is empty. Creating initial commit to enable branching.")
 readme_path = os.path.join(repo.working_dir, "README.md")
 if not os.path.exists(readme_path):
 with open(readme_path, "w") as f:
@@ -178,7 +177,7 @@ class GitService:
 # Verify source branch exists
 try:
 repo.commit(from_branch)
-except:
+except Exception:
 logger.warning(f"[create_branch][Action] Source branch {from_branch} not found, using HEAD")
 from_branch = repo.head

View File

@@ -0,0 +1,169 @@
# [DEF:backend.src.services.llm_provider:Module]
# @TIER: STANDARD
# @SEMANTICS: service, llm, provider, encryption
# @PURPOSE: Service for managing LLM provider configurations with encrypted API keys.
# @LAYER: Domain
# @RELATION: DEPENDS_ON -> backend.src.core.database
# @RELATION: DEPENDS_ON -> backend.src.models.llm
from typing import List, Optional
from sqlalchemy.orm import Session
from ..models.llm import LLMProvider
from ..plugins.llm_analysis.models import LLMProviderConfig
from ..core.logger import belief_scope, logger
from cryptography.fernet import Fernet
import os
# [DEF:EncryptionManager:Class]
# @TIER: CRITICAL
# @PURPOSE: Handles encryption and decryption of sensitive data like API keys.
# @INVARIANT: Uses a secret key from environment or a default one (fallback only for dev).
class EncryptionManager:
# [DEF:EncryptionManager.__init__:Function]
# @PURPOSE: Initialize the encryption manager with a Fernet key.
# @PRE: ENCRYPTION_KEY env var must be set or use default dev key.
# @POST: Fernet instance ready for encryption/decryption.
def __init__(self):
self.key = os.getenv("ENCRYPTION_KEY", "ZcytYzi0iHIl4Ttr-GdAEk117aGRogkGvN3wiTxrPpE=").encode()
self.fernet = Fernet(self.key)
# [/DEF:EncryptionManager.__init__:Function]
# [DEF:EncryptionManager.encrypt:Function]
# @PURPOSE: Encrypt a plaintext string.
# @PRE: data must be a non-empty string.
# @POST: Returns encrypted string.
def encrypt(self, data: str) -> str:
return self.fernet.encrypt(data.encode()).decode()
# [/DEF:EncryptionManager.encrypt:Function]
# [DEF:EncryptionManager.decrypt:Function]
# @PURPOSE: Decrypt an encrypted string.
# @PRE: encrypted_data must be a valid Fernet-encrypted string.
# @POST: Returns original plaintext string.
def decrypt(self, encrypted_data: str) -> str:
return self.fernet.decrypt(encrypted_data.encode()).decode()
# [/DEF:EncryptionManager.decrypt:Function]
# [/DEF:EncryptionManager:Class]
# [DEF:LLMProviderService:Class]
# @TIER: STANDARD
# @PURPOSE: Service to manage LLM provider lifecycle.
class LLMProviderService:
# [DEF:LLMProviderService.__init__:Function]
# @PURPOSE: Initialize the service with database session.
# @PRE: db must be a valid SQLAlchemy Session.
# @POST: Service ready for provider operations.
def __init__(self, db: Session):
self.db = db
self.encryption = EncryptionManager()
# [/DEF:LLMProviderService.__init__:Function]
# [DEF:get_all_providers:Function]
# @TIER: STANDARD
# @PURPOSE: Returns all configured LLM providers.
# @PRE: Database connection must be active.
# @POST: Returns list of all LLMProvider records.
def get_all_providers(self) -> List[LLMProvider]:
with belief_scope("get_all_providers"):
return self.db.query(LLMProvider).all()
# [/DEF:get_all_providers:Function]
# [DEF:get_provider:Function]
# @TIER: STANDARD
# @PURPOSE: Returns a single LLM provider by ID.
# @PRE: provider_id must be a valid string.
# @POST: Returns LLMProvider or None if not found.
def get_provider(self, provider_id: str) -> Optional[LLMProvider]:
with belief_scope("get_provider"):
return self.db.query(LLMProvider).filter(LLMProvider.id == provider_id).first()
# [/DEF:get_provider:Function]
# [DEF:create_provider:Function]
# @TIER: STANDARD
# @PURPOSE: Creates a new LLM provider with encrypted API key.
# @PRE: config must contain valid provider configuration.
# @POST: New provider created and persisted to database.
def create_provider(self, config: LLMProviderConfig) -> LLMProvider:
with belief_scope("create_provider"):
encrypted_key = self.encryption.encrypt(config.api_key)
db_provider = LLMProvider(
provider_type=config.provider_type.value,
name=config.name,
base_url=config.base_url,
api_key=encrypted_key,
default_model=config.default_model,
is_active=config.is_active
)
self.db.add(db_provider)
self.db.commit()
self.db.refresh(db_provider)
return db_provider
# [/DEF:create_provider:Function]
# [DEF:update_provider:Function]
# @TIER: STANDARD
# @PURPOSE: Updates an existing LLM provider.
# @PRE: provider_id must exist, config must be valid.
# @POST: Provider updated and persisted to database.
def update_provider(self, provider_id: str, config: LLMProviderConfig) -> Optional[LLMProvider]:
with belief_scope("update_provider"):
db_provider = self.get_provider(provider_id)
if not db_provider:
return None
db_provider.provider_type = config.provider_type.value
db_provider.name = config.name
db_provider.base_url = config.base_url
# Only update API key if provided (not None and not empty)
if config.api_key is not None and config.api_key != "":
db_provider.api_key = self.encryption.encrypt(config.api_key)
db_provider.default_model = config.default_model
db_provider.is_active = config.is_active
self.db.commit()
self.db.refresh(db_provider)
return db_provider
# [/DEF:update_provider:Function]
# [DEF:delete_provider:Function]
# @TIER: STANDARD
# @PURPOSE: Deletes an LLM provider.
# @PRE: provider_id must exist.
# @POST: Provider removed from database.
def delete_provider(self, provider_id: str) -> bool:
with belief_scope("delete_provider"):
db_provider = self.get_provider(provider_id)
if not db_provider:
return False
self.db.delete(db_provider)
self.db.commit()
return True
# [/DEF:delete_provider:Function]
# [DEF:get_decrypted_api_key:Function]
# @TIER: STANDARD
# @PURPOSE: Returns the decrypted API key for a provider.
# @PRE: provider_id must exist with valid encrypted key.
# @POST: Returns decrypted API key or None on failure.
def get_decrypted_api_key(self, provider_id: str) -> Optional[str]:
with belief_scope("get_decrypted_api_key"):
db_provider = self.get_provider(provider_id)
if not db_provider:
logger.warning(f"[get_decrypted_api_key] Provider {provider_id} not found in database")
return None
logger.info(f"[get_decrypted_api_key] Provider found: {db_provider.id}")
logger.info(f"[get_decrypted_api_key] Encrypted API key length: {len(db_provider.api_key) if db_provider.api_key else 0}")
try:
decrypted_key = self.encryption.decrypt(db_provider.api_key)
logger.info(f"[get_decrypted_api_key] Decryption successful, key length: {len(decrypted_key) if decrypted_key else 0}")
return decrypted_key
except Exception as e:
logger.error(f"[get_decrypted_api_key] Decryption failed: {str(e)}")
return None
# [/DEF:get_decrypted_api_key:Function]
# [/DEF:LLMProviderService:Class]
# [/DEF:backend.src.services.llm_provider:Module]
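`EncryptionManager` is a thin wrapper over `cryptography`'s Fernet. A minimal round-trip sketch (using a throwaway generated key rather than the application's `ENCRYPTION_KEY`):

```python
# Minimal round-trip sketch of the Fernet usage inside EncryptionManager.
# The key here is freshly generated for illustration, not the application's key.
from cryptography.fernet import Fernet

key = Fernet.generate_key()       # 32-byte url-safe base64-encoded key
fernet = Fernet(key)

token = fernet.encrypt(b"my-api-key")   # bytes; includes a random IV, so output varies per call
plain = fernet.decrypt(token)           # raises cryptography.fernet.InvalidToken on a wrong key

assert plain == b"my-api-key"
```

Note that Fernet output is nondeterministic (random IV plus timestamp), so two encryptions of the same plaintext never compare equal; equality checks must go through `decrypt()`.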

View File

@@ -0,0 +1,251 @@
# [DEF:backend.src.services.resource_service:Module]
# @TIER: STANDARD
# @SEMANTICS: service, resources, dashboards, datasets, tasks, git
# @PURPOSE: Shared service for fetching resource data with Git status and task status
# @LAYER: Service
# @RELATION: DEPENDS_ON -> backend.src.core.superset_client
# @RELATION: DEPENDS_ON -> backend.src.core.task_manager
# @RELATION: DEPENDS_ON -> backend.src.services.git_service
# @INVARIANT: All resources include metadata about their current state
# [SECTION: IMPORTS]
from typing import List, Dict, Optional, Any
from ..core.superset_client import SupersetClient
from ..core.task_manager.models import Task
from ..services.git_service import GitService
from ..core.logger import logger, belief_scope
# [/SECTION]
# [DEF:ResourceService:Class]
# @PURPOSE: Provides centralized access to resource data with enhanced metadata
class ResourceService:
# [DEF:__init__:Function]
# @PURPOSE: Initialize the resource service with dependencies
# @PRE: None
# @POST: ResourceService is ready to fetch resources
def __init__(self):
with belief_scope("ResourceService.__init__"):
self.git_service = GitService()
logger.info("[ResourceService][Action] Initialized ResourceService")
# [/DEF:__init__:Function]
# [DEF:get_dashboards_with_status:Function]
# @PURPOSE: Fetch dashboards from environment with Git status and last task status
# @PRE: env is a valid Environment object
# @POST: Returns list of dashboards with enhanced metadata
# @PARAM: env (Environment) - The environment to fetch from
# @PARAM: tasks (List[Task]) - List of tasks to check for status
# @RETURN: List[Dict] - Dashboards with git_status and last_task fields
# @RELATION: CALLS -> SupersetClient.get_dashboards_summary
# @RELATION: CALLS -> self._get_git_status_for_dashboard
# @RELATION: CALLS -> self._get_last_task_for_resource
async def get_dashboards_with_status(
self,
env: Any,
tasks: Optional[List[Task]] = None
) -> List[Dict[str, Any]]:
with belief_scope("get_dashboards_with_status", f"env={env.id}"):
client = SupersetClient(env)
dashboards = client.get_dashboards_summary()
# Enhance each dashboard with Git status and task status
result = []
for dashboard in dashboards:
# dashboard is already a dict, no need to call .dict()
dashboard_dict = dashboard
dashboard_id = dashboard_dict.get('id')
# Get Git status if repo exists
git_status = self._get_git_status_for_dashboard(dashboard_id)
dashboard_dict['git_status'] = git_status
# Get last task status
last_task = self._get_last_task_for_resource(
f"dashboard-{dashboard_id}",
tasks
)
dashboard_dict['last_task'] = last_task
result.append(dashboard_dict)
logger.info(f"[ResourceService][Coherence:OK] Fetched {len(result)} dashboards with status")
return result
# [/DEF:get_dashboards_with_status:Function]
# [DEF:get_datasets_with_status:Function]
# @PURPOSE: Fetch datasets from environment with mapping progress and last task status
# @PRE: env is a valid Environment object
# @POST: Returns list of datasets with enhanced metadata
# @PARAM: env (Environment) - The environment to fetch from
# @PARAM: tasks (List[Task]) - List of tasks to check for status
# @RETURN: List[Dict] - Datasets with mapped_fields and last_task fields
# @RELATION: CALLS -> SupersetClient.get_datasets_summary
# @RELATION: CALLS -> self._get_last_task_for_resource
async def get_datasets_with_status(
self,
env: Any,
tasks: Optional[List[Task]] = None
) -> List[Dict[str, Any]]:
with belief_scope("get_datasets_with_status", f"env={env.id}"):
client = SupersetClient(env)
datasets = client.get_datasets_summary()
# Enhance each dataset with task status
result = []
for dataset in datasets:
# dataset is already a dict, no need to call .dict()
dataset_dict = dataset
dataset_id = dataset_dict.get('id')
# Get last task status
last_task = self._get_last_task_for_resource(
f"dataset-{dataset_id}",
tasks
)
dataset_dict['last_task'] = last_task
result.append(dataset_dict)
logger.info(f"[ResourceService][Coherence:OK] Fetched {len(result)} datasets with status")
return result
# [/DEF:get_datasets_with_status:Function]
# [DEF:get_activity_summary:Function]
# @PURPOSE: Get summary of active and recent tasks for the activity indicator
# @PRE: tasks is a list of Task objects
# @POST: Returns summary with active_count and recent_tasks
# @PARAM: tasks (List[Task]) - List of tasks to summarize
# @RETURN: Dict - Activity summary
def get_activity_summary(self, tasks: List[Task]) -> Dict[str, Any]:
with belief_scope("get_activity_summary"):
# Count active (RUNNING, WAITING_INPUT) tasks
active_tasks = [
t for t in tasks
if t.status in ['RUNNING', 'WAITING_INPUT']
]
# Get recent tasks (last 5)
recent_tasks = sorted(
tasks,
key=lambda t: t.created_at,
reverse=True
)[:5]
# Format recent tasks for frontend
recent_tasks_formatted = []
for task in recent_tasks:
resource_name = self._extract_resource_name_from_task(task)
recent_tasks_formatted.append({
'task_id': str(task.id),
'resource_name': resource_name,
'resource_type': self._extract_resource_type_from_task(task),
'status': task.status,
'started_at': task.created_at.isoformat() if task.created_at else None
})
return {
'active_count': len(active_tasks),
'recent_tasks': recent_tasks_formatted
}
# [/DEF:get_activity_summary:Function]
# [DEF:_get_git_status_for_dashboard:Function]
# @PURPOSE: Get Git sync status for a dashboard
# @PRE: dashboard_id is a valid integer
# @POST: Returns git status or None if no repo exists
# @PARAM: dashboard_id (int) - The dashboard ID
# @RETURN: Optional[Dict] - Git status with branch and sync_status
# @RELATION: CALLS -> GitService.get_repo
def _get_git_status_for_dashboard(self, dashboard_id: int) -> Optional[Dict[str, Any]]:
try:
repo = self.git_service.get_repo(dashboard_id)
if not repo:
return None
# Check if there are uncommitted changes
try:
# Get current branch
branch = repo.active_branch.name
# Check for uncommitted changes
is_dirty = repo.is_dirty()
# Check for unpushed commits against the configured upstream, if any
tracking = repo.active_branch.tracking_branch()
unpushed = len(list(repo.iter_commits(f'{tracking.name}..{branch}'))) if tracking else 0
if is_dirty or unpushed > 0:
sync_status = 'DIFF'
else:
sync_status = 'OK'
return {
'branch': branch,
'sync_status': sync_status
}
except Exception:
logger.warning(f"[ResourceService][Warning] Failed to get git status for dashboard {dashboard_id}")
return None
except Exception:
# No repo exists for this dashboard
return None
# [/DEF:_get_git_status_for_dashboard:Function]
# [DEF:_get_last_task_for_resource:Function]
# @PURPOSE: Get the most recent task for a specific resource
# @PRE: resource_id is a valid string
# @POST: Returns task summary or None if no tasks found
# @PARAM: resource_id (str) - The resource identifier (e.g., "dashboard-123")
# @PARAM: tasks (Optional[List[Task]]) - List of tasks to search
# @RETURN: Optional[Dict] - Task summary with task_id and status
def _get_last_task_for_resource(
self,
resource_id: str,
tasks: Optional[List[Task]] = None
) -> Optional[Dict[str, Any]]:
if not tasks:
return None
# Filter tasks for this resource
resource_tasks = []
for task in tasks:
params = task.params or {}
if params.get('resource_id') == resource_id:
resource_tasks.append(task)
if not resource_tasks:
return None
# Get most recent task
last_task = max(resource_tasks, key=lambda t: t.created_at)
return {
'task_id': str(last_task.id),
'status': last_task.status
}
# [/DEF:_get_last_task_for_resource:Function]
# [DEF:_extract_resource_name_from_task:Function]
# @PURPOSE: Extract resource name from task params
# @PRE: task is a valid Task object
# @POST: Returns resource name or task ID
# @PARAM: task (Task) - The task to extract from
# @RETURN: str - Resource name or fallback
def _extract_resource_name_from_task(self, task: Task) -> str:
params = task.params or {}
return params.get('resource_name', f"Task {task.id}")
# [/DEF:_extract_resource_name_from_task:Function]
# [DEF:_extract_resource_type_from_task:Function]
# @PURPOSE: Extract resource type from task params
# @PRE: task is a valid Task object
# @POST: Returns resource type or 'unknown'
# @PARAM: task (Task) - The task to extract from
# @RETURN: str - Resource type
def _extract_resource_type_from_task(self, task: Task) -> str:
params = task.params or {}
return params.get('resource_type', 'unknown')
# [/DEF:_extract_resource_type_from_task:Function]
# [/DEF:ResourceService:Class]
# [/DEF:backend.src.services.resource_service:Module]

Binary file not shown.

View File

@@ -0,0 +1,76 @@
#!/usr/bin/env python3
"""Debug script to test Superset API authentication"""
from pprint import pprint
from src.core.superset_client import SupersetClient
from src.core.config_manager import ConfigManager
def main():
print("Debugging Superset API authentication...")
config = ConfigManager()
# Select first available environment
environments = config.get_environments()
if not environments:
print("No environments configured")
return
env = environments[0]
print(f"\nTesting environment: {env.name}")
print(f"URL: {env.url}")
try:
# Test API client authentication
print("\n--- Testing API Authentication ---")
client = SupersetClient(env)
tokens = client.authenticate()
print("\nAPI Auth Success!")
print(f"Access Token: {tokens.get('access_token', 'N/A')}")
print(f"CSRF Token: {tokens.get('csrf_token', 'N/A')}")
# Debug cookies from session
print("\n--- Session Cookies ---")
for cookie in client.network.session.cookies:
print(f"{cookie.name}={cookie.value}")
# Test accessing UI via requests
print("\n--- Testing UI Access ---")
ui_url = env.url.rstrip('/').replace('/api/v1', '')
print(f"UI URL: {ui_url}")
# Try to access UI home page
ui_response = client.network.session.get(ui_url, timeout=30, allow_redirects=True)
print(f"Status Code: {ui_response.status_code}")
print(f"URL: {ui_response.url}")
# Check response headers
print("\n--- Response Headers ---")
pprint(dict(ui_response.headers))
print("\n--- Response Content Preview (200 chars) ---")
print(repr(ui_response.text[:200]))
if ui_response.status_code == 200:
print("\nUI Access: Success")
# Try to access a dashboard
# For testing, just use the home page
print("\n--- Checking if login is required ---")
if "login" in ui_response.url.lower() or "login" in ui_response.text.lower():
print("❌ Not logged in to UI")
else:
print("✅ Logged in to UI")
except Exception as e:
print(f"\n❌ Error: {type(e).__name__}: {e}")
import traceback
print("\nStack Trace:")
print(traceback.format_exc())
if __name__ == "__main__":
main()

View File

@@ -0,0 +1,44 @@
#!/usr/bin/env python3
"""Test script to debug API key decryption issue."""
from src.core.database import SessionLocal
from src.models.llm import LLMProvider
from cryptography.fernet import Fernet
import os
# Get the encryption key
key = os.getenv("ENCRYPTION_KEY", "ZcytYzi0iHIl4Ttr-GdAEk117aGRogkGvN3wiTxrPpE=").encode()
print(f"Encryption key (first 20 chars): {key[:20]}")
print(f"Encryption key length: {len(key)}")
# Create Fernet instance
fernet = Fernet(key)
# Get provider from database
db = SessionLocal()
provider = db.query(LLMProvider).filter(LLMProvider.id == '6c899741-4108-4196-aea4-f38ad2f0150e').first()
if provider:
print("\nProvider found:")
print(f" ID: {provider.id}")
print(f" Name: {provider.name}")
print(f" Encrypted API Key (first 50 chars): {provider.api_key[:50]}")
print(f" Encrypted API Key Length: {len(provider.api_key)}")
# Test decryption
print("\nAttempting decryption...")
try:
decrypted = fernet.decrypt(provider.api_key.encode()).decode()
print("Decryption successful!")
print(f" Decrypted key length: {len(decrypted)}")
print(f" Decrypted key (first 8 chars): {decrypted[:8]}")
print(f" Decrypted key is empty: {len(decrypted) == 0}")
except Exception as e:
print(f"Decryption failed with error: {e}")
print(f"Error type: {type(e).__name__}")
import traceback
traceback.print_exc()
else:
print("Provider not found")
db.close()

View File

@@ -0,0 +1 @@
[{"key[": 20, ")\n\n# Create Fernet instance\nfernet = Fernet(key)\n\n# Test encrypting an empty string\nempty_encrypted = fernet.encrypt(b\"": ".", "print(f": "nEncrypted empty string: {empty_encrypted"}, {"test-api-key-12345\"\ntest_encrypted = fernet.encrypt(test_key.encode()).decode()\nprint(f": "nEncrypted test key: {test_encrypted"}, {"gAAAAABphhwSZie0OwXjJ78Fk-c4Uo6doNJXipX49AX7Bypzp4ohiRX3hXPXKb45R1vhNUOqbm6Ke3-eRwu_KdWMZ9chFBKmqw==\"\nprint(f": "nStored encrypted key: {stored_key"}, {"len(stored_key)}": "Check if stored key matches empty string encryption\nif stored_key == empty_encrypted:\n print(", "string!": "else:\n print(", "print(f": "mpty string encryption: {empty_encrypted"}, {"stored_key}": "Try to decrypt the stored key\ntry:\n decrypted = fernet.decrypt(stored_key.encode()).decode()\n print(f", "print(f": "ecrypted key length: {len(decrypted)"}, {")\nexcept Exception as e:\n print(f": "nDecryption failed with error: {e"}]

View File

@@ -1,5 +1,4 @@
 import sys
-import os
 from pathlib import Path
 # Add src to path
@@ -8,7 +7,7 @@ sys.path.append(str(Path(__file__).parent.parent / "src"))
 import pytest
 from sqlalchemy import create_engine
 from sqlalchemy.orm import sessionmaker
-from src.core.database import Base, get_auth_db
+from src.core.database import Base
 from src.models.auth import User, Role, Permission, ADGroupMapping
 from src.services.auth_service import AuthService
 from src.core.auth.repository import AuthRepository

View File

@@ -0,0 +1,67 @@
# [DEF:backend.tests.test_dashboards_api:Module]
# @TIER: STANDARD
# @PURPOSE: Contract-driven tests for Dashboard Hub API
# @RELATION: TESTS -> backend.src.api.routes.dashboards
from fastapi.testclient import TestClient
from unittest.mock import MagicMock, patch
from src.app import app
from src.api.routes.dashboards import DashboardsResponse
client = TestClient(app)
# [DEF:test_get_dashboards_success:Function]
# @TEST: GET /api/dashboards returns 200 and valid schema
# @PRE: env_id exists
# @POST: Response matches DashboardsResponse schema
def test_get_dashboards_success():
with patch("src.api.routes.dashboards.get_config_manager") as mock_config, \
patch("src.api.routes.dashboards.get_resource_service") as mock_service, \
patch("src.api.routes.dashboards.has_permission") as mock_perm:
# Mock environment
mock_env = MagicMock()
mock_env.id = "prod"
mock_config.return_value.get_environments.return_value = [mock_env]
# Mock resource service response
mock_service.return_value.get_dashboards_with_status.return_value = [
{
"id": 1,
"title": "Sales Report",
"slug": "sales",
"git_status": {"branch": "main", "sync_status": "OK"},
"last_task": {"task_id": "task-1", "status": "SUCCESS"}
}
]
# Mock permission
mock_perm.return_value = lambda: True
response = client.get("/api/dashboards?env_id=prod")
assert response.status_code == 200
data = response.json()
assert "dashboards" in data
assert len(data["dashboards"]) == 1
assert data["dashboards"][0]["title"] == "Sales Report"
# Validate against Pydantic model
DashboardsResponse(**data)
# [DEF:test_get_dashboards_env_not_found:Function]
# @TEST: GET /api/dashboards returns 404 if env_id missing
# @PRE: env_id does not exist
# @POST: Returns 404 error
def test_get_dashboards_env_not_found():
with patch("src.api.routes.dashboards.get_config_manager") as mock_config, \
patch("src.api.routes.dashboards.has_permission") as mock_perm:
mock_config.return_value.get_environments.return_value = []
mock_perm.return_value = lambda: True
response = client.get("/api/dashboards?env_id=nonexistent")
assert response.status_code == 404
assert "Environment not found" in response.json()["detail"]
# [/DEF:backend.tests.test_dashboards_api:Module]

View File

@@ -0,0 +1,395 @@
# [DEF:test_log_persistence:Module]
# @SEMANTICS: test, log, persistence, unit_test
# @PURPOSE: Unit tests for TaskLogPersistenceService.
# @LAYER: Test
# @RELATION: TESTS -> TaskLogPersistenceService
# @TIER: STANDARD
# [SECTION: IMPORTS]
from datetime import datetime
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from src.core.task_manager.persistence import TaskLogPersistenceService
from src.core.task_manager.models import LogEntry
# [/SECTION]
# [DEF:TestLogPersistence:Class]
# @PURPOSE: Test suite for TaskLogPersistenceService.
# @TIER: STANDARD
class TestLogPersistence:
# [DEF:setup_class:Function]
# @PURPOSE: Setup test database and service instance.
# @PRE: None.
# @POST: In-memory database and service instance created.
@classmethod
def setup_class(cls):
"""Create an in-memory database for testing."""
cls.engine = create_engine("sqlite:///:memory:")
cls.SessionLocal = sessionmaker(bind=cls.engine)
cls.service = TaskLogPersistenceService(cls.engine)
# [/DEF:setup_class:Function]
# [DEF:teardown_class:Function]
# @PURPOSE: Clean up test database.
# @PRE: None.
# @POST: Database disposed.
@classmethod
def teardown_class(cls):
"""Dispose of the database engine."""
cls.engine.dispose()
# [/DEF:teardown_class:Function]
# [DEF:setup_method:Function]
# @PURPOSE: Setup for each test method.
# @PRE: None.
# @POST: Fresh database session created.
def setup_method(self):
"""Create a new session for each test."""
self.session = self.SessionLocal()
# [/DEF:setup_method:Function]
# [DEF:teardown_method:Function]
# @PURPOSE: Cleanup after each test method.
# @PRE: None.
# @POST: Session closed and rolled back.
def teardown_method(self):
"""Close the session after each test."""
self.session.close()
# [/DEF:teardown_method:Function]
# [DEF:test_add_log_single:Function]
# @PURPOSE: Test adding a single log entry.
# @PRE: Service and session initialized.
# @POST: Log entry persisted to database.
def test_add_log_single(self):
"""Test adding a single log entry."""
entry = LogEntry(
task_id="test-task-1",
timestamp=datetime.now(),
level="INFO",
source="test_source",
message="Test message"
)
self.service.add_log(entry)
# Query the database
result = self.session.query(LogEntry).filter_by(task_id="test-task-1").first()
assert result is not None
assert result.level == "INFO"
assert result.source == "test_source"
assert result.message == "Test message"
# [/DEF:test_add_log_single:Function]
# [DEF:test_add_log_batch:Function]
# @PURPOSE: Test adding multiple log entries in batch.
# @PRE: Service and session initialized.
# @POST: All log entries persisted to database.
def test_add_log_batch(self):
"""Test adding multiple log entries in batch."""
entries = [
LogEntry(
task_id="test-task-2",
timestamp=datetime.now(),
level="INFO",
source="source1",
message="Message 1"
),
LogEntry(
task_id="test-task-2",
timestamp=datetime.now(),
level="WARNING",
source="source2",
message="Message 2"
),
LogEntry(
task_id="test-task-2",
timestamp=datetime.now(),
level="ERROR",
source="source3",
message="Message 3"
),
]
self.service.add_logs(entries)
# Query the database
results = self.session.query(LogEntry).filter_by(task_id="test-task-2").all()
assert len(results) == 3
assert results[0].level == "INFO"
assert results[1].level == "WARNING"
assert results[2].level == "ERROR"
# [/DEF:test_add_log_batch:Function]
# [DEF:test_get_logs_by_task_id:Function]
# @PURPOSE: Test retrieving logs by task ID.
# @PRE: Service and session initialized, logs exist.
# @POST: Returns logs for the specified task.
def test_get_logs_by_task_id(self):
"""Test retrieving logs by task ID."""
# Add test logs
entries = [
LogEntry(
task_id="test-task-3",
timestamp=datetime.now(),
level="INFO",
source="source1",
message=f"Message {i}"
)
for i in range(5)
]
self.service.add_logs(entries)
# Retrieve logs
logs = self.service.get_logs("test-task-3")
assert len(logs) == 5
assert all(log.task_id == "test-task-3" for log in logs)
# [/DEF:test_get_logs_by_task_id:Function]
# [DEF:test_get_logs_with_filters:Function]
# @PURPOSE: Test retrieving logs with level and source filters.
# @PRE: Service and session initialized, logs exist.
# @POST: Returns filtered logs.
def test_get_logs_with_filters(self):
"""Test retrieving logs with level and source filters."""
# Add test logs with different levels and sources
entries = [
LogEntry(
task_id="test-task-4",
timestamp=datetime.now(),
level="INFO",
source="api",
message="Info message"
),
LogEntry(
task_id="test-task-4",
timestamp=datetime.now(),
level="WARNING",
source="api",
message="Warning message"
),
LogEntry(
task_id="test-task-4",
timestamp=datetime.now(),
level="ERROR",
source="storage",
message="Error message"
),
]
self.service.add_logs(entries)
# Test level filter
warning_logs = self.service.get_logs("test-task-4", level="WARNING")
assert len(warning_logs) == 1
assert warning_logs[0].level == "WARNING"
# Test source filter
api_logs = self.service.get_logs("test-task-4", source="api")
assert len(api_logs) == 2
assert all(log.source == "api" for log in api_logs)
# Test combined filters
api_warning_logs = self.service.get_logs("test-task-4", level="WARNING", source="api")
assert len(api_warning_logs) == 1
# [/DEF:test_get_logs_with_filters:Function]
# [DEF:test_get_logs_with_pagination:Function]
# @PURPOSE: Test retrieving logs with pagination.
# @PRE: Service and session initialized, logs exist.
# @POST: Returns paginated logs.
def test_get_logs_with_pagination(self):
"""Test retrieving logs with pagination."""
# Add 15 test logs
entries = [
LogEntry(
task_id="test-task-5",
timestamp=datetime.now(),
level="INFO",
source="test",
message=f"Message {i}"
)
for i in range(15)
]
self.service.add_logs(entries)
# Test first page
page1 = self.service.get_logs("test-task-5", limit=10, offset=0)
assert len(page1) == 10
# Test second page
page2 = self.service.get_logs("test-task-5", limit=10, offset=10)
assert len(page2) == 5
# [/DEF:test_get_logs_with_pagination:Function]
# [DEF:test_get_logs_with_search:Function]
# @PURPOSE: Test retrieving logs with search query.
# @PRE: Service and session initialized, logs exist.
# @POST: Returns logs matching search query.
def test_get_logs_with_search(self):
"""Test retrieving logs with search query."""
# Add test logs
entries = [
LogEntry(
task_id="test-task-6",
timestamp=datetime.now(),
level="INFO",
source="api",
message="User authentication successful"
),
LogEntry(
task_id="test-task-6",
timestamp=datetime.now(),
level="ERROR",
source="api",
message="Failed to connect to database"
),
LogEntry(
task_id="test-task-6",
timestamp=datetime.now(),
level="INFO",
source="storage",
message="File saved successfully"
),
]
self.service.add_logs(entries)
# Test search for "authentication"
auth_logs = self.service.get_logs("test-task-6", search="authentication")
assert len(auth_logs) == 1
assert "authentication" in auth_logs[0].message.lower()
# Test search for "failed"
failed_logs = self.service.get_logs("test-task-6", search="failed")
assert len(failed_logs) == 1
assert "failed" in failed_logs[0].message.lower()
# [/DEF:test_get_logs_with_search:Function]
# [DEF:test_get_log_stats:Function]
# @PURPOSE: Test retrieving log statistics.
# @PRE: Service and session initialized, logs exist.
# @POST: Returns statistics grouped by level and source.
def test_get_log_stats(self):
"""Test retrieving log statistics."""
# Add test logs
entries = [
LogEntry(
task_id="test-task-7",
timestamp=datetime.now(),
level="INFO",
source="api",
message="Info 1"
),
LogEntry(
task_id="test-task-7",
timestamp=datetime.now(),
level="INFO",
source="api",
message="Info 2"
),
LogEntry(
task_id="test-task-7",
timestamp=datetime.now(),
level="WARNING",
source="api",
message="Warning 1"
),
LogEntry(
task_id="test-task-7",
timestamp=datetime.now(),
level="ERROR",
source="storage",
message="Error 1"
),
]
self.service.add_logs(entries)
# Get stats
stats = self.service.get_log_stats("test-task-7")
assert stats is not None
assert stats["by_level"]["INFO"] == 2
assert stats["by_level"]["WARNING"] == 1
assert stats["by_level"]["ERROR"] == 1
assert stats["by_source"]["api"] == 3
assert stats["by_source"]["storage"] == 1
# [/DEF:test_get_log_stats:Function]
# [DEF:test_get_log_sources:Function]
# @PURPOSE: Test retrieving unique log sources.
# @PRE: Service and session initialized, logs exist.
# @POST: Returns list of unique sources.
def test_get_log_sources(self):
"""Test retrieving unique log sources."""
# Add test logs
entries = [
LogEntry(
task_id="test-task-8",
timestamp=datetime.now(),
level="INFO",
source="api",
message="Message 1"
),
LogEntry(
task_id="test-task-8",
timestamp=datetime.now(),
level="INFO",
source="storage",
message="Message 2"
),
LogEntry(
task_id="test-task-8",
timestamp=datetime.now(),
level="INFO",
source="git",
message="Message 3"
),
]
self.service.add_logs(entries)
# Get sources
sources = self.service.get_log_sources("test-task-8")
assert len(sources) == 3
assert "api" in sources
assert "storage" in sources
assert "git" in sources
# [/DEF:test_get_log_sources:Function]
# [DEF:test_delete_logs_by_task_id:Function]
# @PURPOSE: Test deleting logs by task ID.
# @PRE: Service and session initialized, logs exist.
# @POST: Logs for the task are deleted.
def test_delete_logs_by_task_id(self):
"""Test deleting logs by task ID."""
# Add test logs
entries = [
LogEntry(
task_id="test-task-9",
timestamp=datetime.now(),
level="INFO",
source="test",
message=f"Message {i}"
)
for i in range(3)
]
self.service.add_logs(entries)
# Verify logs exist
logs_before = self.service.get_logs("test-task-9")
assert len(logs_before) == 3
# Delete logs
self.service.delete_logs("test-task-9")
# Verify logs are deleted
logs_after = self.service.get_logs("test-task-9")
assert len(logs_after) == 0
# [/DEF:test_delete_logs_by_task_id:Function]
# [/DEF:TestLogPersistence:Class]
# [/DEF:test_log_persistence:Module]
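The pagination and search semantics the suite above pins down (limit/offset slicing, case-insensitive substring search, unique-source listing) can be satisfied by a minimal in-memory service. This sketch is an assumption for illustration only: the real service is session/database-backed, and the `InMemoryLogService` class here is hypothetical, with method names mirroring the tests.

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical entry shape, copied from the fields the tests construct.
@dataclass
class LogEntry:
    task_id: str
    timestamp: datetime
    level: str
    source: str
    message: str

class InMemoryLogService:
    """Minimal stand-in implementing the query contract the tests assert."""

    def __init__(self):
        self._entries = []

    def add_logs(self, entries):
        self._entries.extend(entries)

    def get_logs(self, task_id, limit=None, offset=0, search=None):
        rows = [e for e in self._entries if e.task_id == task_id]
        if search:
            # Case-insensitive substring match, as test_get_logs_with_search expects
            needle = search.lower()
            rows = [e for e in rows if needle in e.message.lower()]
        if limit is None:
            return rows[offset:]
        return rows[offset:offset + limit]

    def get_log_sources(self, task_id):
        # Unique sources for a task, as test_get_log_sources expects
        return sorted({e.source for e in self._entries if e.task_id == task_id})
```

With 15 entries stored, `get_logs(task_id, limit=10, offset=0)` returns the first page of 10 and `offset=10` returns the remaining 5, matching the pagination test above.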


@@ -1,14 +1,29 @@
import pytest
from src.core.logger import (
    belief_scope,
    logger,
    configure_logger,
    get_task_log_level,
    should_log_task_level
)
from src.core.config_models import LoggingConfig

# [DEF:test_belief_scope_logs_entry_action_exit_at_debug:Function]
# @PURPOSE: Test that belief_scope generates [ID][Entry], [ID][Action], and [ID][Exit] logs at DEBUG level.
# @PRE: belief_scope is available. caplog fixture is used. Logger configured to DEBUG.
# @POST: Logs are verified to contain Entry, Action, and Exit tags at DEBUG level.
def test_belief_scope_logs_entry_action_exit_at_debug(caplog):
    """Test that belief_scope generates [ID][Entry], [ID][Action], and [ID][Exit] logs at DEBUG level."""
    # Configure logger to DEBUG level
    config = LoggingConfig(
        level="DEBUG",
        task_log_level="DEBUG",
        enable_belief_state=True
    )
    configure_logger(config)
    caplog.set_level("DEBUG")
    with belief_scope("TestFunction"):
        logger.info("Doing something important")
@@ -19,16 +34,28 @@ def test_belief_scope_logs_entry_action_exit(caplog):
    assert any("[TestFunction][Entry]" in msg for msg in log_messages), "Entry log not found"
    assert any("[TestFunction][Action] Doing something important" in msg for msg in log_messages), "Action log not found"
    assert any("[TestFunction][Exit]" in msg for msg in log_messages), "Exit log not found"
    # Reset to INFO
    config = LoggingConfig(level="INFO", task_log_level="INFO", enable_belief_state=True)
    configure_logger(config)
# [/DEF:test_belief_scope_logs_entry_action_exit_at_debug:Function]

# [DEF:test_belief_scope_error_handling:Function]
# @PURPOSE: Test that belief_scope logs Coherence:Failed on exception.
# @PRE: belief_scope is available. caplog fixture is used. Logger configured to DEBUG.
# @POST: Logs are verified to contain Coherence:Failed tag.
def test_belief_scope_error_handling(caplog):
    """Test that belief_scope logs Coherence:Failed on exception."""
    # Configure logger to DEBUG level
    config = LoggingConfig(
        level="DEBUG",
        task_log_level="DEBUG",
        enable_belief_state=True
    )
    configure_logger(config)
    caplog.set_level("DEBUG")
    with pytest.raises(ValueError):
        with belief_scope("FailingFunction"):
@@ -39,16 +66,28 @@ def test_belief_scope_error_handling(caplog):
    assert any("[FailingFunction][Entry]" in msg for msg in log_messages), "Entry log not found"
    assert any("[FailingFunction][Coherence:Failed]" in msg for msg in log_messages), "Failed coherence log not found"
    # Exit should not be logged on failure
    # Reset to INFO
    config = LoggingConfig(level="INFO", task_log_level="INFO", enable_belief_state=True)
    configure_logger(config)
# [/DEF:test_belief_scope_error_handling:Function]

# [DEF:test_belief_scope_success_coherence:Function]
# @PURPOSE: Test that belief_scope logs Coherence:OK on success.
# @PRE: belief_scope is available. caplog fixture is used. Logger configured to DEBUG.
# @POST: Logs are verified to contain Coherence:OK tag.
def test_belief_scope_success_coherence(caplog):
    """Test that belief_scope logs Coherence:OK on success."""
    # Configure logger to DEBUG level
    config = LoggingConfig(
        level="DEBUG",
        task_log_level="DEBUG",
        enable_belief_state=True
    )
    configure_logger(config)
    caplog.set_level("DEBUG")
    with belief_scope("SuccessFunction"):
        pass
@@ -56,4 +95,119 @@ def test_belief_scope_success_coherence(caplog):
    log_messages = [record.message for record in caplog.records]
    assert any("[SuccessFunction][Coherence:OK]" in msg for msg in log_messages), "Success coherence log not found"
    # Reset to INFO
    config = LoggingConfig(level="INFO", task_log_level="INFO", enable_belief_state=True)
    configure_logger(config)
# [/DEF:test_belief_scope_success_coherence:Function]
# [DEF:test_belief_scope_not_visible_at_info:Function]
# @PURPOSE: Test that belief_scope Entry/Exit/Coherence logs are NOT visible at INFO level.
# @PRE: belief_scope is available. caplog fixture is used.
# @POST: Entry/Exit/Coherence logs are not captured at INFO level.
def test_belief_scope_not_visible_at_info(caplog):
"""Test that belief_scope Entry/Exit/Coherence logs are NOT visible at INFO level."""
caplog.set_level("INFO")
with belief_scope("InfoLevelFunction"):
logger.info("Doing something important")
log_messages = [record.message for record in caplog.records]
# Action log should be visible
assert any("[InfoLevelFunction][Action] Doing something important" in msg for msg in log_messages), "Action log not found"
# Entry/Exit/Coherence should NOT be visible at INFO level
assert not any("[InfoLevelFunction][Entry]" in msg for msg in log_messages), "Entry log should not be visible at INFO"
assert not any("[InfoLevelFunction][Exit]" in msg for msg in log_messages), "Exit log should not be visible at INFO"
assert not any("[InfoLevelFunction][Coherence:OK]" in msg for msg in log_messages), "Coherence log should not be visible at INFO"
# [/DEF:test_belief_scope_not_visible_at_info:Function]
# [DEF:test_task_log_level_default:Function]
# @PURPOSE: Test that default task log level is INFO.
# @PRE: None.
# @POST: Default level is INFO.
def test_task_log_level_default():
"""Test that default task log level is INFO."""
level = get_task_log_level()
assert level == "INFO"
# [/DEF:test_task_log_level_default:Function]
# [DEF:test_should_log_task_level:Function]
# @PURPOSE: Test that should_log_task_level correctly filters log levels.
# @PRE: None.
# @POST: Filtering works correctly for all level combinations.
def test_should_log_task_level():
"""Test that should_log_task_level correctly filters log levels."""
# Default level is INFO
assert should_log_task_level("ERROR") is True, "ERROR should be logged at INFO threshold"
assert should_log_task_level("WARNING") is True, "WARNING should be logged at INFO threshold"
assert should_log_task_level("INFO") is True, "INFO should be logged at INFO threshold"
assert should_log_task_level("DEBUG") is False, "DEBUG should NOT be logged at INFO threshold"
# [/DEF:test_should_log_task_level:Function]
# [DEF:test_configure_logger_task_log_level:Function]
# @PURPOSE: Test that configure_logger updates task_log_level.
# @PRE: LoggingConfig is available.
# @POST: task_log_level is updated correctly.
def test_configure_logger_task_log_level():
"""Test that configure_logger updates task_log_level."""
config = LoggingConfig(
level="DEBUG",
task_log_level="DEBUG",
enable_belief_state=True
)
configure_logger(config)
assert get_task_log_level() == "DEBUG", "task_log_level should be DEBUG"
assert should_log_task_level("DEBUG") is True, "DEBUG should be logged at DEBUG threshold"
# Reset to INFO
config = LoggingConfig(
level="INFO",
task_log_level="INFO",
enable_belief_state=True
)
configure_logger(config)
assert get_task_log_level() == "INFO", "task_log_level should be reset to INFO"
# [/DEF:test_configure_logger_task_log_level:Function]
# [DEF:test_enable_belief_state_flag:Function]
# @PURPOSE: Test that enable_belief_state flag controls belief_scope logging.
# @PRE: LoggingConfig is available. caplog fixture is used.
# @POST: belief_scope logs are controlled by the flag.
def test_enable_belief_state_flag(caplog):
"""Test that enable_belief_state flag controls belief_scope logging."""
# Disable belief state
config = LoggingConfig(
level="DEBUG",
task_log_level="DEBUG",
enable_belief_state=False
)
configure_logger(config)
caplog.set_level("DEBUG")
with belief_scope("DisabledFunction"):
logger.info("Doing something")
log_messages = [record.message for record in caplog.records]
# Entry and Exit should NOT be logged when disabled
assert not any("[DisabledFunction][Entry]" in msg for msg in log_messages), "Entry should not be logged when disabled"
assert not any("[DisabledFunction][Exit]" in msg for msg in log_messages), "Exit should not be logged when disabled"
# Coherence:OK should still be logged (internal tracking)
assert any("[DisabledFunction][Coherence:OK]" in msg for msg in log_messages), "Coherence should still be logged"
# Re-enable for other tests
config = LoggingConfig(
level="DEBUG",
task_log_level="DEBUG",
enable_belief_state=True
)
configure_logger(config)
# [/DEF:test_enable_belief_state_flag:Function]
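The tagging contract these tests assert ([ID][Entry] and [ID][Exit] at DEBUG, [ID][Coherence:OK] on success, [ID][Coherence:Failed] on exception, nothing but [Action] visible at INFO) can be sketched as a plain context manager. This is a minimal assumption-laden sketch, not the `src.core.logger` implementation: it omits the `enable_belief_state` flag, `task_log_level` threshold, and `configure_logger` wiring that the real module evidently has.

```python
import logging
from contextlib import contextmanager

logger = logging.getLogger("belief")

@contextmanager
def belief_scope(scope_id: str):
    # Entry/Exit and Coherence markers go out at DEBUG so they vanish
    # at the default INFO threshold, as the tests above assert.
    logger.debug(f"[{scope_id}][Entry]")
    try:
        yield
    except Exception:
        # On failure: Coherence:Failed is logged, Exit is not.
        logger.debug(f"[{scope_id}][Coherence:Failed]")
        raise
    logger.debug(f"[{scope_id}][Coherence:OK]")
    logger.debug(f"[{scope_id}][Exit]")
```

Because the markers are DEBUG records, a logger left at INFO keeps only the caller's own `[Action]` messages, which is exactly the behavior `test_belief_scope_not_visible_at_info` checks.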


@@ -1,4 +1,3 @@
from src.core.config_models import Environment
from src.core.logger import belief_scope


@@ -0,0 +1,123 @@
import pytest
from fastapi.testclient import TestClient
from unittest.mock import MagicMock
from src.app import app
from src.dependencies import get_config_manager, get_task_manager, get_resource_service, has_permission
client = TestClient(app)
# [DEF:test_dashboards_api:Test]
# @PURPOSE: Verify GET /api/dashboards contract compliance
# @TEST: Valid env_id returns 200 and dashboard list
# @TEST: Invalid env_id returns 404
# @TEST: Search filter works
@pytest.fixture
def mock_deps():
config_manager = MagicMock()
task_manager = MagicMock()
resource_service = MagicMock()
# Mock environment
env = MagicMock()
env.id = "env1"
config_manager.get_environments.return_value = [env]
# Mock tasks
task_manager.get_all_tasks.return_value = []
# Mock dashboards
resource_service.get_dashboards_with_status.return_value = [
{"id": 1, "title": "Sales", "slug": "sales", "git_status": {"branch": "main", "sync_status": "OK"}, "last_task": None},
{"id": 2, "title": "Marketing", "slug": "mkt", "git_status": None, "last_task": {"task_id": "t1", "status": "SUCCESS"}}
]
app.dependency_overrides[get_config_manager] = lambda: config_manager
app.dependency_overrides[get_task_manager] = lambda: task_manager
app.dependency_overrides[get_resource_service] = lambda: resource_service
# Bypass permission check
mock_user = MagicMock()
mock_user.username = "testadmin"
# Override both get_current_user and has_permission
from src.dependencies import get_current_user
app.dependency_overrides[get_current_user] = lambda: mock_user
# We need to override the specific instance returned by has_permission
app.dependency_overrides[has_permission("plugin:migration", "READ")] = lambda: mock_user
yield {
"config": config_manager,
"task": task_manager,
"resource": resource_service
}
app.dependency_overrides.clear()
def test_get_dashboards_success(mock_deps):
response = client.get("/api/dashboards?env_id=env1")
assert response.status_code == 200
data = response.json()
assert "dashboards" in data
assert len(data["dashboards"]) == 2
assert data["dashboards"][0]["title"] == "Sales"
assert data["dashboards"][0]["git_status"]["sync_status"] == "OK"
def test_get_dashboards_not_found(mock_deps):
response = client.get("/api/dashboards?env_id=invalid")
assert response.status_code == 404
def test_get_dashboards_search(mock_deps):
response = client.get("/api/dashboards?env_id=env1&search=Sales")
assert response.status_code == 200
data = response.json()
assert len(data["dashboards"]) == 1
assert data["dashboards"][0]["title"] == "Sales"
# [/DEF:test_dashboards_api:Test]
# [DEF:test_datasets_api:Test]
# @PURPOSE: Verify GET /api/datasets contract compliance
# @TEST: Valid env_id returns 200 and dataset list
# @TEST: Invalid env_id returns 404
# @TEST: Search filter works
# @TEST: Negative - Service failure returns 503
def test_get_datasets_success(mock_deps):
mock_deps["resource"].get_datasets_with_status.return_value = [
{"id": 1, "table_name": "orders", "schema": "public", "database": "db1", "mapped_fields": {"total": 10, "mapped": 5}, "last_task": None}
]
response = client.get("/api/datasets?env_id=env1")
assert response.status_code == 200
data = response.json()
assert "datasets" in data
assert len(data["datasets"]) == 1
assert data["datasets"][0]["table_name"] == "orders"
assert data["datasets"][0]["mapped_fields"]["mapped"] == 5
def test_get_datasets_not_found(mock_deps):
response = client.get("/api/datasets?env_id=invalid")
assert response.status_code == 404
def test_get_datasets_search(mock_deps):
mock_deps["resource"].get_datasets_with_status.return_value = [
{"id": 1, "table_name": "orders", "schema": "public", "database": "db1", "mapped_fields": {"total": 10, "mapped": 5}, "last_task": None},
{"id": 2, "table_name": "users", "schema": "public", "database": "db1", "mapped_fields": {"total": 5, "mapped": 5}, "last_task": None}
]
response = client.get("/api/datasets?env_id=env1&search=orders")
assert response.status_code == 200
data = response.json()
assert len(data["datasets"]) == 1
assert data["datasets"][0]["table_name"] == "orders"
def test_get_datasets_service_failure(mock_deps):
mock_deps["resource"].get_datasets_with_status.side_effect = Exception("Superset down")
response = client.get("/api/datasets?env_id=env1")
assert response.status_code == 503
assert "Failed to fetch datasets" in response.json()["detail"]
# [/DEF:test_datasets_api:Test]


@@ -0,0 +1,47 @@
# [DEF:backend.tests.test_resource_service:Module]
# @TIER: STANDARD
# @PURPOSE: Contract-driven tests for ResourceService
# @RELATION: TESTS -> backend.src.services.resource_service
import pytest
from unittest.mock import MagicMock, patch
from src.services.resource_service import ResourceService
@pytest.mark.asyncio
async def test_get_dashboards_with_status():
# [DEF:test_get_dashboards_with_status:Function]
# @TEST: ResourceService correctly enhances dashboard data
# @PRE: SupersetClient returns raw dashboards
# @POST: Returned dicts contain git_status and last_task
with patch("src.services.resource_service.SupersetClient") as mock_client, \
patch("src.services.resource_service.GitService") as mock_git:
service = ResourceService()
# Mock Superset response
mock_client.return_value.get_dashboards_summary.return_value = [
{"id": 1, "title": "Test Dashboard", "slug": "test"}
]
# Mock Git status
mock_git.return_value.get_repo.return_value = None # No repo
# Mock tasks
mock_task = MagicMock()
mock_task.id = "task-123"
mock_task.status = "RUNNING"
mock_task.params = {"resource_id": "dashboard-1"}
env = MagicMock()
env.id = "prod"
result = await service.get_dashboards_with_status(env, [mock_task])
assert len(result) == 1
assert result[0]["id"] == 1
assert "git_status" in result[0]
assert result[0]["last_task"]["task_id"] == "task-123"
        assert result[0]["last_task"]["status"] == "RUNNING"
    # [/DEF:test_get_dashboards_with_status:Function]
# [/DEF:backend.tests.test_resource_service:Module]


@@ -0,0 +1,374 @@
# [DEF:test_task_logger:Module]
# @SEMANTICS: test, task_logger, task_context, unit_test
# @PURPOSE: Unit tests for TaskLogger and TaskContext.
# @LAYER: Test
# @RELATION: TESTS -> TaskLogger, TaskContext
# @TIER: STANDARD
# [SECTION: IMPORTS]
from unittest.mock import Mock
from src.core.task_manager.task_logger import TaskLogger
from src.core.task_manager.context import TaskContext
# [/SECTION]
# [DEF:TestTaskLogger:Class]
# @PURPOSE: Test suite for TaskLogger.
# @TIER: STANDARD
class TestTaskLogger:
# [DEF:setup_method:Function]
# @PURPOSE: Setup for each test method.
# @PRE: None.
# @POST: Mock add_log_fn created.
def setup_method(self):
"""Create a mock add_log function for testing."""
self.mock_add_log = Mock()
self.logger = TaskLogger(
task_id="test-task-1",
add_log_fn=self.mock_add_log,
source="test_source"
)
# [/DEF:setup_method:Function]
# [DEF:test_init:Function]
# @PURPOSE: Test TaskLogger initialization.
# @PRE: None.
# @POST: Logger instance created with correct attributes.
def test_init(self):
"""Test TaskLogger initialization."""
assert self.logger._task_id == "test-task-1"
assert self.logger._default_source == "test_source"
assert self.logger._add_log == self.mock_add_log
# [/DEF:test_init:Function]
# [DEF:test_with_source:Function]
# @PURPOSE: Test creating a sub-logger with different source.
# @PRE: Logger initialized.
# @POST: New logger created with different source but same task_id.
def test_with_source(self):
"""Test creating a sub-logger with different source."""
sub_logger = self.logger.with_source("new_source")
assert sub_logger._task_id == "test-task-1"
assert sub_logger._default_source == "new_source"
assert sub_logger._add_log == self.mock_add_log
# [/DEF:test_with_source:Function]
# [DEF:test_debug:Function]
# @PURPOSE: Test debug log level.
# @PRE: Logger initialized.
# @POST: add_log_fn called with DEBUG level.
def test_debug(self):
"""Test debug logging."""
self.logger.debug("Debug message")
self.mock_add_log.assert_called_once_with(
task_id="test-task-1",
level="DEBUG",
message="Debug message",
source="test_source",
metadata=None
)
# [/DEF:test_debug:Function]
# [DEF:test_info:Function]
# @PURPOSE: Test info log level.
# @PRE: Logger initialized.
# @POST: add_log_fn called with INFO level.
def test_info(self):
"""Test info logging."""
self.logger.info("Info message")
self.mock_add_log.assert_called_once_with(
task_id="test-task-1",
level="INFO",
message="Info message",
source="test_source",
metadata=None
)
# [/DEF:test_info:Function]
# [DEF:test_warning:Function]
# @PURPOSE: Test warning log level.
# @PRE: Logger initialized.
# @POST: add_log_fn called with WARNING level.
def test_warning(self):
"""Test warning logging."""
self.logger.warning("Warning message")
self.mock_add_log.assert_called_once_with(
task_id="test-task-1",
level="WARNING",
message="Warning message",
source="test_source",
metadata=None
)
# [/DEF:test_warning:Function]
# [DEF:test_error:Function]
# @PURPOSE: Test error log level.
# @PRE: Logger initialized.
# @POST: add_log_fn called with ERROR level.
def test_error(self):
"""Test error logging."""
self.logger.error("Error message")
self.mock_add_log.assert_called_once_with(
task_id="test-task-1",
level="ERROR",
message="Error message",
source="test_source",
metadata=None
)
# [/DEF:test_error:Function]
# [DEF:test_error_with_metadata:Function]
# @PURPOSE: Test error logging with metadata.
# @PRE: Logger initialized.
# @POST: add_log_fn called with ERROR level and metadata.
def test_error_with_metadata(self):
"""Test error logging with metadata."""
metadata = {"error_code": 500, "details": "Connection failed"}
self.logger.error("Error message", metadata=metadata)
self.mock_add_log.assert_called_once_with(
task_id="test-task-1",
level="ERROR",
message="Error message",
source="test_source",
metadata=metadata
)
# [/DEF:test_error_with_metadata:Function]
# [DEF:test_progress:Function]
# @PURPOSE: Test progress logging.
# @PRE: Logger initialized.
# @POST: add_log_fn called with INFO level and progress metadata.
def test_progress(self):
"""Test progress logging."""
self.logger.progress("Processing items", percent=50)
expected_metadata = {"progress": 50}
self.mock_add_log.assert_called_once_with(
task_id="test-task-1",
level="INFO",
message="Processing items",
source="test_source",
metadata=expected_metadata
)
# [/DEF:test_progress:Function]
# [DEF:test_progress_clamping:Function]
# @PURPOSE: Test progress value clamping (0-100).
# @PRE: Logger initialized.
# @POST: Progress values clamped to 0-100 range.
def test_progress_clamping(self):
"""Test progress value clamping."""
# Test below 0
self.logger.progress("Below 0", percent=-10)
call1 = self.mock_add_log.call_args_list[0]
assert call1.kwargs["metadata"]["progress"] == 0
self.mock_add_log.reset_mock()
# Test above 100
self.logger.progress("Above 100", percent=150)
call2 = self.mock_add_log.call_args_list[0]
assert call2.kwargs["metadata"]["progress"] == 100
# [/DEF:test_progress_clamping:Function]
# [DEF:test_source_override:Function]
# @PURPOSE: Test overriding the default source.
# @PRE: Logger initialized.
# @POST: add_log_fn called with overridden source.
def test_source_override(self):
"""Test overriding the default source."""
self.logger.info("Message", source="override_source")
self.mock_add_log.assert_called_once_with(
task_id="test-task-1",
level="INFO",
message="Message",
source="override_source",
metadata=None
)
# [/DEF:test_source_override:Function]
# [DEF:test_sub_logger_source_independence:Function]
# @PURPOSE: Test sub-logger independence from parent.
# @PRE: Logger and sub-logger initialized.
# @POST: Sub-logger has different source, parent unchanged.
def test_sub_logger_source_independence(self):
"""Test sub-logger source independence from parent."""
sub_logger = self.logger.with_source("sub_source")
# Log with parent
self.logger.info("Parent message")
# Log with sub-logger
sub_logger.info("Sub message")
# Verify both calls were made with correct sources
calls = self.mock_add_log.call_args_list
assert len(calls) == 2
assert calls[0].kwargs["source"] == "test_source"
assert calls[1].kwargs["source"] == "sub_source"
# [/DEF:test_sub_logger_source_independence:Function]
# [/DEF:TestTaskLogger:Class]
# [DEF:TestTaskContext:Class]
# @PURPOSE: Test suite for TaskContext.
# @TIER: STANDARD
class TestTaskContext:
# [DEF:setup_method:Function]
# @PURPOSE: Setup for each test method.
# @PRE: None.
# @POST: Mock add_log_fn created.
def setup_method(self):
"""Create a mock add_log function for testing."""
self.mock_add_log = Mock()
self.params = {"param1": "value1", "param2": "value2"}
self.context = TaskContext(
task_id="test-task-2",
add_log_fn=self.mock_add_log,
params=self.params,
default_source="plugin"
)
# [/DEF:setup_method:Function]
# [DEF:test_init:Function]
# @PURPOSE: Test TaskContext initialization.
# @PRE: None.
# @POST: Context instance created with correct attributes.
def test_init(self):
"""Test TaskContext initialization."""
assert self.context._task_id == "test-task-2"
assert self.context._params == self.params
assert isinstance(self.context._logger, TaskLogger)
assert self.context._logger._default_source == "plugin"
# [/DEF:test_init:Function]
# [DEF:test_task_id_property:Function]
# @PURPOSE: Test task_id property.
# @PRE: Context initialized.
# @POST: Returns correct task_id.
def test_task_id_property(self):
"""Test task_id property."""
assert self.context.task_id == "test-task-2"
# [/DEF:test_task_id_property:Function]
# [DEF:test_logger_property:Function]
# @PURPOSE: Test logger property.
# @PRE: Context initialized.
# @POST: Returns TaskLogger instance.
def test_logger_property(self):
"""Test logger property."""
logger = self.context.logger
assert isinstance(logger, TaskLogger)
assert logger._task_id == "test-task-2"
assert logger._default_source == "plugin"
# [/DEF:test_logger_property:Function]
# [DEF:test_params_property:Function]
# @PURPOSE: Test params property.
# @PRE: Context initialized.
# @POST: Returns correct params dict.
def test_params_property(self):
"""Test params property."""
assert self.context.params == self.params
# [/DEF:test_params_property:Function]
# [DEF:test_get_param:Function]
# @PURPOSE: Test getting a specific parameter.
# @PRE: Context initialized with params.
# @POST: Returns parameter value or default.
def test_get_param(self):
"""Test getting a specific parameter."""
assert self.context.get_param("param1") == "value1"
assert self.context.get_param("param2") == "value2"
assert self.context.get_param("nonexistent") is None
assert self.context.get_param("nonexistent", "default") == "default"
# [/DEF:test_get_param:Function]
# [DEF:test_create_sub_context:Function]
# @PURPOSE: Test creating a sub-context with different source.
# @PRE: Context initialized.
# @POST: New context created with different logger source.
def test_create_sub_context(self):
"""Test creating a sub-context with different source."""
sub_context = self.context.create_sub_context("new_source")
assert sub_context._task_id == "test-task-2"
assert sub_context._params == self.params
assert sub_context._logger._default_source == "new_source"
assert sub_context._logger._task_id == "test-task-2"
# [/DEF:test_create_sub_context:Function]
# [DEF:test_context_logger_delegates_to_task_logger:Function]
# @PURPOSE: Test context logger delegates to TaskLogger.
# @PRE: Context initialized.
# @POST: Logger calls are delegated to TaskLogger.
def test_context_logger_delegates_to_task_logger(self):
"""Test context logger delegates to TaskLogger."""
# Call through context
self.context.logger.info("Test message")
# Verify the mock was called
self.mock_add_log.assert_called_once_with(
task_id="test-task-2",
level="INFO",
message="Test message",
source="plugin",
metadata=None
)
# [/DEF:test_context_logger_delegates_to_task_logger:Function]
# [DEF:test_sub_context_with_source:Function]
# @PURPOSE: Test sub-context logger uses new source.
# @PRE: Context initialized.
# @POST: Sub-context logger uses new source.
def test_sub_context_with_source(self):
"""Test sub-context logger uses new source."""
sub_context = self.context.create_sub_context("api_source")
# Log through sub-context
sub_context.logger.info("API message")
# Verify the mock was called with new source
self.mock_add_log.assert_called_once_with(
task_id="test-task-2",
level="INFO",
message="API message",
source="api_source",
metadata=None
)
# [/DEF:test_sub_context_with_source:Function]
# [DEF:test_multiple_sub_contexts:Function]
# @PURPOSE: Test creating multiple sub-contexts.
# @PRE: Context initialized.
# @POST: Each sub-context has independent logger source.
def test_multiple_sub_contexts(self):
"""Test creating multiple sub-contexts."""
sub1 = self.context.create_sub_context("source1")
sub2 = self.context.create_sub_context("source2")
sub3 = self.context.create_sub_context("source3")
assert sub1._logger._default_source == "source1"
assert sub2._logger._default_source == "source2"
assert sub3._logger._default_source == "source3"
# All should have same task_id and params
assert sub1._task_id == "test-task-2"
assert sub2._task_id == "test-task-2"
assert sub3._task_id == "test-task-2"
assert sub1._params == self.params
assert sub2._params == self.params
assert sub3._params == self.params
# [/DEF:test_multiple_sub_contexts:Function]
# [/DEF:TestTaskContext:Class]
# [/DEF:test_task_logger:Module]
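The delegation pattern this suite pins down (every log call forwarded to an injected `add_log_fn` as keyword arguments, source-scoped children via `with_source`/`create_sub_context`, progress clamped to 0-100) can be sketched compactly. Everything here beyond the names the tests themselves use is an assumption; the real `src.core.task_manager` classes surely carry more behavior.

```python
class TaskLogger:
    """Thin logger that forwards every record to an injected callback."""

    def __init__(self, task_id, add_log_fn, source="system"):
        self._task_id = task_id
        self._add_log = add_log_fn
        self._default_source = source

    def with_source(self, source):
        # Child shares task_id and callback, differs only in default source
        return TaskLogger(self._task_id, self._add_log, source)

    def _log(self, level, message, source=None, metadata=None):
        self._add_log(task_id=self._task_id, level=level, message=message,
                      source=source or self._default_source, metadata=metadata)

    def info(self, message, source=None, metadata=None):
        self._log("INFO", message, source, metadata)

    def error(self, message, source=None, metadata=None):
        self._log("ERROR", message, source, metadata)

    def progress(self, message, percent):
        # Clamp to [0, 100], as test_progress_clamping expects
        self._log("INFO", message, metadata={"progress": max(0, min(100, percent))})

class TaskContext:
    """Bundles task_id, params, and a source-scoped TaskLogger."""

    def __init__(self, task_id, add_log_fn, params, default_source="system"):
        self._task_id = task_id
        self._params = params
        self._logger = TaskLogger(task_id, add_log_fn, default_source)

    @property
    def logger(self):
        return self._logger

    def get_param(self, key, default=None):
        return self._params.get(key, default)

    def create_sub_context(self, source):
        return TaskContext(self._task_id, self._logger._add_log, self._params, source)
```

Injecting the sink as a plain callable is what lets the tests above run against a `Mock` and assert on `call_args_list` without touching any real log storage.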
