Compare commits
1 commit: af74841765 ... 019-supers

| Author | SHA1 | Date |
|---|---|---|
|  | 0cf0ef25f1 |  |
**.ai/MODULE_MAP.md** (1250, Normal file)

File diff suppressed because it is too large; partial hunks shown below.
```diff
@@ -3,7 +3,7 @@
 > Compressed view for AI Context. Generated automatically.
 
 - 📦 **generate_semantic_map** (`Module`) `[CRITICAL]`
-- 📝 Scans the codebase to generate a Semantic Map and Compliance Report based on the System Standard.
+- 📝 Scans the codebase to generate a Semantic Map, Module Map, and Compliance Report based on the System Standard.
 - 🏗️ Layer: DevOps/Tooling
 - 🔒 Invariant: All DEF anchors must have matching closing anchors; TIER determines validation strictness.
 - ƒ **__init__** (`Function`) `[TRIVIAL]`
```
```diff
@@ -71,35 +71,50 @@
 - 📝 Generates the token-optimized project map with enhanced Svelte details.
 - ƒ **_write_entity_md** (`Function`) `[CRITICAL]`
 - 📝 Recursive helper to write entity tree to Markdown with tier badges and enhanced details.
+- ƒ **_generate_module_map** (`Function`) `[CRITICAL]`
+- 📝 Generates a module-centric map grouping entities by directory structure.
+- ƒ **_get_module_path** (`Function`)
+- 📝 Extracts the module path from a file path.
+- ƒ **_collect_all_entities** (`Function`)
+- 📝 Flattens entity tree for easier grouping.
 - ƒ **to_dict** (`Function`) `[TRIVIAL]`
 - 📝 Auto-detected function (orphan)
+- 📦 **TransactionCore** (`Module`) `[CRITICAL]`
+- 📝 Core banking transaction processor with ACID guarantees.
+- 🏗️ Layer: Domain (Core)
+- 🔒 Invariant: Negative transfers are strictly forbidden.
+- 🔗 DEPENDS_ON -> `[DEF:Infra:PostgresDB]`
+- 🔗 DEPENDS_ON -> `[DEF:Infra:AuditLog]`
+- ƒ **execute_transfer** (`Function`)
+- 📝 Atomically move funds between accounts with audit trails.
+- 🔗 CALLS -> `atomic_transaction`
 - 📦 **PluginExampleShot** (`Module`)
 - 📝 Reference implementation of a plugin following GRACE standards.
-- 🔗 IMPLEMENTS -> `[DEF:Std:Plugin]`
+- 🏗️ Layer: Domain (Business Logic)
+- 🔒 Invariant: get_schema must return valid JSON Schema.
+- 🔗 INHERITS -> `PluginBase`
 - ƒ **get_schema** (`Function`)
 - 📝 Defines input validation schema.
 - ƒ **execute** (`Function`)
-- 📝 Core plugin logic with structured logging and progress reporting.
+- 📝 Core plugin logic with structured logging and scope isolation.
 - ƒ **id** (`Function`) `[TRIVIAL]`
 - 📝 Auto-detected function (orphan)
-- ƒ **name** (`Function`) `[TRIVIAL]`
-- 📝 Auto-detected function (orphan)
-- ƒ **description** (`Function`) `[TRIVIAL]`
-- 📝 Auto-detected function (orphan)
-- ƒ **version** (`Function`) `[TRIVIAL]`
-- 📝 Auto-detected function (orphan)
 - 📦 **BackendRouteShot** (`Module`)
 - 📝 Reference implementation of a task-based route using GRACE-Poly.
+- 🏗️ Layer: Interface (API)
+- 🔒 Invariant: TaskManager must be available in dependency graph.
 - 🔗 IMPLEMENTS -> `[DEF:Std:API_FastAPI]`
 - ƒ **create_task** (`Function`)
 - 📝 Create and start a new task using TaskManager. Non-blocking.
 - 🔗 CALLS -> `task_manager.create_task`
-- 🧩 **FrontendComponentShot** (`Component`)
-- 📝 Reference implementation of a task-spawning component using
-- 🏗️ Layer: UI
+- 🧩 **FrontendComponentShot** (`Component`) `[CRITICAL]`
+- 📝 Action button to spawn a new task with full UX feedback cycle.
+- 🏗️ Layer: UI (Presentation)
+- 🔒 Invariant: Must prevent double-submission while loading.
 - 📥 Props: plugin_id: any, params: any
 - ⬅️ READS_FROM `lib`
 - ⬅️ READS_FROM `t`
+- ƒ **spawnTask** (`Function`)
 - 📦 **DashboardTypes** (`Module`) `[TRIVIAL]`
 - 📝 TypeScript interfaces for Dashboard entities
 - 🏗️ Layer: Domain
```
```diff
@@ -746,6 +761,7 @@
 - 🏗️ Layer: UI
 - ⚡ Events: cancel, resume
 - ➡️ WRITES_TO `props`
+- ➡️ WRITES_TO `state`
 - ⬅️ READS_FROM `effect`
 - ƒ **handleSubmit** (`Function`)
 - 📝 Validates and dispatches the passwords to resume the task.
```
```diff
@@ -30,6 +30,7 @@ Use these for code generation (Style Transfer).
 * Ref: `.ai/shots/critical_module.py` -> `[DEF:Shot:Critical_Module]`
 
 ## 3. DOMAIN MAP (Modules)
+* **Module Map:** `.ai/MODULE_MAP.md` -> `[DEF:Module_Map]`
 * **Project Map:** `.ai/PROJECT_MAP.md` -> `[DEF:Project_Map]`
 * **Backend Core:** `backend/src/core` -> `[DEF:Module:Backend_Core]`
 * **Backend API:** `backend/src/api` -> `[DEF:Module:Backend_API]`
```
**.dockerignore** (27, Normal file)

```diff
@@ -0,0 +1,27 @@
+.git
+.gitignore
+.pytest_cache
+.ruff_cache
+.vscode
+.ai
+.specify
+.kilocode
+venv
+backend/.venv
+backend/.pytest_cache
+frontend/node_modules
+frontend/.svelte-kit
+frontend/.vite
+frontend/build
+backend/__pycache__
+backend/src/__pycache__
+backend/tests/__pycache__
+**/__pycache__
+*.pyc
+*.pyo
+*.pyd
+*.db
+*.log
+backups
+semantics
+specs
```
**Dockerfile** (36, Normal file)

```diff
@@ -0,0 +1,36 @@
+# Stage 1: Build frontend static assets
+FROM node:20-alpine AS frontend-build
+WORKDIR /app/frontend
+
+COPY frontend/package*.json ./
+RUN npm ci
+
+COPY frontend/ ./
+RUN npm run build
+
+
+# Stage 2: Runtime image for backend + static frontend
+FROM python:3.11-slim AS runtime
+
+ENV PYTHONDONTWRITEBYTECODE=1
+ENV PYTHONUNBUFFERED=1
+ENV BACKEND_PORT=8000
+
+WORKDIR /app
+
+RUN apt-get update && apt-get install -y --no-install-recommends \
+    curl \
+    git \
+    && rm -rf /var/lib/apt/lists/*
+
+COPY backend/requirements.txt /app/backend/requirements.txt
+RUN pip install --no-cache-dir -r /app/backend/requirements.txt
+
+COPY backend/ /app/backend/
+COPY --from=frontend-build /app/frontend/build /app/frontend/build
+
+WORKDIR /app/backend
+
+EXPOSE 8000
+
+CMD ["python", "-m", "uvicorn", "src.app:app", "--host", "0.0.0.0", "--port", "8000"]
```
**README.md** (69)

```diff
@@ -32,7 +32,7 @@
 ## Technology stack
 - **Backend**: Python 3.9+, FastAPI, SQLAlchemy, APScheduler, Pydantic.
 - **Frontend**: Node.js 18+, SvelteKit, Tailwind CSS.
-- **Database**: SQLite (stores metadata, tasks, and access settings).
+- **Database**: PostgreSQL (stores metadata, tasks, logs, and configuration).
 
 ## Project structure
 - `backend/`: server side, API, and plugin logic.
```
```diff
@@ -58,11 +58,15 @@
 - `--skip-install`: skip dependency installation.
 - `--help`: show help.
 
 Environment variables:
 - `BACKEND_PORT`: API port (default 8000).
 - `FRONTEND_PORT`: UI port (default 5173).
+- `POSTGRES_URL`: default base PostgreSQL URL for all subsystems.
+- `DATABASE_URL`: URL of the main database (falls back to `POSTGRES_URL` if unset).
+- `TASKS_DATABASE_URL`: URL of the tasks/logs database (falls back to `DATABASE_URL` if unset).
+- `AUTH_DATABASE_URL`: URL of the auth database (falls back to the PostgreSQL default if unset).
 
 ## Development
 The project follows strict development rules:
 1. **Semantic Code Generation**: the `.ai/standards/semantics.md` protocol is used to ensure code reliability.
 2. **Design by Contract (DbC)**: preconditions and postconditions are defined for key functions.
```
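The fallback chain between these variables matches what `backend/src/core/database.py` implements in this same commit; a minimal sketch of the resolution order (the values shown are the shipped defaults):

```python
import os

# Resolution order implemented in backend/src/core/database.py:
# POSTGRES_URL is the base default; DATABASE_URL falls back to it;
# TASKS_DATABASE_URL falls back to DATABASE_URL.
DEFAULT_POSTGRES_URL = os.getenv(
    "POSTGRES_URL",
    "postgresql+psycopg2://postgres:postgres@localhost:5432/ss_tools",
)
DATABASE_URL = os.getenv("DATABASE_URL", DEFAULT_POSTGRES_URL)
TASKS_DATABASE_URL = os.getenv("TASKS_DATABASE_URL", DATABASE_URL)
```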
````diff
@@ -71,7 +75,54 @@
 ### Useful commands
 - **Backend**: `cd backend && .venv/bin/python3 -m uvicorn src.app:app --reload`
 - **Frontend**: `cd frontend && npm run dev`
 - **Tests**: `cd backend && .venv/bin/pytest`
 
-## Contacts and contributing
-To add new features or fix bugs, please review `docs/plugin_dev.md` and create a corresponding specification in `specs/`.
+## Docker and CI/CD
+### Running locally in Docker (app + PostgreSQL)
+```bash
+docker compose up --build
+```
+
+After startup:
+- UI/API: `http://localhost:8000`
+- PostgreSQL: `localhost:5432` (`postgres/postgres`, DB `ss_tools`)
+
+To stop:
+```bash
+docker compose down
+```
+
+Full cleanup of the DB volume:
+```bash
+docker compose down -v
+```
+
+If `postgres:16-alpine` cannot be pulled from Docker Hub (TLS timeout), use a fallback image:
+```bash
+POSTGRES_IMAGE=mirror.gcr.io/library/postgres:16-alpine docker compose up -d db
+```
+or:
+```bash
+POSTGRES_IMAGE=bitnami/postgresql:latest docker compose up -d db
+```
+If port 5432 is already taken on the host, run Postgres on a different port:
+```bash
+POSTGRES_HOST_PORT=5433 docker compose up -d db
+```
+
+### Migrating legacy data to PostgreSQL
+If you need to move old data out of `tasks.db`/`config.json`:
+```bash
+cd backend
+PYTHONPATH=. .venv/bin/python src/scripts/migrate_sqlite_to_postgres.py --sqlite-path tasks.db
+```
+
+### CI/CD
+Workflow added: `.github/workflows/ci-cd.yml`
+- backend smoke tests
+- frontend build
+- docker build
+- image push to GHCR on `main`/`master`
+
+## Contacts and contributing
+To add new features or fix bugs, please review `docs/plugin_dev.md` and create a corresponding specification in `specs/`.
````
**backend/src/core/auth/config.py**

```diff
@@ -24,7 +24,10 @@ class AuthConfig(BaseSettings):
     REFRESH_TOKEN_EXPIRE_DAYS: int = 7
 
     # Database Settings
-    AUTH_DATABASE_URL: str = Field(default="sqlite:///./backend/auth.db", env="AUTH_DATABASE_URL")
+    AUTH_DATABASE_URL: str = Field(
+        default="postgresql+psycopg2://postgres:postgres@localhost:5432/ss_tools",
+        env="AUTH_DATABASE_URL",
+    )
 
     # ADFS Settings
     ADFS_CLIENT_ID: str = Field(default="", env="ADFS_CLIENT_ID")
```

```diff
@@ -41,4 +44,4 @@ class AuthConfig(BaseSettings):
 auth_config = AuthConfig()
 # [/DEF:auth_config:Variable]
 
 # [/DEF:backend.src.core.auth.config:Module]
```
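Because `AUTH_DATABASE_URL` is a `BaseSettings` field bound to the environment variable of the same name, the new PostgreSQL default can still be overridden per deployment. A minimal sketch, assuming the `src.*` import layout used by the scripts in this commit; the override URL itself is hypothetical:

```python
import os

# Hypothetical override: point auth at a dedicated database before the
# settings object is constructed (BaseSettings reads env at instantiation).
os.environ["AUTH_DATABASE_URL"] = "postgresql+psycopg2://auth:secret@localhost:5432/auth_db"

from src.core.auth.config import AuthConfig

auth_config = AuthConfig()
assert auth_config.AUTH_DATABASE_URL.startswith("postgresql+psycopg2://auth")
```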
**backend/src/core/config_manager.py** (567, Executable file → Normal file)

The file was rewritten in place (`@@ -1,284 +1,283 @@`): persistence moved from direct `config.json` reads and writes (the old `_load_config`/`_save_config_to_disk`) to the database via `AppConfigRecord`, with `config.json` kept only as a one-time migration source. Reconstructed new version:

```python
# [DEF:ConfigManagerModule:Module]
#
# @SEMANTICS: config, manager, persistence, postgresql
# @PURPOSE: Manages application configuration persisted in database with one-time migration from JSON.
# @LAYER: Core
# @RELATION: DEPENDS_ON -> ConfigModels
# @RELATION: DEPENDS_ON -> AppConfigRecord
# @RELATION: CALLS -> logger
#
# @INVARIANT: Configuration must always be valid according to AppConfig model.
# @PUBLIC_API: ConfigManager

# [SECTION: IMPORTS]
import json
import os
from pathlib import Path
from typing import Optional, List

from sqlalchemy.orm import Session

from .config_models import AppConfig, Environment, GlobalSettings, StorageConfig
from .database import SessionLocal
from ..models.config import AppConfigRecord
from .logger import logger, configure_logger, belief_scope
# [/SECTION]


# [DEF:ConfigManager:Class]
# @PURPOSE: A class to handle application configuration persistence and management.
class ConfigManager:
    # [DEF:__init__:Function]
    # @PURPOSE: Initializes the ConfigManager.
    # @PRE: isinstance(config_path, str) and len(config_path) > 0
    # @POST: self.config is an instance of AppConfig
    # @PARAM: config_path (str) - Path to legacy JSON config (used only for initial migration fallback).
    def __init__(self, config_path: str = "config.json"):
        with belief_scope("__init__"):
            assert isinstance(config_path, str) and config_path, "config_path must be a non-empty string"

            logger.info(f"[ConfigManager][Entry] Initializing with legacy path {config_path}")

            self.config_path = Path(config_path)
            self.config: AppConfig = self._load_config()

            configure_logger(self.config.settings.logging)
            assert isinstance(self.config, AppConfig), "self.config must be an instance of AppConfig"

            logger.info("[ConfigManager][Exit] Initialized")
    # [/DEF:__init__:Function]

    # [DEF:_default_config:Function]
    # @PURPOSE: Returns default application configuration.
    # @RETURN: AppConfig - Default configuration.
    def _default_config(self) -> AppConfig:
        return AppConfig(
            environments=[],
            settings=GlobalSettings(storage=StorageConfig()),
        )
    # [/DEF:_default_config:Function]

    # [DEF:_load_from_legacy_file:Function]
    # @PURPOSE: Loads legacy configuration from config.json for migration fallback.
    # @RETURN: AppConfig - Loaded or default configuration.
    def _load_from_legacy_file(self) -> AppConfig:
        with belief_scope("_load_from_legacy_file"):
            if not self.config_path.exists():
                logger.info("[_load_from_legacy_file][Action] Legacy config file not found, using defaults")
                return self._default_config()

            try:
                with open(self.config_path, "r", encoding="utf-8") as f:
                    data = json.load(f)
                logger.info("[_load_from_legacy_file][Coherence:OK] Legacy configuration loaded")
                return AppConfig(**data)
            except Exception as e:
                logger.error(f"[_load_from_legacy_file][Coherence:Failed] Error loading legacy config: {e}")
                return self._default_config()
    # [/DEF:_load_from_legacy_file:Function]

    # [DEF:_get_record:Function]
    # @PURPOSE: Loads config record from DB.
    # @PARAM: session (Session) - DB session.
    # @RETURN: Optional[AppConfigRecord] - Existing record or None.
    def _get_record(self, session: Session) -> Optional[AppConfigRecord]:
        return session.query(AppConfigRecord).filter(AppConfigRecord.id == "global").first()
    # [/DEF:_get_record:Function]

    # [DEF:_load_config:Function]
    # @PURPOSE: Loads the configuration from DB or performs one-time migration from JSON file.
    # @PRE: DB session factory is available.
    # @POST: isinstance(return, AppConfig)
    # @RETURN: AppConfig - Loaded configuration.
    def _load_config(self) -> AppConfig:
        with belief_scope("_load_config"):
            session: Session = SessionLocal()
            try:
                record = self._get_record(session)
                if record and record.payload:
                    logger.info("[_load_config][Coherence:OK] Configuration loaded from database")
                    return AppConfig(**record.payload)

                logger.info("[_load_config][Action] No database config found, migrating legacy config")
                config = self._load_from_legacy_file()
                self._save_config_to_db(config, session=session)
                return config
            except Exception as e:
                logger.error(f"[_load_config][Coherence:Failed] Error loading config from DB: {e}")
                return self._default_config()
            finally:
                session.close()
    # [/DEF:_load_config:Function]

    # [DEF:_save_config_to_db:Function]
    # @PURPOSE: Saves the provided configuration object to DB.
    # @PRE: isinstance(config, AppConfig)
    # @POST: Configuration saved to database.
    # @PARAM: config (AppConfig) - The configuration to save.
    # @PARAM: session (Optional[Session]) - Existing DB session for transactional reuse.
    def _save_config_to_db(self, config: AppConfig, session: Optional[Session] = None):
        with belief_scope("_save_config_to_db"):
            assert isinstance(config, AppConfig), "config must be an instance of AppConfig"

            owns_session = session is None
            db = session or SessionLocal()
            try:
                record = self._get_record(db)
                payload = config.model_dump()
                if record is None:
                    record = AppConfigRecord(id="global", payload=payload)
                    db.add(record)
                else:
                    record.payload = payload
                db.commit()
                logger.info("[_save_config_to_db][Action] Configuration saved to database")
            except Exception as e:
                db.rollback()
                logger.error(f"[_save_config_to_db][Coherence:Failed] Failed to save: {e}")
                raise
            finally:
                if owns_session:
                    db.close()
    # [/DEF:_save_config_to_db:Function]

    # [DEF:save:Function]
    # @PURPOSE: Saves the current configuration state to DB.
    # @PRE: self.config is set.
    # @POST: self._save_config_to_db called.
    def save(self):
        with belief_scope("save"):
            self._save_config_to_db(self.config)
    # [/DEF:save:Function]

    # [DEF:get_config:Function]
    # @PURPOSE: Returns the current configuration.
    # @RETURN: AppConfig - The current configuration.
    def get_config(self) -> AppConfig:
        with belief_scope("get_config"):
            return self.config
    # [/DEF:get_config:Function]

    # [DEF:update_global_settings:Function]
    # @PURPOSE: Updates the global settings and persists the change.
    # @PRE: isinstance(settings, GlobalSettings)
    # @POST: self.config.settings updated and saved.
    # @PARAM: settings (GlobalSettings) - The new global settings.
    def update_global_settings(self, settings: GlobalSettings):
        with belief_scope("update_global_settings"):
            logger.info("[update_global_settings][Entry] Updating settings")

            assert isinstance(settings, GlobalSettings), "settings must be an instance of GlobalSettings"
            self.config.settings = settings
            self.save()
            configure_logger(settings.logging)
            logger.info("[update_global_settings][Exit] Settings updated")
    # [/DEF:update_global_settings:Function]

    # [DEF:validate_path:Function]
    # @PURPOSE: Validates if a path exists and is writable.
    # @PARAM: path (str) - The path to validate.
    # @RETURN: tuple (bool, str) - (is_valid, message)
    def validate_path(self, path: str) -> tuple[bool, str]:
        with belief_scope("validate_path"):
            p = os.path.abspath(path)
            if not os.path.exists(p):
                try:
                    os.makedirs(p, exist_ok=True)
                except Exception as e:
                    return False, f"Path does not exist and could not be created: {e}"

            if not os.access(p, os.W_OK):
                return False, "Path is not writable"

            return True, "Path is valid and writable"
    # [/DEF:validate_path:Function]

    # [DEF:get_environments:Function]
    # @PURPOSE: Returns the list of configured environments.
    # @RETURN: List[Environment] - List of environments.
    def get_environments(self) -> List[Environment]:
        with belief_scope("get_environments"):
            return self.config.environments
    # [/DEF:get_environments:Function]

    # [DEF:has_environments:Function]
    # @PURPOSE: Checks if at least one environment is configured.
    # @RETURN: bool - True if at least one environment exists.
    def has_environments(self) -> bool:
        with belief_scope("has_environments"):
            return len(self.config.environments) > 0
    # [/DEF:has_environments:Function]

    # [DEF:get_environment:Function]
    # @PURPOSE: Returns a single environment by ID.
    # @PARAM: env_id (str) - The ID of the environment to retrieve.
    # @RETURN: Optional[Environment] - The environment with the given ID, or None.
    def get_environment(self, env_id: str) -> Optional[Environment]:
        with belief_scope("get_environment"):
            for env in self.config.environments:
                if env.id == env_id:
                    return env
            return None
    # [/DEF:get_environment:Function]

    # [DEF:add_environment:Function]
    # @PURPOSE: Adds a new environment to the configuration.
    # @PARAM: env (Environment) - The environment to add.
    def add_environment(self, env: Environment):
        with belief_scope("add_environment"):
            logger.info(f"[add_environment][Entry] Adding environment {env.id}")
            assert isinstance(env, Environment), "env must be an instance of Environment"

            self.config.environments = [e for e in self.config.environments if e.id != env.id]
            self.config.environments.append(env)
            self.save()
            logger.info("[add_environment][Exit] Environment added")
    # [/DEF:add_environment:Function]

    # [DEF:update_environment:Function]
    # @PURPOSE: Updates an existing environment.
    # @PARAM: env_id (str) - The ID of the environment to update.
    # @PARAM: updated_env (Environment) - The updated environment data.
    # @RETURN: bool - True if updated, False otherwise.
    def update_environment(self, env_id: str, updated_env: Environment) -> bool:
        with belief_scope("update_environment"):
            logger.info(f"[update_environment][Entry] Updating {env_id}")
            assert env_id and isinstance(env_id, str), "env_id must be a non-empty string"
            assert isinstance(updated_env, Environment), "updated_env must be an instance of Environment"

            for i, env in enumerate(self.config.environments):
                if env.id == env_id:
                    if updated_env.password == "********":
                        updated_env.password = env.password

                    self.config.environments[i] = updated_env
                    self.save()
                    logger.info(f"[update_environment][Coherence:OK] Updated {env_id}")
                    return True

            logger.warning(f"[update_environment][Coherence:Failed] Environment {env_id} not found")
            return False
    # [/DEF:update_environment:Function]

    # [DEF:delete_environment:Function]
    # @PURPOSE: Deletes an environment by ID.
    # @PARAM: env_id (str) - The ID of the environment to delete.
    def delete_environment(self, env_id: str):
        with belief_scope("delete_environment"):
            logger.info(f"[delete_environment][Entry] Deleting {env_id}")
            assert env_id and isinstance(env_id, str), "env_id must be a non-empty string"

            original_count = len(self.config.environments)
            self.config.environments = [e for e in self.config.environments if e.id != env_id]

            if len(self.config.environments) < original_count:
                self.save()
                logger.info(f"[delete_environment][Action] Deleted {env_id}")
            else:
                logger.warning(f"[delete_environment][Coherence:Failed] Environment {env_id} not found")
    # [/DEF:delete_environment:Function]


# [/DEF:ConfigManager:Class]
# [/DEF:ConfigManagerModule:Module]
```
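A short usage sketch of the rewritten manager, assuming a reachable database and the `src.*` import layout used by the scripts in this commit: the first construction after the upgrade migrates `config.json` into `app_configurations`, and later runs read only from the database.

```python
from src.core.config_manager import ConfigManager

# First run after the upgrade: _load_config finds no 'global' record,
# loads the legacy config.json, and persists it to the database.
manager = ConfigManager(config_path="config.json")

config = manager.get_config()
print(config.settings.logging.level)

# Mutations go through save(), which upserts the single 'global' row
# instead of rewriting config.json.
settings = config.settings
settings.logging.level = "DEBUG"
manager.update_global_settings(settings)
```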
**backend/src/core/config_models.py**

```diff
@@ -3,7 +3,7 @@
 # @SEMANTICS: config, models, pydantic
 # @PURPOSE: Defines the data models for application configuration using Pydantic.
 # @LAYER: Core
-# @RELATION: READS_FROM -> config.json
+# @RELATION: READS_FROM -> app_configurations (database)
 # @RELATION: USED_BY -> ConfigManager
 
 from pydantic import BaseModel, Field
```

```diff
@@ -33,10 +33,10 @@ class Environment(BaseModel):
 
 # [DEF:LoggingConfig:DataClass]
 # @PURPOSE: Defines the configuration for the application's logging system.
 class LoggingConfig(BaseModel):
     level: str = "INFO"
     task_log_level: str = "INFO"  # Minimum level for task-specific logs (DEBUG, INFO, WARNING, ERROR)
-    file_path: Optional[str] = "logs/app.log"
+    file_path: Optional[str] = None
     max_bytes: int = 10 * 1024 * 1024
     backup_count: int = 5
     enable_belief_state: bool = True
```
@@ -1,7 +1,7 @@
|
|||||||
# [DEF:backend.src.core.database:Module]
|
# [DEF:backend.src.core.database:Module]
|
||||||
#
|
#
|
||||||
# @SEMANTICS: database, sqlite, sqlalchemy, session, persistence
|
# @SEMANTICS: database, postgresql, sqlalchemy, session, persistence
|
||||||
# @PURPOSE: Configures the SQLite database connection and session management.
|
# @PURPOSE: Configures database connection and session management (PostgreSQL-first).
|
||||||
# @LAYER: Core
|
# @LAYER: Core
|
||||||
# @RELATION: DEPENDS_ON -> sqlalchemy
|
# @RELATION: DEPENDS_ON -> sqlalchemy
|
||||||
# @RELATION: USES -> backend.src.models.mapping
|
# @RELATION: USES -> backend.src.models.mapping
|
||||||
@@ -14,6 +14,9 @@ from sqlalchemy import create_engine
|
|||||||
from sqlalchemy.orm import sessionmaker
|
from sqlalchemy.orm import sessionmaker
|
||||||
from ..models.mapping import Base
|
from ..models.mapping import Base
|
||||||
# Import models to ensure they're registered with Base
|
# Import models to ensure they're registered with Base
|
||||||
|
from ..models import task as _task_models # noqa: F401
|
||||||
|
from ..models import auth as _auth_models # noqa: F401
|
||||||
|
from ..models import config as _config_models # noqa: F401
|
||||||
from .logger import belief_scope
|
from .logger import belief_scope
|
||||||
from .auth.config import auth_config
|
from .auth.config import auth_config
|
||||||
import os
|
import os
|
||||||
@@ -21,44 +24,50 @@ from pathlib import Path
|
|||||||
# [/SECTION]
|
# [/SECTION]
|
||||||
|
|
||||||
# [DEF:BASE_DIR:Variable]
|
# [DEF:BASE_DIR:Variable]
|
||||||
# @PURPOSE: Base directory for the backend (where .db files should reside).
|
# @PURPOSE: Base directory for the backend.
|
||||||
BASE_DIR = Path(__file__).resolve().parent.parent.parent
|
BASE_DIR = Path(__file__).resolve().parent.parent.parent
|
||||||
# [/DEF:BASE_DIR:Variable]
|
# [/DEF:BASE_DIR:Variable]
|
||||||
|
|
||||||
# [DEF:DATABASE_URL:Constant]
|
# [DEF:DATABASE_URL:Constant]
|
||||||
# @PURPOSE: URL for the main mappings database.
|
# @PURPOSE: URL for the main application database.
|
||||||
DATABASE_URL = os.getenv("DATABASE_URL", f"sqlite:///{BASE_DIR}/mappings.db")
|
DEFAULT_POSTGRES_URL = os.getenv(
|
||||||
|
"POSTGRES_URL",
|
||||||
|
"postgresql+psycopg2://postgres:postgres@localhost:5432/ss_tools",
|
||||||
|
)
|
||||||
|
DATABASE_URL = os.getenv("DATABASE_URL", DEFAULT_POSTGRES_URL)
|
||||||
# [/DEF:DATABASE_URL:Constant]
|
# [/DEF:DATABASE_URL:Constant]
|
||||||
|
|
||||||
# [DEF:TASKS_DATABASE_URL:Constant]
|
# [DEF:TASKS_DATABASE_URL:Constant]
|
||||||
# @PURPOSE: URL for the tasks execution database.
|
# @PURPOSE: URL for the tasks execution database.
|
||||||
TASKS_DATABASE_URL = os.getenv("TASKS_DATABASE_URL", f"sqlite:///{BASE_DIR}/tasks.db")
|
# Defaults to DATABASE_URL to keep task logs in the same PostgreSQL instance.
|
||||||
|
TASKS_DATABASE_URL = os.getenv("TASKS_DATABASE_URL", DATABASE_URL)
|
||||||
# [/DEF:TASKS_DATABASE_URL:Constant]
|
# [/DEF:TASKS_DATABASE_URL:Constant]
|
||||||
|
|
||||||
# [DEF:AUTH_DATABASE_URL:Constant]
|
# [DEF:AUTH_DATABASE_URL:Constant]
|
||||||
# @PURPOSE: URL for the authentication database.
|
# @PURPOSE: URL for the authentication database.
|
||||||
AUTH_DATABASE_URL = os.getenv("AUTH_DATABASE_URL", auth_config.AUTH_DATABASE_URL)
|
AUTH_DATABASE_URL = os.getenv("AUTH_DATABASE_URL", auth_config.AUTH_DATABASE_URL)
|
||||||
# If it's a relative sqlite path starting with ./backend/, fix it to be absolute or relative to BASE_DIR
|
|
||||||
if AUTH_DATABASE_URL.startswith("sqlite:///./backend/"):
|
|
||||||
AUTH_DATABASE_URL = AUTH_DATABASE_URL.replace("sqlite:///./backend/", f"sqlite:///{BASE_DIR}/")
|
|
||||||
elif AUTH_DATABASE_URL.startswith("sqlite:///./") and not AUTH_DATABASE_URL.startswith("sqlite:///./backend/"):
|
|
||||||
# If it's just ./ but we are in backend, it's fine, but let's make it absolute for robustness
|
|
||||||
AUTH_DATABASE_URL = AUTH_DATABASE_URL.replace("sqlite:///./", f"sqlite:///{BASE_DIR}/")
|
|
||||||
# [/DEF:AUTH_DATABASE_URL:Constant]
|
# [/DEF:AUTH_DATABASE_URL:Constant]
|
||||||
|
|
||||||
# [DEF:engine:Variable]
|
# [DEF:engine:Variable]
|
||||||
|
def _build_engine(db_url: str):
|
||||||
|
with belief_scope("_build_engine"):
|
||||||
|
if db_url.startswith("sqlite"):
|
||||||
|
return create_engine(db_url, connect_args={"check_same_thread": False})
|
||||||
|
return create_engine(db_url, pool_pre_ping=True)
|
||||||
|
|
||||||
|
|
||||||
# @PURPOSE: SQLAlchemy engine for mappings database.
|
# @PURPOSE: SQLAlchemy engine for mappings database.
|
||||||
engine = create_engine(DATABASE_URL, connect_args={"check_same_thread": False})
|
engine = _build_engine(DATABASE_URL)
|
||||||
# [/DEF:engine:Variable]
|
# [/DEF:engine:Variable]
|
||||||
|
|
||||||
# [DEF:tasks_engine:Variable]
|
# [DEF:tasks_engine:Variable]
|
||||||
# @PURPOSE: SQLAlchemy engine for tasks database.
|
# @PURPOSE: SQLAlchemy engine for tasks database.
|
||||||
tasks_engine = create_engine(TASKS_DATABASE_URL, connect_args={"check_same_thread": False})
|
tasks_engine = _build_engine(TASKS_DATABASE_URL)
|
||||||
# [/DEF:tasks_engine:Variable]
|
# [/DEF:tasks_engine:Variable]
|
||||||
|
|
||||||
# [DEF:auth_engine:Variable]
|
# [DEF:auth_engine:Variable]
|
||||||
# @PURPOSE: SQLAlchemy engine for authentication database.
|
# @PURPOSE: SQLAlchemy engine for authentication database.
|
||||||
auth_engine = create_engine(AUTH_DATABASE_URL, connect_args={"check_same_thread": False})
|
auth_engine = _build_engine(AUTH_DATABASE_URL)
|
||||||
# [/DEF:auth_engine:Variable]
|
# [/DEF:auth_engine:Variable]
|
||||||
|
|
||||||
# [DEF:SessionLocal:Class]
|
# [DEF:SessionLocal:Class]
|
||||||
|
|||||||
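The `_build_engine` helper keeps SQLite URLs working (with `check_same_thread=False`) while giving PostgreSQL engines `pool_pre_ping=True`, so stale pooled connections are revalidated before use. A minimal sketch of the session-per-unit-of-work pattern that `ConfigManager` follows against these engines, assuming the `src.*` import layout:

```python
from sqlalchemy import text

from src.core.database import SessionLocal

# Open a short-lived session, use it, commit or roll back, always close:
# the same pattern ConfigManager._load_config/_save_config_to_db use.
session = SessionLocal()
try:
    session.execute(text("SELECT 1"))
    session.commit()
except Exception:
    session.rollback()
    raise
finally:
    session.close()
```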
**backend/src/app.py**

```diff
@@ -20,14 +20,14 @@ from .core.auth.jwt import decode_token
 from .core.auth.repository import AuthRepository
 from .models.auth import User
 
 # Initialize singletons
 # Use absolute path relative to this file to ensure plugins are found regardless of CWD
 project_root = Path(__file__).parent.parent.parent
 config_path = project_root / "config.json"
-config_manager = ConfigManager(config_path=str(config_path))
-# Initialize database before any other services that might use it
-init_db()
+# Initialize database before services that use persisted configuration.
+init_db()
+config_manager = ConfigManager(config_path=str(config_path))
 
 # [DEF:get_config_manager:Function]
 # @PURPOSE: Dependency injector for ConfigManager.
```
**backend/src/models/config.py** (26, Normal file)

```diff
@@ -0,0 +1,26 @@
+# [DEF:backend.src.models.config:Module]
+#
+# @TIER: STANDARD
+# @SEMANTICS: database, config, settings, sqlalchemy
+# @PURPOSE: Defines database schema for persisted application configuration.
+# @LAYER: Domain
+# @RELATION: DEPENDS_ON -> sqlalchemy
+
+from sqlalchemy import Column, String, DateTime, JSON
+from sqlalchemy.sql import func
+
+from .mapping import Base
+
+
+# [DEF:AppConfigRecord:Class]
+# @PURPOSE: Stores the single source of truth for application configuration.
+class AppConfigRecord(Base):
+    __tablename__ = "app_configurations"
+
+    id = Column(String, primary_key=True)
+    payload = Column(JSON, nullable=False)
+    updated_at = Column(DateTime(timezone=True), server_default=func.now(), onupdate=func.now())
+
+
+# [/DEF:AppConfigRecord:Class]
+# [/DEF:backend.src.models.config:Module]
```
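The JSON `payload` column round-trips the entire pydantic `AppConfig`; a minimal sketch of the write/read pair that `ConfigManager._save_config_to_db` and `_load_config` rely on, again assuming the `src.*` import layout:

```python
from src.core.config_models import AppConfig, GlobalSettings, StorageConfig
from src.models.config import AppConfigRecord

config = AppConfig(environments=[], settings=GlobalSettings(storage=StorageConfig()))

# Write: the whole pydantic model is serialized into the JSON payload column.
record = AppConfigRecord(id="global", payload=config.model_dump())

# Read: the stored JSON re-validates back into a typed AppConfig.
restored = AppConfig(**record.payload)
assert restored == config
```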
```diff
@@ -22,6 +22,8 @@ class FileCategory(str, Enum):
 # @PURPOSE: Configuration model for the storage system, defining paths and naming patterns.
 class StorageConfig(BaseModel):
     root_path: str = Field(default="backups", description="Absolute path to the storage root directory.")
+    backup_path: str = Field(default="backups", description="Subpath for backups.")
+    repo_path: str = Field(default="repositorys", description="Subpath for repositories.")
     backup_structure_pattern: str = Field(default="{category}/", description="Pattern for backup directory structure.")
     repo_structure_pattern: str = Field(default="{category}/", description="Pattern for repository directory structure.")
     filename_pattern: str = Field(default="{name}_{timestamp}", description="Pattern for filenames.")
```
350
backend/src/scripts/migrate_sqlite_to_postgres.py
Normal file
350
backend/src/scripts/migrate_sqlite_to_postgres.py
Normal file
@@ -0,0 +1,350 @@
|
|||||||
|
# [DEF:backend.src.scripts.migrate_sqlite_to_postgres:Module]
|
||||||
|
#
|
||||||
|
# @SEMANTICS: migration, sqlite, postgresql, config, task_logs, task_records
|
||||||
|
# @PURPOSE: Migrates legacy config and task history from SQLite/file storage to PostgreSQL.
|
||||||
|
# @LAYER: Scripts
|
||||||
|
# @RELATION: READS_FROM -> backend/tasks.db
|
||||||
|
# @RELATION: READS_FROM -> backend/config.json
|
||||||
|
# @RELATION: WRITES_TO -> postgresql.task_records
|
||||||
|
# @RELATION: WRITES_TO -> postgresql.task_logs
|
||||||
|
# @RELATION: WRITES_TO -> postgresql.app_configurations
|
||||||
|
#
|
||||||
|
# @INVARIANT: Script is idempotent for task_records and app_configurations.
|
||||||
|
|
||||||
|
# [SECTION: IMPORTS]
|
||||||
|
import argparse
|
||||||
|
import json
|
||||||
|
import os
|
||||||
|
import sqlite3
|
||||||
|
from pathlib import Path
|
||||||
|
from typing import Any, Dict, Iterable, Optional
|
||||||
|
|
||||||
|
from sqlalchemy import create_engine, text
|
||||||
|
from sqlalchemy.exc import SQLAlchemyError
|
||||||
|
|
||||||
|
from src.core.logger import belief_scope, logger
|
||||||
|
# [/SECTION]
|
||||||
|
|
||||||
|
|
||||||
|
# [DEF:Constants:Section]
|
||||||
|
DEFAULT_TARGET_URL = os.getenv(
|
||||||
|
"DATABASE_URL",
|
||||||
|
os.getenv("POSTGRES_URL", "postgresql+psycopg2://postgres:postgres@localhost:5432/ss_tools"),
|
||||||
|
)
|
||||||
|
# [/DEF:Constants:Section]
|
||||||
|
|
||||||
|
|
||||||
|
# [DEF:_json_load_if_needed:Function]
|
||||||
|
# @PURPOSE: Parses JSON-like values from SQLite TEXT/JSON columns to Python objects.
|
||||||
|
def _json_load_if_needed(value: Any) -> Any:
|
||||||
|
with belief_scope("_json_load_if_needed"):
|
||||||
|
if value is None:
|
||||||
|
return None
|
||||||
|
if isinstance(value, (dict, list)):
|
||||||
|
return value
|
||||||
|
if isinstance(value, str):
|
||||||
|
raw = value.strip()
|
||||||
|
if not raw:
|
||||||
|
return None
|
||||||
|
if raw[0] in "{[":
|
||||||
|
try:
|
||||||
|
return json.loads(raw)
|
||||||
|
except json.JSONDecodeError:
|
||||||
|
return value
|
||||||
|
return value
|
||||||
|
|
||||||
|
|
||||||
|
# [DEF:_find_legacy_config_path:Function]
|
||||||
|
# @PURPOSE: Resolves the existing legacy config.json path from candidates.
|
||||||
|
def _find_legacy_config_path(explicit_path: Optional[str]) -> Optional[Path]:
|
||||||
|
with belief_scope("_find_legacy_config_path"):
|
||||||
|
if explicit_path:
|
||||||
|
p = Path(explicit_path)
|
||||||
|
return p if p.exists() else None
|
||||||
|
|
||||||
|
candidates = [
|
||||||
|
Path("backend/config.json"),
|
||||||
|
Path("config.json"),
|
||||||
|
]
|
||||||
|
for candidate in candidates:
|
||||||
|
if candidate.exists():
|
||||||
|
return candidate
|
||||||
|
return None
|
||||||
|
|
||||||
|
|
||||||
|
# [DEF:_connect_sqlite:Function]
|
||||||
|
# @PURPOSE: Opens a SQLite connection with row factory.
|
||||||
|
def _connect_sqlite(path: Path) -> sqlite3.Connection:
|
||||||
|
with belief_scope("_connect_sqlite"):
|
||||||
|
conn = sqlite3.connect(str(path))
|
||||||
|
conn.row_factory = sqlite3.Row
|
||||||
|
return conn
|
||||||
|
|
||||||
|
|
||||||
|
# [DEF:_ensure_target_schema:Function]
|
||||||
|
# @PURPOSE: Ensures required PostgreSQL tables exist before migration.
|
||||||
|
def _ensure_target_schema(engine) -> None:
|
||||||
|
with belief_scope("_ensure_target_schema"):
|
||||||
|
stmts: Iterable[str] = (
|
||||||
|
"""
|
||||||
|
CREATE TABLE IF NOT EXISTS app_configurations (
|
||||||
|
id TEXT PRIMARY KEY,
|
||||||
|
payload JSONB NOT NULL,
|
||||||
|
updated_at TIMESTAMPTZ DEFAULT NOW()
|
||||||
|
)
|
||||||
|
""",
|
||||||
|
"""
|
||||||
|
CREATE TABLE IF NOT EXISTS task_records (
|
||||||
|
id TEXT PRIMARY KEY,
|
||||||
|
type TEXT NOT NULL,
|
||||||
|
status TEXT NOT NULL,
|
||||||
|
environment_id TEXT NULL,
|
||||||
|
started_at TIMESTAMPTZ NULL,
|
||||||
|
finished_at TIMESTAMPTZ NULL,
|
||||||
|
logs JSONB NULL,
|
||||||
|
error TEXT NULL,
|
||||||
|
result JSONB NULL,
|
||||||
|
created_at TIMESTAMPTZ DEFAULT NOW(),
|
||||||
|
params JSONB NULL
|
||||||
|
)
|
||||||
|
""",
|
||||||
|
"""
|
||||||
|
CREATE TABLE IF NOT EXISTS task_logs (
|
||||||
|
id INTEGER PRIMARY KEY,
|
||||||
|
task_id TEXT NOT NULL,
|
||||||
|
timestamp TIMESTAMPTZ NOT NULL,
|
||||||
|
level VARCHAR(16) NOT NULL,
|
||||||
|
source VARCHAR(64) NOT NULL DEFAULT 'system',
|
||||||
|
message TEXT NOT NULL,
|
||||||
|
metadata_json TEXT NULL,
|
||||||
|
CONSTRAINT fk_task_logs_task
|
||||||
|
FOREIGN KEY(task_id)
|
||||||
|
REFERENCES task_records(id)
|
||||||
|
ON DELETE CASCADE
|
||||||
|
)
|
||||||
|
""",
|
||||||
|
"CREATE INDEX IF NOT EXISTS ix_task_logs_task_timestamp ON task_logs (task_id, timestamp)",
|
||||||
|
"CREATE INDEX IF NOT EXISTS ix_task_logs_task_level ON task_logs (task_id, level)",
|
||||||
|
"CREATE INDEX IF NOT EXISTS ix_task_logs_task_source ON task_logs (task_id, source)",
|
||||||
|
"""
|
||||||
|
DO $$
|
||||||
|
BEGIN
|
||||||
|
IF EXISTS (
|
||||||
|
SELECT 1 FROM pg_class WHERE relkind = 'S' AND relname = 'task_logs_id_seq'
|
||||||
|
) THEN
|
||||||
|
PERFORM 1;
|
||||||
|
ELSE
|
||||||
|
CREATE SEQUENCE task_logs_id_seq OWNED BY task_logs.id;
|
||||||
|
END IF;
|
||||||
|
END $$;
|
||||||
|
""",
|
||||||
|
"ALTER TABLE task_logs ALTER COLUMN id SET DEFAULT nextval('task_logs_id_seq')",
|
||||||
|
)
|
||||||
|
with engine.begin() as conn:
|
||||||
|
for stmt in stmts:
|
||||||
|
conn.execute(text(stmt))
|
||||||
|
|
||||||
|
|
||||||
|
# [DEF:_migrate_config:Function]
|
||||||
|
# @PURPOSE: Migrates legacy config.json into app_configurations(global).
|
||||||
|
def _migrate_config(engine, legacy_config_path: Optional[Path]) -> int:
|
||||||
|
with belief_scope("_migrate_config"):
|
||||||
|
if legacy_config_path is None:
|
||||||
|
logger.info("[_migrate_config][Action] No legacy config.json found, skipping")
|
||||||
|
return 0
|
||||||
|
|
||||||
|
payload = json.loads(legacy_config_path.read_text(encoding="utf-8"))
|
||||||
|
with engine.begin() as conn:
|
||||||
|
conn.execute(
|
||||||
|
text(
|
||||||
|
"""
|
||||||
|
INSERT INTO app_configurations (id, payload, updated_at)
|
||||||
|
VALUES ('global', CAST(:payload AS JSONB), NOW())
|
||||||
|
ON CONFLICT (id)
|
||||||
|
DO UPDATE SET payload = EXCLUDED.payload, updated_at = NOW()
|
||||||
|
"""
|
||||||
|
),
|
||||||
|
{"payload": json.dumps(payload, ensure_ascii=True)},
|
||||||
|
)
|
||||||
|
logger.info("[_migrate_config][Coherence:OK] Config migrated from %s", legacy_config_path)
|
||||||
|
return 1


# [DEF:_migrate_tasks_and_logs:Function]
# @PURPOSE: Migrates task_records and task_logs from SQLite into PostgreSQL.
def _migrate_tasks_and_logs(engine, sqlite_conn: sqlite3.Connection) -> Dict[str, int]:
    with belief_scope("_migrate_tasks_and_logs"):
        stats = {"task_records_total": 0, "task_records_inserted": 0, "task_logs_total": 0, "task_logs_inserted": 0}

        rows = sqlite_conn.execute(
            """
            SELECT id, type, status, environment_id, started_at, finished_at, logs, error, result, created_at, params
            FROM task_records
            ORDER BY created_at ASC
            """
        ).fetchall()
        stats["task_records_total"] = len(rows)

        with engine.begin() as conn:
            existing_env_ids = {
                row[0]
                for row in conn.execute(text("SELECT id FROM environments")).fetchall()
            }
            for row in rows:
                params_obj = _json_load_if_needed(row["params"])
                result_obj = _json_load_if_needed(row["result"])
                logs_obj = _json_load_if_needed(row["logs"])
                environment_id = row["environment_id"]
                if environment_id and environment_id not in existing_env_ids:
                    # Legacy task may reference environments that were not migrated; keep task row and drop FK value.
                    environment_id = None

                res = conn.execute(
                    text(
                        """
                        INSERT INTO task_records (
                            id, type, status, environment_id, started_at, finished_at,
                            logs, error, result, created_at, params
                        ) VALUES (
                            :id, :type, :status, :environment_id, :started_at, :finished_at,
                            CAST(:logs AS JSONB), :error, CAST(:result AS JSONB), :created_at, CAST(:params AS JSONB)
                        )
                        ON CONFLICT (id) DO NOTHING
                        """
                    ),
                    {
                        "id": row["id"],
                        "type": row["type"],
                        "status": row["status"],
                        "environment_id": environment_id,
                        "started_at": row["started_at"],
                        "finished_at": row["finished_at"],
                        "logs": json.dumps(logs_obj, ensure_ascii=True) if logs_obj is not None else None,
                        "error": row["error"],
                        "result": json.dumps(result_obj, ensure_ascii=True) if result_obj is not None else None,
                        "created_at": row["created_at"],
                        "params": json.dumps(params_obj, ensure_ascii=True) if params_obj is not None else None,
                    },
                )
                if res.rowcount and res.rowcount > 0:
                    stats["task_records_inserted"] += int(res.rowcount)

        log_rows = sqlite_conn.execute(
            """
            SELECT id, task_id, timestamp, level, source, message, metadata_json
            FROM task_logs
            ORDER BY id ASC
            """
        ).fetchall()
        stats["task_logs_total"] = len(log_rows)

        with engine.begin() as conn:
            for row in log_rows:
                # Preserve original IDs to keep migration idempotent.
                res = conn.execute(
                    text(
                        """
                        INSERT INTO task_logs (id, task_id, timestamp, level, source, message, metadata_json)
                        VALUES (:id, :task_id, :timestamp, :level, :source, :message, :metadata_json)
                        ON CONFLICT (id) DO NOTHING
                        """
                    ),
                    {
                        "id": row["id"],
                        "task_id": row["task_id"],
                        "timestamp": row["timestamp"],
                        "level": row["level"],
                        "source": row["source"] or "system",
                        "message": row["message"],
                        "metadata_json": row["metadata_json"],
                    },
                )
                if res.rowcount and res.rowcount > 0:
                    stats["task_logs_inserted"] += int(res.rowcount)

            # Ensure sequence is aligned after explicit id inserts.
            conn.execute(
                text(
                    """
                    SELECT setval(
                        'task_logs_id_seq',
                        COALESCE((SELECT MAX(id) FROM task_logs), 1),
                        TRUE
                    )
                    """
                )
            )

        logger.info(
            "[_migrate_tasks_and_logs][Coherence:OK] task_records=%s/%s task_logs=%s/%s",
            stats["task_records_inserted"],
            stats["task_records_total"],
            stats["task_logs_inserted"],
            stats["task_logs_total"],
        )
        return stats
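
The `setval(..., TRUE)` call near the end is what keeps future inserts safe after rows were written with explicit ids. A worked example of the arithmetic (numbers illustrative):

# If the largest migrated task_logs.id is 420, setval('task_logs_id_seq', 420, TRUE)
# marks 420 as already consumed, so the next default insert receives
# nextval() == 421 and cannot collide with a migrated row.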


# [DEF:run_migration:Function]
# @PURPOSE: Orchestrates migration from SQLite/file to PostgreSQL.
def run_migration(sqlite_path: Path, target_url: str, legacy_config_path: Optional[Path]) -> Dict[str, int]:
    with belief_scope("run_migration"):
        logger.info("[run_migration][Entry] sqlite=%s target=%s", sqlite_path, target_url)
        if not sqlite_path.exists():
            raise FileNotFoundError(f"SQLite source not found: {sqlite_path}")

        sqlite_conn = _connect_sqlite(sqlite_path)
        engine = create_engine(target_url, pool_pre_ping=True)
        try:
            _ensure_target_schema(engine)
            config_upserted = _migrate_config(engine, legacy_config_path)
            stats = _migrate_tasks_and_logs(engine, sqlite_conn)
            stats["config_upserted"] = config_upserted
            return stats
        finally:
            sqlite_conn.close()
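
A minimal programmatic invocation, assuming a reachable PostgreSQL instance and the default repository layout (all values illustrative):

from pathlib import Path

stats = run_migration(
    sqlite_path=Path("backend/tasks.db"),
    target_url="postgresql+psycopg2://postgres:postgres@localhost:5432/ss_tools",
    legacy_config_path=Path("config.json"),  # or None to skip the config step
)
print(stats)  # includes "config_upserted" plus the task/log counters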


# [DEF:main:Function]
# @PURPOSE: CLI entrypoint.
def main() -> int:
    with belief_scope("main"):
        parser = argparse.ArgumentParser(
            description="Migrate legacy config.json and task logs from SQLite to PostgreSQL.",
        )
        parser.add_argument(
            "--sqlite-path",
            default="backend/tasks.db",
            help="Path to source SQLite DB with task_records/task_logs (default: backend/tasks.db).",
        )
        parser.add_argument(
            "--target-url",
            default=DEFAULT_TARGET_URL,
            help="Target PostgreSQL SQLAlchemy URL (default: DATABASE_URL/POSTGRES_URL env).",
        )
        parser.add_argument(
            "--config-path",
            default=None,
            help="Optional path to legacy config.json (auto-detected when omitted).",
        )

        args = parser.parse_args()

        sqlite_path = Path(args.sqlite_path)
        legacy_config_path = _find_legacy_config_path(args.config_path)
        try:
            stats = run_migration(sqlite_path=sqlite_path, target_url=args.target_url, legacy_config_path=legacy_config_path)
            print("Migration completed.")
            print(json.dumps(stats, indent=2))
            return 0
        except (SQLAlchemyError, OSError, sqlite3.Error, ValueError) as e:
            logger.error("[main][Coherence:Failed] Migration failed: %s", e)
            print(f"Migration failed: {e}")
            return 1


if __name__ == "__main__":
    raise SystemExit(main())
# [/DEF:main:Function]

# [/DEF:backend.src.scripts.migrate_sqlite_to_postgres:Module]
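
The same flow is exposed on the command line via `main()` above, presumably as something like `python backend/src/scripts/migrate_sqlite_to_postgres.py --sqlite-path backend/tasks.db --target-url <postgres-url>`; the script path here is inferred from the module's DEF anchor, so treat it as an assumption.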

42
docker-compose.yml
Normal file
@@ -0,0 +1,42 @@
services:
  db:
    image: ${POSTGRES_IMAGE:-postgres:16-alpine}
    container_name: ss_tools_db
    restart: unless-stopped
    environment:
      POSTGRES_DB: ss_tools
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
    ports:
      - "${POSTGRES_HOST_PORT:-5432}:5432"
    volumes:
      - postgres_data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres -d ss_tools"]
      interval: 10s
      timeout: 5s
      retries: 10

  app:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: ss_tools_app
    restart: unless-stopped
    depends_on:
      db:
        condition: service_healthy
    environment:
      POSTGRES_URL: postgresql+psycopg2://postgres:postgres@db:5432/ss_tools
      DATABASE_URL: postgresql+psycopg2://postgres:postgres@db:5432/ss_tools
      TASKS_DATABASE_URL: postgresql+psycopg2://postgres:postgres@db:5432/ss_tools
      AUTH_DATABASE_URL: postgresql+psycopg2://postgres:postgres@db:5432/ss_tools
      BACKEND_PORT: 8000
    ports:
      - "8000:8000"
    volumes:
      - ./backups:/app/backups
      - ./backend/git_repos:/app/backend/git_repos

volumes:
  postgres_data:
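
All four `*_URL` variables point the app container at the same Postgres instance. A sketch of the env-first resolution implied by the migration script's `--target-url` help text (the actual `DEFAULT_TARGET_URL` definition sits above the excerpt and may differ):

import os

# Assumed resolution order (DATABASE_URL, then POSTGRES_URL, then a local
# fallback); illustrative, not the script's literal code.
DEFAULT_TARGET_URL = (
    os.environ.get("DATABASE_URL")
    or os.environ.get("POSTGRES_URL")
    or "postgresql+psycopg2://postgres:postgres@localhost:5432/ss_tools"
)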

@@ -1,13 +1,14 @@
 # [DEF:generate_semantic_map:Module]
 #
 # @TIER: CRITICAL
-# @SEMANTICS: semantic_analysis, parser, map_generator, compliance_checker, tier_validation, svelte_props, data_flow
+# @SEMANTICS: semantic_analysis, parser, map_generator, compliance_checker, tier_validation, svelte_props, data_flow, module_map
-# @PURPOSE: Scans the codebase to generate a Semantic Map and Compliance Report based on the System Standard.
+# @PURPOSE: Scans the codebase to generate a Semantic Map, Module Map, and Compliance Report based on the System Standard.
 # @LAYER: DevOps/Tooling
 # @INVARIANT: All DEF anchors must have matching closing anchors; TIER determines validation strictness.
 # @RELATION: READS -> FileSystem
 # @RELATION: PRODUCES -> semantics/semantic_map.json
 # @RELATION: PRODUCES -> .ai/PROJECT_MAP.md
+# @RELATION: PRODUCES -> .ai/MODULE_MAP.md
 # @RELATION: PRODUCES -> semantics/reports/semantic_report_*.md

 # [SECTION: IMPORTS]
@@ -83,6 +84,7 @@ IGNORE_FILES = {
 }
 OUTPUT_JSON = "semantics/semantic_map.json"
 OUTPUT_COMPRESSED_MD = ".ai/PROJECT_MAP.md"
+OUTPUT_MODULE_MAP_MD = ".ai/MODULE_MAP.md"
 REPORTS_DIR = "semantics/reports"

 # Tier-based mandatory tags
@@ -830,6 +832,7 @@ class SemanticMapGenerator:

         self._generate_report()
         self._generate_compressed_map()
+        self._generate_module_map()
 # [/DEF:_generate_artifacts:Function]

 # [DEF:_generate_report:Function]
@@ -990,6 +993,163 @@
            self._write_entity_md(f, child, level + 1)
    # [/DEF:_write_entity_md:Function]

    # [DEF:_generate_module_map:Function]
    # @TIER: CRITICAL
    # @PURPOSE: Generates a module-centric map grouping entities by directory structure.
    # @PRE: Entities have been processed.
    # @POST: Markdown module map is written to .ai/MODULE_MAP.md.
    def _generate_module_map(self):
        with belief_scope("_generate_module_map"):
            os.makedirs(os.path.dirname(OUTPUT_MODULE_MAP_MD), exist_ok=True)

            # Group entities by directory/module
            modules: Dict[str, Dict[str, Any]] = {}

            # [DEF:_get_module_path:Function]
            # @TIER: STANDARD
            # @PURPOSE: Extracts the module path from a file path.
            # @PRE: file_path is a valid relative path.
            # @POST: Returns a module path string.
            def _get_module_path(file_path: str) -> str:
                # Convert file path to module-like path
                parts = file_path.replace(os.sep, '/').split('/')
                # Remove filename
                if len(parts) > 1:
                    return '/'.join(parts[:-1])
                return 'root'
            # [/DEF:_get_module_path:Function]
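            # Illustrative mapping (assumed POSIX-style relative paths; not from the source):
            #   _get_module_path("backend/src/scripts/migrate.py") -> "backend/src/scripts"
            #   _get_module_path("README.md")                      -> "root"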

            # [DEF:_collect_all_entities:Function]
            # @TIER: STANDARD
            # @PURPOSE: Flattens entity tree for easier grouping.
            # @PRE: entity list is valid.
            # @POST: Returns flat list of all entities with their hierarchy.
            def _collect_all_entities(entities: List[SemanticEntity], result: List[Tuple[str, SemanticEntity]]):
                for e in entities:
                    result.append((_get_module_path(e.file_path), e))
                    _collect_all_entities(e.children, result)
            # [/DEF:_collect_all_entities:Function]

            # Collect all entities
            all_entities: List[Tuple[str, SemanticEntity]] = []
            _collect_all_entities(self.entities, all_entities)

            # Group by module path
            for module_path, entity in all_entities:
                if module_path not in modules:
                    modules[module_path] = {
                        'entities': [],
                        'files': set(),
                        'layers': set(),
                        'tiers': {'CRITICAL': 0, 'STANDARD': 0, 'TRIVIAL': 0},
                        'relations': []
                    }
                modules[module_path]['entities'].append(entity)
                modules[module_path]['files'].add(entity.file_path)
                if entity.tags.get('LAYER'):
                    modules[module_path]['layers'].add(entity.tags.get('LAYER'))
                tier = entity.get_tier().value
                modules[module_path]['tiers'][tier] = modules[module_path]['tiers'].get(tier, 0) + 1
                for rel in entity.relations:
                    modules[module_path]['relations'].append(rel)

            # Write module map
            with open(OUTPUT_MODULE_MAP_MD, 'w', encoding='utf-8') as f:
                f.write("# Module Map\n\n")
                f.write("> High-level module structure for AI Context. Generated automatically.\n\n")
                f.write(f"**Generated:** {datetime.datetime.now().isoformat()}\n\n")

                # Summary statistics
                total_modules = len(modules)
                total_entities = len(all_entities)
                f.write("## Summary\n\n")
                f.write(f"- **Total Modules:** {total_modules}\n")
                f.write(f"- **Total Entities:** {total_entities}\n\n")

                # Module hierarchy
                f.write("## Module Hierarchy\n\n")

                # Sort modules by path for consistent output
                sorted_modules = sorted(modules.items(), key=lambda x: x[0])

                for module_path, data in sorted_modules:
                    # Calculate module depth for indentation
                    depth = module_path.count('/')
                    indent = "  " * depth

                    # Module header
                    module_name = module_path.split('/')[-1] if module_path != 'root' else 'root'
                    f.write(f"{indent}### 📁 `{module_name}/`\n\n")

                    # Module metadata
                    if data['layers']:
                        layers_str = ", ".join(sorted(data['layers']))
                        f.write(f"{indent}- 🏗️ **Layers:** {layers_str}\n")

                    tiers_summary = []
                    for tier_name, count in data['tiers'].items():
                        if count > 0:
                            tiers_summary.append(f"{tier_name}: {count}")
                    if tiers_summary:
                        f.write(f"{indent}- 📊 **Tiers:** {', '.join(tiers_summary)}\n")

                    f.write(f"{indent}- 📄 **Files:** {len(data['files'])}\n")
                    f.write(f"{indent}- 📦 **Entities:** {len(data['entities'])}\n")

                    # List key entities (Modules, Classes, Components only)
                    key_entities = [e for e in data['entities'] if e.type in ['Module', 'Class', 'Component', 'Store']]
                    if key_entities:
                        f.write(f"\n{indent}**Key Entities:**\n\n")
                        for entity in sorted(key_entities, key=lambda x: (x.type, x.name))[:10]:
                            icon = "📦" if entity.type == "Module" else "ℂ" if entity.type == "Class" else "🧩" if entity.type == "Component" else "🗄️"
                            tier_badge = ""
                            if entity.get_tier() == Tier.CRITICAL:
                                tier_badge = " `[CRITICAL]`"
                            elif entity.get_tier() == Tier.TRIVIAL:
                                tier_badge = " `[TRIVIAL]`"
                            purpose = entity.tags.get('PURPOSE', '')[:60] + "..." if entity.tags.get('PURPOSE') and len(entity.tags.get('PURPOSE', '')) > 60 else entity.tags.get('PURPOSE', '')
                            f.write(f"{indent}  - {icon} **{entity.name}** ({entity.type}){tier_badge}\n")
                            if purpose:
                                f.write(f"{indent}    - {purpose}\n")

                    # External relations
                    external_relations = [r for r in data['relations'] if r['type'] in ['DEPENDS_ON', 'IMPLEMENTS', 'INHERITS']]
                    if external_relations:
                        unique_deps = {}
                        for rel in external_relations:
                            key = f"{rel['type']} -> {rel['target']}"
                            unique_deps[key] = rel
                        f.write(f"\n{indent}**Dependencies:**\n\n")
                        for rel_str in sorted(unique_deps.keys())[:5]:
                            f.write(f"{indent}  - 🔗 {rel_str}\n")

                    f.write("\n")

                # Cross-module dependency graph
                f.write("## Cross-Module Dependencies\n\n")
                f.write("```mermaid\n")
                f.write("graph TD\n")

                # Find inter-module dependencies
                for module_path, data in sorted_modules:
                    module_name = module_path.split('/')[-1] if module_path != 'root' else 'root'
                    safe_name = module_name.replace('-', '_').replace('.', '_')

                    for rel in data['relations']:
                        target = rel.get('target', '')
                        # Check if target references another module
                        for other_module in modules:
                            if other_module != module_path and other_module in target:
                                other_name = other_module.split('/')[-1]
                                safe_other = other_name.replace('-', '_').replace('.', '_')
                                f.write(f"    {safe_name}-->|{rel['type']}|{safe_other}\n")
                                break

                f.write("```\n")

            print(f"Generated {OUTPUT_MODULE_MAP_MD}")
    # [/DEF:_generate_module_map:Function]
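
For illustration: given two hypothetical modules `backend/core` and `backend/scripts`, where a `scripts` entity has a DEPENDS_ON relation whose target mentions `backend/core`, the loop above would emit a Mermaid block along these lines:

    graph TD
        scripts-->|DEPENDS_ON|core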

# [/DEF:SemanticMapGenerator:Class]

File diff suppressed because it is too large
15
ut
Normal file
@@ -0,0 +1,15 @@
Prepended http:// to './RealiTLScanner'
--2026-02-20 11:14:59--  http://./RealiTLScanner
Resolving . (.)… failed: No address associated with hostname.
wget: unable to resolve host address ‘.’
Prepended http:// to 'www.microsoft.com'
--2026-02-20 11:14:59--  http://www.microsoft.com/
Resolving www.microsoft.com (www.microsoft.com)… 95.100.178.81
Connecting to www.microsoft.com (www.microsoft.com)|95.100.178.81|:80... connected.
HTTP request sent, awaiting response… 403 Forbidden
2026-02-20 11:15:00 ERROR 403: Forbidden.

Prepended http:// to 'file.csv'
--2026-02-20 11:15:00--  http://file.csv/
Resolving file.csv (file.csv)… failed: Name or service not known.
wget: unable to resolve host address ‘file.csv’