**Commit**: tasks ready

---

**File**: `specs/022-sync-id-cross-filters/checklists/requirements.md` (new file, 35 lines)
# Specification Quality Checklist: Service ID Synchronization and Cross-Filter Recovery

**Purpose**: Validate specification completeness and quality before proceeding to planning
**Created**: 2026-02-25
**Feature**: [022-sync-id-cross-filters spec](../spec.md)

## Content Quality

- [x] No implementation details (languages, frameworks, APIs)
- [x] Focused on user value and business needs
- [x] Written for non-technical stakeholders
- [x] All mandatory sections completed

## Requirement Completeness

- [x] No [NEEDS CLARIFICATION] markers remain
- [x] Requirements are testable and unambiguous
- [x] Success criteria are measurable
- [x] Success criteria are technology-agnostic (no implementation details)
- [x] All acceptance scenarios are defined
- [x] Edge cases are identified
- [x] Scope is clearly bounded
- [x] Dependencies and assumptions identified

## Feature Readiness

- [x] All functional requirements have clear acceptance criteria
- [x] User scenarios cover primary flows
- [x] Feature meets measurable outcomes defined in Success Criteria
- [x] No implementation details leak into specification

## Notes

- Feature spec meets all standard checks. It correctly captures the business goal (restoring cross-filters transparently) while avoiding deep assumptions about the Superset backend code, beyond standard ZIP import/export payload patching.
- Ready for `/speckit.plan` or `/speckit.tasks`.
---

**File**: `specs/022-sync-id-cross-filters/contracts/api.md` (new file, 64 lines)
# API Contracts

## 1. Migration Payload Additions

The migration execution API endpoint (e.g. `POST /api/v1/migration/run` or similar) needs to accept a new boolean flag.

```json
{
  "source_environment_id": "env_abc",
  "target_environment_id": "env_xyz",
  "resource_uuids": ["1234-abcd-...", "5678-efgh-..."],
  "fix_cross_filters": true
}
```
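In the FastAPI backend this flag would most likely live on a Pydantic request model; a stdlib `dataclass` sketch of the same shape is shown below. The class name is hypothetical, and the field names mirror the payload above. Note the default of `True`, making cross-filter repair opt-out:

```python
from dataclasses import dataclass, field

@dataclass
class MigrationRunRequest:
    """Illustrative request model for the payload above (the real backend
    would likely use a Pydantic model; the class name is hypothetical)."""
    source_environment_id: str
    target_environment_id: str
    resource_uuids: list[str] = field(default_factory=list)
    fix_cross_filters: bool = True  # opt-out flag for cross-filter repair

req = MigrationRunRequest("env_abc", "env_xyz", ["1234-abcd-..."])
print(req.fix_cross_filters)  # True
```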

## 2. Settings API Endpoints

The frontend requires REST endpoints to manage the service settings.

```json
// GET /api/v1/migration/settings
{
  "sync_cron_schedule": "0 * * * *" // Example: every hour
}

// PUT /api/v1/migration/settings
{
  "sync_cron_schedule": "*/30 * * * *"
}

// GET /api/v1/migration/mappings
// Returns paginated list of ResourceMapping records
{
  "items": [
    {
      "environment_id": "prod-env-1",
      "environment_name": "Production",
      "resource_type": "chart",
      "uuid": "123e4567-e89b-12d3-a456-426614174000",
      "remote_integer_id": 42,
      "resource_name": "Monthly Active Users",
      "last_synced_at": "2026-02-25T10:00:00Z"
    }
  ],
  "total": 1
}
```
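The `PUT` endpoint should reject malformed cron strings before persisting them. Below is a minimal, stdlib-only syntactic check for a standard five-field expression; a real implementation would more likely delegate to a cron library (e.g. `croniter` or APScheduler's `CronTrigger`). The function name is illustrative:

```python
import re

# One cron field: *, */n, a number, a range (a-b), optionally stepped,
# or a comma-separated list of those.
_FIELD = r"(\*(/\d+)?|\d+(-\d+)?(/\d+)?)(,(\*(/\d+)?|\d+(-\d+)?(/\d+)?))*"
_CRON_RE = re.compile(rf"^{_FIELD}( {_FIELD}){{4}}$")

def is_valid_cron(expr: str) -> bool:
    """Cheap syntactic check for a standard 5-field cron expression.
    Does NOT validate value ranges (e.g. minute <= 59)."""
    return bool(_CRON_RE.match(expr.strip()))

print(is_valid_cron("0 * * * *"))     # True
print(is_valid_cron("*/30 * * * *"))  # True
print(is_valid_cron("every hour"))    # False
```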

## 3. IdMappingService Internal Interface

```python
class IdMappingService:
    def sync_environment(self, environment_id: str) -> None:
        """Fully synchronizes mappings for a specific environment."""
        pass

    def get_remote_id(self, environment_id: str, resource_type: str, uuid: str) -> int | None:
        """Retrieves the remote integer ID for a given universal UUID."""
        pass

    def get_remote_ids_batch(self, environment_id: str, resource_type: str, uuids: list[str]) -> dict[str, int]:
        """Retrieves remote integer IDs for a list of universal UUIDs."""
        pass
```
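A sketch of how `get_remote_ids_batch` could be backed by the local SQLite catalog. The table and column names follow `data-model.md`; everything else is illustrative. UUIDs absent from the catalog are simply omitted from the result, which is what lets the migration engine detect missing targets:

```python
import sqlite3

def get_remote_ids_batch(conn: sqlite3.Connection, environment_id: str,
                         resource_type: str, uuids: list[str]) -> dict[str, int]:
    """Look up remote integer IDs for many UUIDs in a single query."""
    if not uuids:
        return {}
    placeholders = ",".join("?" * len(uuids))
    rows = conn.execute(
        f"SELECT uuid, remote_integer_id FROM resource_mappings "
        f"WHERE environment_id = ? AND resource_type = ? AND uuid IN ({placeholders})",
        [environment_id, resource_type, *uuids],
    )
    return dict(rows.fetchall())

# Demo against an in-memory catalog
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE resource_mappings "
             "(environment_id TEXT, resource_type TEXT, uuid TEXT, remote_integer_id INTEGER)")
conn.execute("INSERT INTO resource_mappings VALUES ('prod-env-1', 'chart', 'uuid-a', 42)")
result = get_remote_ids_batch(conn, "prod-env-1", "chart", ["uuid-a", "uuid-missing"])
print(result)  # {'uuid-a': 42}
```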
---

**File**: `specs/022-sync-id-cross-filters/data-model.md` (new file, 24 lines)
# Phase 1 Data Model: Service ID Synchronization and Cross-Filter Recovery

## Entities

### `ResourceMapping`

Persists the discovered external integer IDs and maps them to universal UUIDs.

**Fields**:

- `id` (Primary Key, integer)
- `environment_id` (String, foreign key or reference to Environment configuration)
- `resource_type` (Enum: `chart`, `dataset`, `dashboard`)
- `uuid` (String, UUID format, unique per environment + type)
- `remote_integer_id` (Integer)
- `resource_name` (String, human-readable name for UI display)
- `last_synced_at` (DateTime, UTC)

**Constraints**:

- Unique constraint on `(environment_id, resource_type, uuid)`
- Unique constraint on `(environment_id, resource_type, remote_integer_id)` (optional, but generally holds in Superset)

## Validations

- `remote_integer_id` must be > 0
- `environment_id` must reference an active, valid registered environment
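The entity above translates directly into a table definition. A stdlib `sqlite3` sketch (the plan mentions SQLAlchemy as an alternative; the exact DDL here is illustrative), demonstrating that the unique constraint rejects a duplicate `(environment_id, resource_type, uuid)` row:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE resource_mappings (
        id                INTEGER PRIMARY KEY,
        environment_id    TEXT    NOT NULL,
        resource_type     TEXT    NOT NULL
                          CHECK (resource_type IN ('chart', 'dataset', 'dashboard')),
        uuid              TEXT    NOT NULL,
        remote_integer_id INTEGER NOT NULL CHECK (remote_integer_id > 0),
        resource_name     TEXT,
        last_synced_at    TEXT,
        UNIQUE (environment_id, resource_type, uuid)
    )
""")
conn.execute("INSERT INTO resource_mappings "
             "(environment_id, resource_type, uuid, remote_integer_id) "
             "VALUES ('prod-env-1', 'chart', 'uuid-a', 42)")

# A second row with the same (environment, type, uuid) violates the unique constraint.
try:
    conn.execute("INSERT INTO resource_mappings "
                 "(environment_id, resource_type, uuid, remote_integer_id) "
                 "VALUES ('prod-env-1', 'chart', 'uuid-a', 99)")
    duplicate_rejected = False
except sqlite3.IntegrityError:
    duplicate_rejected = True
print(duplicate_rejected)  # True
```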
---

**File**: `specs/022-sync-id-cross-filters/plan.md` (new file, 86 lines)
# Implementation Plan: ID Synchronization and Cross-Filter Recovery

**Branch**: `022-sync-id-cross-filters` | **Date**: 2026-02-25 | **Spec**: [spec.md](./spec.md)
**Input**: Feature specification from `/specs/022-sync-id-cross-filters/spec.md`

## Summary

This feature implements a mechanism to recover native cross-filtering links when migrating Apache Superset dashboards. It introduces an `IdMappingService` that tracks Superset's unstable integer IDs across environments using stable UUIDs. During migration, a new `fix_cross_filters` option enables a workflow that intercepts the export ZIP, dynamically translates Source environment integer IDs to Target environment integer IDs in the dashboard's `json_metadata`, and performs a dual-import sequence if the chart objects don't yet exist on the target.

Additionally, it provides a Settings UI allowing administrators to configure the background synchronization interval via a cron string and to inspect the internal mapping catalog, showing real names (`resource_name`) alongside their UUIDs and IDs across different environments.

## Technical Context

**Language/Version**: Python 3.11+
**Primary Dependencies**: FastAPI, Pydantic, APScheduler, SQLAlchemy (or the existing SQLite interface)
**Storage**: Local `mappings.db` (SQLite)
**Testing**: `pytest`
**Target Platform**: Linux / Docker
**Project Type**: Backend web service (`ss-tools` API)
**Performance Goals**: ID translations must complete in < 5 seconds. Background sync must paginate smoothly to avoid overwhelming Target Superset APIs.
**Constraints**: Requires admin API access to Superset instances.
**Scale/Scope**: Supports hundreds of dashboards and charts per environment.
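The "paginate smoothly" goal implies the sync job walks the Superset list endpoints page by page rather than requesting everything at once. A hedged sketch of that loop with a pluggable fetcher; the page size and the `fetch_page` callable are illustrative stand-ins for a real client hitting Superset's list endpoints:

```python
from typing import Callable, Iterator

def iter_resources(fetch_page: Callable[[int, int], list[dict]],
                   page_size: int = 100) -> Iterator[dict]:
    """Yield resources page by page until a short or empty page signals the end."""
    page = 0
    while True:
        batch = fetch_page(page, page_size)
        yield from batch
        if len(batch) < page_size:  # last page reached
            return
        page += 1

# Demo with a fake environment of 250 charts (2.5 pages at page_size=100)
fake_env = [{"id": i, "uuid": f"uuid-{i}"} for i in range(250)]

def fetch_page(page: int, size: int) -> list[dict]:
    return fake_env[page * size:(page + 1) * size]

total = sum(1 for _ in iter_resources(fetch_page, page_size=100))
print(total)  # 250
```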

## Constitution Check

*GATE: Passed*

- **I. Library-First**: The `IdMappingService` will be strictly isolated as a core concept within `backend/src/core/mapping_service` (or similar). It does NOT depend on web-layer code.
- **II. CLI Interface**: N/A for this internal daemon, though its behavior is exposed over the API.
- **III. Test-First (NON-NEGOTIABLE)**: We will write targeted `pytest` unit tests for the ZIP manifest patching logic within `MigrationEngine` using the fixtures provided in the spec.
- **IV. Integration Testing**: A mock Superset API environment will be required to integration-test the "Dual-Import" capability safely.

## Project Structure

### Documentation (this feature)

```text
specs/022-sync-id-cross-filters/
├── plan.md          # This file
├── research.md      # Phase 0 output
├── data-model.md    # Phase 1 output
├── quickstart.md    # Phase 1 output
├── contracts/       # Phase 1 output
└── tasks.md         # Phase 2 output (Pending)
```

### Source Code

```text
backend/
├── src/
│   ├── core/
│   │   ├── migration_engine.py   # (MODIFIED) Add YAML patch layer
│   │   └── mapping_service.py    # (NEW) IdMappingService + scheduling
│   ├── models/
│   │   └── mappings.py           # (NEW) SQLAlchemy/Pydantic models for ResourceMapping
│   └── api/
│       └── routes/
│           ├── migration.py      # (MODIFIED) Add fix_cross_filters config flag
│           └── settings.py       # (MODIFIED/NEW) Add cron config and mapping GET endpoints
└── tests/
    └── core/
        ├── test_migration_engine.py  # (NEW/MODIFIED)
        └── test_mapping_service.py   # (NEW)

frontend/
├── src/
│   ├── routes/
│   │   └── settings/             # (MODIFIED) Add UI for cron and mapping table
│   └── components/
│       └── mappings/
│           └── MappingTable.svelte  # (NEW) Table to view current UUID <-> ID catalogs
```

**Structure Decision**: Extend the existing backend FastAPI structure. The logic touches API routing and the core `MigrationEngine`, and introduces a new standalone `mapping_service.py` to handle the scheduler and DB layer.

## Complexity Tracking

*No major constitutional violations.* The only added complexity is the "Dual-Import" mechanism in `MigrationEngine`, which is strictly necessary because Superset assigns IDs unconditionally on creation.

## Test Data Reference

| Component | TIER | Fixture Name | Location |
|-----------|------|--------------|----------|
| `IdMappingService` | CRITICAL | `resource_mapping_record` | spec.md#test-data-fixtures |
| `MigrationEngine` | CRITICAL | `missing_target_scenario` | spec.md#test-data-fixtures |
---

**File**: `specs/022-sync-id-cross-filters/quickstart.md` (new file, 19 lines)
# Quickstart: ID Synchronization

## Getting Started

1. **Service Initialization**:
   Ensure the `IdMappingService` scheduler is started when the backend application boots up.

   ```python
   # In app.py or similar startup logic
   from core.mapping_service import IdMappingService

   mapping_service = IdMappingService()
   mapping_service.start_scheduler(interval_minutes=30)
   ```
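   Internally, `start_scheduler` can be as simple as a repeating background job. The plan suggests APScheduler; the dependency-free sketch below shows the same behaviour with `threading.Timer` (class and method names are illustrative, and the interval is shortened for the demo):

   ```python
   import threading

   class SimpleScheduler:
       """Minimal repeating job runner: each run re-arms a daemon Timer."""

       def __init__(self, interval_seconds: float, job) -> None:
           self.interval = interval_seconds
           self.job = job
           self._timer: threading.Timer | None = None

       def _tick(self) -> None:
           self.job()
           self._arm()

       def _arm(self) -> None:
           self._timer = threading.Timer(self.interval, self._tick)
           self._timer.daemon = True
           self._timer.start()

       def start(self) -> None:
           self._arm()

       def stop(self) -> None:
           if self._timer is not None:
               self._timer.cancel()

   runs: list[int] = []
   sched = SimpleScheduler(0.05, lambda: runs.append(1))
   sched.start()
   threading.Event().wait(0.25)  # let a few ticks fire
   sched.stop()
   print(len(runs) >= 1)  # True
   ```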

2. **Running Migrations with Cross-Filter Sync**:
   When calling the migration endpoint, set `fix_cross_filters=True`. The engine will automatically pause, perform a technical import, sync IDs, patch the manifest, and perform the final import.

*(No additional quickstart steps are required, as this logic is embedded directly into the migration engine.)*
---

**File**: `specs/022-sync-id-cross-filters/research.md` (new file, 39 lines)
# Phase 0 Research: Service ID Synchronization and Cross-Filter Recovery

## 1. Technical Context Clarifications

- **Language/Version**: Python 3.11+
- **Primary Dependencies**: FastAPI, SQLite (local mapping DB), APScheduler (background jobs)
- **Storage**: SQLite (`mappings.db`) or a similar existing local store / ORM for storing `resource_mappings`.
- **Testing**: pytest (backend)
- **Target Platform**: Dockerized Linux service

## 2. Research Tasks & Unknowns

### Q: Where to store the environment mappings?

- **Decision**: A new SQLite table `resource_mappings` inside the existing backend database (or standard SQLAlchemy models, if used).
- **Rationale**: Minimal infrastructure overhead. We already use an internal database (implied by `mappings.db` or the SQLAlchemy models).
- **Alternatives considered**: Redis (too volatile; we don't want to lose mappings on restart), separate PostgreSQL (overkill).

### Q: How to schedule the synchronization background job?

- **Decision**: Use `APScheduler` or FastAPI's `BackgroundTasks` triggered by a simple internal mechanism. If `APScheduler` is already in the project, we'll use that.
- **Rationale**: Standard Python ecosystem approach for periodic in-process background tasks.
- **Alternatives considered**: Celery (too heavy), cron container (requires external setup).

### Q: How to intercept the migration payload (ZIP) and patch it?

- **Decision**: In `MigrationEngine.migrate()`, before uploading to the Target environment via `api_client`, intercept the downloaded ZIP bytes. Extract `dashboards/*.yaml`, parse the YAML, find and replace `id` references in `json_metadata`, write back to YAML, repack the ZIP, and dispatch.
- **Rationale**: Dashboards export as ZIP files containing YAML manifests. The `json_metadata` in these YAMLs contains the `chart_configuration` we need to patch.
- **Alternatives considered**: Patching via the Superset API after import (harder; requires precise API calls and dealing with Superset's strict dashboard validation).
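The intercept-and-repack step can be sketched with the stdlib alone. Here the YAML parsing is simplified to a text transform applied to each matching manifest (a real implementation would parse the YAML, remap the integer keys inside `json_metadata.chart_configuration`, and dump it back); all names are illustrative:

```python
import io
import zipfile
from typing import Callable

def patch_zip(zip_bytes: bytes,
              should_patch: Callable[[str], bool],
              transform: Callable[[str], str]) -> bytes:
    """Rewrite selected text members of a ZIP archive and repack it."""
    out = io.BytesIO()
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as src, \
         zipfile.ZipFile(out, "w", zipfile.ZIP_DEFLATED) as dst:
        for info in src.infolist():
            data = src.read(info.filename)
            if should_patch(info.filename):
                data = transform(data.decode("utf-8")).encode("utf-8")
            dst.writestr(info.filename, data)
    return out.getvalue()

# Demo: remap a Source chart ID (101) to its Target ID (42) inside json_metadata
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("dashboards/sales.yaml",
                'json_metadata: \'{"chart_configuration": {"101": {}}}\'\n')
    zf.writestr("metadata.yaml", "version: 1.0.0\n")

patched = patch_zip(
    buf.getvalue(),
    should_patch=lambda name: name.startswith("dashboards/") and name.endswith(".yaml"),
    transform=lambda text: text.replace('"101"', '"42"'),
)

with zipfile.ZipFile(io.BytesIO(patched)) as zf:
    manifest = zf.read("dashboards/sales.yaml").decode("utf-8")
print('"42"' in manifest)  # True
```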

### Q: How to handle the "Double Import" strategy?

- **Decision**: If the Target Check shows missing UUIDs, execute a temporary import, trigger a direct `IdMappingService.sync_environment(target_env)` to fetch the newly created IDs, then proceed with the patching and final import with `overwrite=true`.
- **Rationale**: Superset generates IDs upon creation. We cannot guess them; we must force Superset to create them first.
- **Alternatives considered**: Creating charts via the API manually, one by one (loses the complex export logic and reinvents the importer).
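The decision above reduces to a small control-flow skeleton. In the sketch below, injected callables stand in for the real API client and `IdMappingService` (all names hypothetical):

```python
from typing import Callable

def migrate_dashboard(zip_bytes: bytes,
                      lookup_ids: Callable[[], dict[str, int]],
                      technical_import: Callable[[bytes], None],
                      sync_environment: Callable[[], None],
                      patch: Callable[[bytes, dict[str, int]], bytes],
                      final_import: Callable[[bytes], None],
                      required_uuids: list[str]) -> None:
    """Dual-import skeleton: import once to force ID creation, re-sync the
    catalog, then patch the manifest and import for real."""
    id_map = lookup_ids()
    if any(u not in id_map for u in required_uuids):
        technical_import(zip_bytes)  # forces Superset to mint integer IDs
        sync_environment()           # pulls the new IDs into the local catalog
        id_map = lookup_ids()
    final_import(patch(zip_bytes, id_map))

# Demo with fakes standing in for the API client and IdMappingService
calls: list[str] = []
catalog: dict[str, int] = {}

def fake_sync() -> None:
    calls.append("sync")
    catalog["uuid-a"] = 42  # simulate discovering the freshly minted ID

migrate_dashboard(
    b"zip",
    lookup_ids=lambda: dict(catalog),
    technical_import=lambda _: calls.append("technical_import"),
    sync_environment=fake_sync,
    patch=lambda zb, ids: (calls.append(f"patch:{ids['uuid-a']}"), zb)[1],
    final_import=lambda _: calls.append("final_import"),
    required_uuids=["uuid-a"],
)
print(calls)  # ['technical_import', 'sync', 'patch:42', 'final_import']
```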

### Q: Integrating the checkbox into the UI

- **Decision**: Pass a `fix_cross_filters=True` boolean in the migration payload from the Frontend to the Backend.
- **Rationale**: Standard API contract extension.

## 3. Existing Code Integration

We need to hook into `backend/src/core/migration_engine.py`. This file likely handles the `export -> import` flow. We will add a pre-processing step right before the final `api_client.import_dashboards()` call on the target environment.
---

**File**: `specs/022-sync-id-cross-filters/spec.md` (new file, 120 lines)
# Feature Specification: ID Synchronization and Cross-Filter Recovery

**Feature Branch**: `022-sync-id-cross-filters`
**Reference UX**: N/A
**Created**: 2026-02-25
**Status**: Draft
**Input**: User description: "Technical brief: ID synchronization and cross-filter recovery service..."

## User Scenarios & Testing *(mandatory)*

### User Story 1 - Transparent Migration with Working Cross-Filters (Priority: P1)

As a user migrating a dashboard equipped with cross-filters, I want the system to automatically remap internal Chart and Dataset IDs to the target environment's IDs, so that cross-filtering works immediately after migration without manual reconstruction.

**Why this priority**: Core value proposition. Superset breaks cross-filter links during migration because integer IDs change; fixing this is the primary goal of this feature.

**Independent Test**: Can be fully tested by migrating a dashboard containing native cross-filters to a blank environment with "Fix Cross-Filters" checked, and verifying filters function natively on the target without opening them in edit mode.

**Acceptance Scenarios**:

1. **Given** a dashboard with cross-filters pointing to Source IDs, **When** migrated to a Target environment where the charts are new (UUIDs missing), **Then** the system transparently performs a dual-import, fetches the new IDs, patches the metadata, and imports again so the resulting dashboard uses correct Target IDs.
2. **Given** a dashboard with cross-filters where the charts already exist on the Target, **When** migrated, **Then** the system looks up IDs locally and patches the dashboard metadata directly, without an intermediary import.

---

### User Story 2 - UI Option for Cross-Filter Recovery (Priority: P2)

As a user initiating a migration, I want a clear option to toggle cross-filter repair ("Исправить связи кросс-фильтрации" / "Fix cross-filter links"), so I can bypass the double-import logic if I know cross-filters aren't used, or if I require a strict, unmodified file import.

**Why this priority**: Gives users control over the migration pipeline and reduces target system load if patching isn't needed.

**Independent Test**: Can be tested by opening the migration modal/interface and verifying the checkbox state, label, and help description are correct.

**Acceptance Scenarios**:

1. **Given** the migration launch UI, **When** it opens, **Then** the "Исправить связи кросс-фильтрации" checkbox is visible, enabled by default, and has a descriptive tooltip.

---

### User Story 3 - Settings UI and Mapping Table (Priority: P2)

As a system administrator, I want to configure the synchronization schedule via cron and view the current mapping table in the Settings UI, so that I can monitor the agent's status and debug missing IDs.

**Why this priority**: Required for operational visibility and configuration flexibility.

**Independent Test**: Open the settings page, adjust the cron schedule, and view the mapping table. Verify the table displays environment names, UUIDs, integer IDs, and resource names.

**Acceptance Scenarios**:

1. **Given** the Settings UI, **When** navigated to the ID Mapping section, **Then** I see an input to configure the cron string for the background job.
2. **Given** the ID Mapping section, **When** viewed, **Then** I see a table or list showing `Environment`, `UUID`, `Integer ID`, and `Resource Name` for all synchronized objects.

---

### Edge Cases

- What happens when the background sync process fails to reach a specific Environment API (e.g. due to network issues or authentication)?
- How does the system handle migrating dashboards where some UUIDs map properly while others don't (partial existence footprint)?
- What happens if the intermediary "Technical Import" step fails (e.g. due to duplicate names or target environment constraints)?

## Requirements *(mandatory)*

### Functional Requirements

- **FR-001**: System MUST maintain a scheduled background synchronization process to poll all active Environments, configurable via a cron expression in the UI.
- **FR-002**: System MUST catalog Chart, Dataset, and Dashboard objects from each Environment, mapping their universal UUIDs to environment-specific IDs and capturing their human-readable names.
- **FR-003**: System MUST persist this identity catalog locally for rapid access.
- **FR-004**: System MUST provide a pre-migration verification mechanism to determine whether the required Target IDs are already known in the local catalog.
- **FR-005**: System MUST perform an intermediary "Technical Import" and immediate re-synchronization when migrating objects to a fresh environment, effectively forcing the Target system to generate and reveal new IDs.
- **FR-006**: System MUST patch the dashboard configuration metadata before the final import step, ensuring all cross-filter associations exclusively use the correct Target IDs.
- **FR-007**: System MUST provide a user interface option to explicitly toggle the cross-filter recovery feature during migration execution.
- **FR-008**: System MUST provide a Settings UI view displaying the universal UUID, the environment-specific integer ID, the environment name, and the resource name for all cataloged objects.

### Key Entities

- **Resource Mapping**: Represents the unified reference between an Environment, a Resource Type (Chart/Dataset/Dashboard), its universal UUID, its environment-specific integer ID, and its last sync timestamp.

## Success Criteria *(mandatory)*

### Measurable Outcomes

- **SC-001**: 100% of native cross-filters on migrated dashboards continue to function on the Target environment without manual user intervention.
- **SC-002**: Background sync completes across a minimum of 3 standard environments in under 5 minutes without triggering API rate limiting.
- **SC-003**: A single "Fix Cross-Filters" migration takes no more than 2x the execution time of a standard migration.

---

## Test Data Fixtures *(recommended for CRITICAL components)*

### Fixtures

```yaml
resource_mapping_record:
  description: "Example state of the mapping database after a successful sync step"
  data:
    environment_id: "prod-env-1"
    resource_type: "chart"
    uuid: "123e4567-e89b-12d3-a456-426614174000"
    remote_integer_id: 42
    resource_name: "Monthly Active Users"
    last_synced_at: "2026-02-25T10:00:00Z"
  # Data points collected during each sync:
  # 1. Extraction:
  #    * Charts:     `id`, `uuid`, `slice_name`
  #    * Datasets:   `id`, `uuid`, `table_name`
  #    * Dashboards: `id`, `uuid`, `slug`
  # 2. Upsert: data is saved into the migration service's local DB, in the mapping table.
  #    * Uniqueness key: `environment_id` + `uuid`.
  #    * Updated fields: `remote_integer_id` (the current ID on the given environment)
  #      and `resource_name` (for display in the UI).
  #    * Metadata: `last_synced_at`.

missing_target_scenario:
  description: "Target check indicates partial or entirely missing IDs"
  data:
    all_uuids_present: false
    missing_uuids:
      - "123e4567-e89b-12d3-a456-426614174000"
```
---

**File**: `specs/022-sync-id-cross-filters/tasks.md` (new file, 105 lines)
# Tasks: ID Synchronization and Cross-Filter Recovery

**Input**: Design documents from `/specs/022-sync-id-cross-filters/`
**Prerequisites**: plan.md, spec.md, research.md, data-model.md, contracts/api.md

## Format: `[ID] [P?] [Story] Description`

- **[P]**: Can run in parallel (different files, no dependencies)
- **[Story]**: Which user story this task belongs to (e.g. US1, US2)

---

## Phase 1: Setup (Shared Infrastructure)

**Purpose**: Project initialization and basic structure.

- [x] T001 Initialize the database/ORM mapping models structure for the new service in `backend/src/models/mappings.py`
- [x] T002 Configure APScheduler scaffolding in `backend/src/core/mapping_service.py`

---

## Phase 2: Foundational (Blocking Prerequisites)

**Purpose**: Core logic for the `IdMappingService` that MUST be complete before we can use it in migrations.

- [x] T003 Create the `ResourceMapping` SQLAlchemy or SQLite model in `backend/src/models/mappings.py` (include the `resource_name` field)
- [x] T004 Implement `IdMappingService.sync_environment(env_id)` logic in `backend/src/core/mapping_service.py`
- [x] T005 Implement `IdMappingService.get_remote_ids_batch(...)` logic in `backend/src/core/mapping_service.py`
- [x] T006 Write unit tests for `IdMappingService` in `backend/tests/core/test_mapping_service.py` using the `resource_mapping_record` fixture.

**Checkpoint**: Foundation ready - ID cataloging works independently.

---

## Phase 3: User Story 1 - Transparent Migration with Working Cross-Filters (Priority: P1) 🎯 MVP

**Goal**: As a user migrating a dashboard equipped with cross-filters, I want the system to automatically remap internal Chart and Dataset IDs to the target environment's IDs.

### Tests for User Story 1 ⚠️

- [x] T007 [P] [US1] Create unit tests for YAML patching in `backend/tests/core/test_migration_engine.py` using the `missing_target_scenario` fixture.

### Implementation for User Story 1

- [x] T008 [US1] Inject the `IdMappingService` dependency into `MigrationEngine` in `backend/src/core/migration_engine.py`
- [x] T009 [US1] Implement `MigrationEngine._patch_dashboard_metadata(...)` to rewrite integer IDs in `json_metadata`.
- [x] T010 [US1] Modify `MigrationEngine` to perform the "Target Check" against IDs.
- [x] T011 [US1] Modify `MigrationEngine` to execute the "Dual-Import" sequence if the Target Check finds missing IDs.

**Checkpoint**: At this point, User Story 1 should be fully functional and testable via backend scripts/API calls.

---

## Phase 4: User Story 2 - UI Option for Cross-Filter Recovery (Priority: P2)

**Goal**: As a user initiating a migration, I want a clear option to toggle cross-filter repair.

### Implementation for User Story 2

- [x] T012 [US2] Modify `backend/src/api/routes/migration.py` and `DashboardSelection` to accept a `fix_cross_filters` boolean (default: True).
- [x] T013 [US2] Modify the frontend `migration/+page.svelte` to include the "Исправить связи кросс-фильтрации" ("Fix cross-filter links") checkbox and pass it in the API POST payload.
- [x] T014 [US2] Add the checkbox to the Migration launch modal in the Frontend component (assuming `frontend/src/components/MigrationModal.svelte` or similar).

**Checkpoint**: The user controls whether cross-filter patching occurs; User Stories 1 and 2 both work end-to-end.

---

## Phase 5: User Story 3 - Settings UI and Mapping Table (Priority: P2)

**Goal**: As a system administrator, I want to configure the synchronization schedule via cron and view the current mapping table in the Settings UI.

### Implementation for User Story 3

- [x] T015 [P] [US3] Implement `GET /api/v1/migration/settings` and `PUT /api/v1/migration/settings` for cron in `backend/src/api/routes/migration.py`
- [x] T016 [P] [US3] Implement the `GET /api/v1/migration/mappings-data` endpoint for the table in `backend/src/api/routes/migration.py`
- [x] T017 [US3] Create the Mapping UI directly in the Frontend `frontend/src/routes/settings/+page.svelte`
- [x] T018 [US3] Add the cron configuration input and Mapping Table to the Settings page in `frontend/src/routes/settings/+page.svelte`
- [x] T019 [US3] Ensure the `IdMappingService` scheduler can be restarted/reconfigured.

**Checkpoint**: At this point, User Story 3 should be fully functional and the background sync interval can be tuned visually.

---

## Phase N: Polish & Cross-Cutting Concerns

**Purpose**: Improvements that affect multiple user stories

- [ ] T020 Verify error handling if the "Technical Import" step fails.
- [ ] T021 Add debug logging using Molecular Topology (`[EXPLORE]`, `[REASON]`, `[REFLECT]`) to the mapping and patching processes.

---

## Dependencies & Execution Order

### Phase Dependencies

- **Setup (Phase 1)**: No dependencies - can start immediately
- **Foundational (Phase 2)**: Depends on Setup completion - BLOCKS all user stories
- **User Stories (Phase 3+)**: All depend on Foundational phase completion
- **Polish (Final Phase)**: Depends on all desired user stories being complete

### User Story Dependencies

- **User Story 1 (P1)**: Can start after Foundational (Phase 2)
- **User Story 2 (P2)**: Can run in parallel with US1 backend work.
- **User Story 3 (P2)**: Depends on Foundational (Phase 2), specifically the `ResourceMapping` table being populated, but can run parallel to US1/US2.

### Parallel Opportunities

- Frontend UI work (T014, T017, T018) can be done in parallel with backend engine work (T008-T011, T015-T016)
- API route updates can be done in parallel.