diff --git a/CHANGELOG.md b/CHANGELOG.md index 98ccd54..8feecd6 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -9,6 +9,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0 ### Added
+- **Packages commands** (PRD §6.3, PRD-v2 §P1.8, task 27): nine command handlers wired to the new `PackageRepository` and the existing `CredentialStore` via a `with_package_repo` builder on the `CommandBus`. `create_package(name, source_type, folder_path?)` generates a UUID v4 id, validates the trimmed name is non-empty, persists the aggregate and emits `DomainEvent::PackageCreated`. `update_package(id, PackagePatch)` applies a partial mutation (rename / folder / priority / auto_extract) — `folder_path` accepts `Some(Some(path))` to set, `Some(None)` to clear, `None` to leave untouched, so the frontend can distinguish "set to empty" from "unchanged". `delete_package(id, delete_downloads)` runs in two cascade modes: `false` (default) detaches every member via `PackageRepository::detach_download` so the downloads survive as standalone rows, while `true` removes each member through the existing `RemoveDownloadCommand` (deletes engine state, files, and the SQLite row) before dropping the package row; the keyring entry under `vortex.package.` is best-effort cleaned in both cases. `set_package_password(id, Option<String>)` stores the secret in the OS keyring via `CredentialStore::store("vortex.package.", …)` and only writes the keyring service key (never the plaintext) onto the `packages.password` SQLite column as a marker; passing `None` clears both the keyring entry and the marker idempotently, and an explicit empty string is rejected as a validation error so callers cannot ambiguously "clear by emptying".
`set_package_priority(id, priority)` validates the value through the domain `Priority` aggregate up-front (so a bad input never produces partial cascade state), persists the new value on the package row and then loops through every member returned by `list_downloads` to update each download's `priority` and emit a `DownloadPrioritySet` event per child — dangling FK members (download row missing) are skipped with a debug log instead of aborting the cascade. `move_package_to_folder(id, new_folder)` updates the package row's `folder_path` and re-uses task 13's `ChangeDirectoryCommand` for each member; per-child failures are collected into a `PackageMoveOutcome { moved, failed }` and surfaced to the frontend so partial failures don't roll back the package update. `toggle_package_auto_extract(id)` flips the flag and returns the new state. `add_download_to_package(package_id, download_id)` and `remove_download_from_package(package_id, download_id)` set / clear the FK on `downloads.package_id` via the new `attach_download` / `detach_download` trait methods; both validate the package exists first so the IPC layer surfaces a clean `NotFound` for stale callers, and `attach_download` also requires the download to exist (re-attaching is idempotent). The `PackageRepository` trait gains `attach_download(&PackageId, DownloadId) -> Result<(), DomainError>` (returns `NotFound` when the download row is missing) and `detach_download(DownloadId) -> Result<(), DomainError>` (idempotent, no-op on missing row); the `SqlitePackageRepo` adapter implements both via raw `UPDATE downloads SET package_id = ? WHERE id = ?` so the FK singleton semantics match the existing `ON DELETE SET NULL` migration. 
Two new `DomainEvent` variants — `PackageUpdated { id }` (rename / folder / priority / password / auto_extract / membership change) and `PackageDeleted { id, delete_downloads }` (the flag mirrors the command so subscribers distinguish "package detached, downloads kept" from "everything gone" without re-reading the repo) — are forwarded by the Tauri bridge as `package-updated` and `package-deleted` (camelCase `deleteDownloads`). Nine Tauri IPC commands (`package_create`, `package_update`, `package_delete`, `package_set_password`, `package_set_priority`, `package_move_to_folder`, `package_toggle_auto_extract`, `package_add_download`, `package_remove_download`) are registered in `invoke_handler!` and re-exported from `lib.rs`, with a new `PackagePatchDto` deserialiser whose `folder_path: Option<Option<String>>` round-trips the three-state semantics from the frontend. The runtime now wires `SqlitePackageRepo` into the `CommandBus` via `with_package_repo`. Forty-three new unit tests against `InMemoryPackageRepo` / `InMemoryDownloadRepo` / `InMemoryCredentialStore` mocks cover every acceptance criterion: CRUD round-trip, cascade-delete vs detach, keyring-only password storage (the `packages.password` column never holds the plaintext), per-child `DownloadPrioritySet` cascade with an explicit count assertion, partial-failure outcome shape on bulk move, idempotent attach/detach, dangling-FK skip on the priority cascade, and validation paths for blank names, empty-string passwords, invalid priorities, missing repos, and unknown ids. Adapter coverage hovers at 95-99 % per file (well above the 85 % threshold). Five SQLite-level tests pin the new attach/detach semantics on a real in-memory DB. Unblocks task 29 (React Packages view).
- **Packages persistence** (PRD §6.3, PRD-v2 §P1.7, task 26): SQLite `packages` table (migration `m20260429_000007`) with the schema mandated by PRD-v2 §8 P1 — `id TEXT PRIMARY KEY`, `name`, `source_type` (`container` / `playlist` / `manual` / `split_archive`), nullable `folder_path`, nullable `password` (keyring ref), `auto_extract` (default `1`), `priority` (default `5`), `created_at`. The legacy stub `packages` table from migration 1 (BIGINT id, name only, never wired) is dropped and recreated. The migration also adds `downloads.package_id TEXT REFERENCES packages(id) ON DELETE SET NULL` plus the `idx_downloads_package` index, so deleting a package detaches its members without losing the rows. New `PackageRepository` driven port (`save` / `find_by_id` / `list` / `delete` / `list_downloads`) and `SqlitePackageRepo` adapter with sea-orm entity + `from_domain` / `into_domain` converters. Upserts preserve the original `created_at` so list ordering stays stable across re-saves; `list` orders by `(created_at asc, id asc)`; `list_downloads` orders by `queue_position asc, id asc` so the caller surfaces members in scheduling order. Domain `Package` aggregate gained the new persisted fields plus a `PackageId(String)` typed wrapper and a `PackageSourceType` enum (round-trips via `Display` / `FromStr`); `download_ids` stays in-memory (the FK on `downloads.package_id` is the source of truth on disk). `DomainEvent::PackageCreated.id` switches from `u64` to `PackageId` to match. Twenty-one new unit tests cover the four acceptance criteria (fresh + existing-DB migration, FK `ON DELETE SET NULL` semantics, full-field round-trip, ≥85 % adapter coverage), plus error paths (unknown `source_type`, priority overflow, `created_at` overflow), source-type round-trip per variant, optional fields persisting as `NULL`, `list_downloads` filtering and ordering, and the `InMemoryPackageRepository` mock used by future command / query handlers. 
Unblocks tasks 27 (Packages commands), 28 (Packages queries), 30 (playlist auto-grouping) and 31 (split-archive auto-grouping).
- **Account rotation on quota** (PRD §6.4, PRD-v2 §P1.6, task 25): new `AccountRotator` application service detects quota exhaustion (HTTP `429` or `traffic_left` below a caller-supplied threshold via `is_quota_signal`), pulls the offending account out of rotation for a hoster-specific cooldown via `mark_exhausted(account_id, service_name, ttl_secs)`, and asks the existing `AccountSelector` for the next best candidate via `next_account(service, strategy) -> NextAccountOutcome`. The outcome enum distinguishes three caller-actionable states: `Picked(Account)` (use the credential), `NoneAvailable` (no enabled / non-expired account configured — fall back to the free path or surface a UI hint), and `AllExhausted { next_eligible_at_ms }` (every eligible account is on cooldown — stall the download in `Waiting` until the earliest deadline so the scheduler can retry without busy-looping). `NextAccountOutcome::error_message(service_name)` returns the PRD §6.4 standard wording (`"All accounts exhausted for {service}"` / `"No account available for {service}"`) so callers attaching the error to `Download.error` stay uniform across hosters. Cooldown lifecycle: `record_traffic_refresh(account_id, traffic_left, threshold)` clears the marker only when the upstream confirms `traffic_left >= threshold` (a `None` observation or below-threshold value leaves the marker in place so a hoster without a traffic counter cannot silently undo every `mark_exhausted`); `clear_exhausted(account_id)` is the explicit reset path, idempotent for unknown ids; expired entries are pruned lazily on the next `next_account` call so no background sweeper is needed.
The exhaustion map sits behind a `std::sync::Mutex` in `AccountRotator` (intentionally NOT persisted in SQLite — a process restart wipes the cooldown, which is the desired behaviour for the 5-to-15-minute hoster reset window); a poisoned mutex surfaces as `AppError::Validation("exhausted accounts mutex poisoned")` so callers can distinguish "no candidate" from "internal state corrupted", matching `AccountSelector::pick_round_robin`'s contract. The `AllExhausted` deadline computation scans only accounts that actually belong to the queried service, so a parallel-service entry cannot leak its cooldown into an unrelated answer. New `AccountSelector::select_best_excluding(service, strategy, exclude_ids)` extends the existing `select_best` with an exclude list (no caching, no behaviour change for empty `exclude`); the prior signature is now a thin wrapper. New `DomainEvent::AccountExhausted { id, service_name, exhausted_until_ms }` forwarded by the Tauri bridge as `account-exhausted` (camelCase `exhaustedUntilMs`). New transient `Account::exhausted_until: Option<u64>` field with `mark_exhausted` / `clear_exhausted` / `is_exhausted(now_ms)` / `exhausted_until()` methods — the field is reset to `None` by `Account::reconstruct` so the rotator's in-memory map remains the single source of truth even though SQLite roundtrips drop the marker. New `CommandBus::with_account_rotator` / `account_rotator()` builder & accessor wire the rotator alongside the existing `AccountSelector`.
Twenty-two new unit tests cover the four acceptance criteria (`429 → next account`, `all exhausted → AllExhausted with earliest deadline`, `traffic-refresh clears cooldown when above threshold`, full rotator + selector-exclude integration), plus edge cases: zero-TTL no-op, deadline-exclusive equality, cross-service deadline isolation, `None`-traffic refresh keeps cooldown, `404` / `500` ignored by `is_quota_signal`, threshold-equality below-but-not-above, idempotent `clear_exhausted`, lazy cooldown expiry surfaces an account back into rotation. Unblocks task 38 (vortex-mod-1fichier free + premium), which is the first hoster to wire the rotation flow.
- **Account auto-selection** (PRD §6.4, PRD-v2 §P1.5, task 24): new `AccountSelector` application service picks the best `Account` per service for the live `AppConfig::account_selection_strategy`. Three strategies: `BestTraffic` (default, ranks `enabled → not expired → most traffic_left → most recent last_validated → smallest id` with `Unlimited` traffic ranking above any finite value), `RoundRobin` (per-service cursor over enabled non-expired candidates ordered by id; a poisoned cursor mutex now surfaces as `AppError::Validation("round-robin cursor mutex poisoned")` so it stays distinguishable from "no eligible account"), and `Manual` (fallback alias of `BestTraffic` until pinning UI lands). The selector reads `AccountRepository::list_by_service` on every call instead of caching: the previous event-driven invalidation could read stale rows when `select_best` landed between `bus.publish(AccountUpdated)` and the spawned `TokioEventBus` subscriber firing. New `CommandBus::resolve_account_for(service_name)` exposes the selector to download / link-grabber flows; failures from `ConfigStore::get_config()` propagate via `?` instead of being swallowed by a default-strategy fallback.
New `DomainEvent::NoAccountAvailable { service_name }` (emitted when no candidate passes the filter) and `DomainEvent::AccountSelected { id, service_name, strategy }` (emitted whenever a pick is made), both forwarded by the Tauri bridge as `no-account-available` / `account-selected`. New `account_selection_strategy` field on `AppConfig` / `ConfigPatch` / `apply_patch` plus the matching IPC and TOML serialisation paths (snake_case `"best_traffic" | "round_robin" | "manual"`). The IPC layer rejects unknown strategy values: `ConfigPatchDto` → `ConfigPatch` is `TryFrom` and `settings_update` surfaces `invalid account selection strategy: …` instead of silently ignoring a typo. The TOML store mirrors the rule: `ConfigDto` → `AppConfig` is also `TryFrom`, so a hand-edited `config.toml` carrying an unknown strategy value now fails fast with `StorageError("invalid config: …")` instead of silently coercing to `best_traffic`. Backward compat is preserved: a legacy `config.toml` written before this field existed deserializes the missing key as the empty string via `#[serde(default)]`, and that empty case is treated as `BestTraffic` so an upgrade does not break startup. Eighteen unit tests cover the four acceptance criteria (3-account scenario, all-expired surface, comparative ranking table, round-robin alternation), repo-fresh selection, poisoned-cursor surfacing, IPC rejection of unknown strategies, TOML-store rejection of unknown persisted strategies, legacy-config (missing strategy field) backward compat, and config-error propagation. Unblocks task 25 (automatic rotation on quota).
diff --git a/src-tauri/src/adapters/driven/event/tauri_bridge.rs b/src-tauri/src/adapters/driven/event/tauri_bridge.rs index d3dc3d1..ce3d0b7 100644 --- a/src-tauri/src/adapters/driven/event/tauri_bridge.rs +++ b/src-tauri/src/adapters/driven/event/tauri_bridge.rs @@ -54,6 +54,8 @@ fn event_name(event: &DomainEvent) -> &'static str { DomainEvent::PluginLoaded { ..
} => "plugin-loaded", DomainEvent::PluginUnloaded { .. } => "plugin-unloaded", DomainEvent::PackageCreated { .. } => "package-created", + DomainEvent::PackageUpdated { .. } => "package-updated", + DomainEvent::PackageDeleted { .. } => "package-deleted", DomainEvent::ClipboardUrlDetected { .. } => "clipboard-url-detected", DomainEvent::SettingsUpdated => "settings-updated", DomainEvent::ChecksumVerified { .. } => "checksum-verified", @@ -146,6 +148,11 @@ fn event_payload(event: &DomainEvent) -> serde_json::Value { } DomainEvent::PluginUnloaded { name } => json!({ "name": name }), DomainEvent::PackageCreated { id, name } => json!({ "id": id.to_string(), "name": name }), + DomainEvent::PackageUpdated { id } => json!({ "id": id.to_string() }), + DomainEvent::PackageDeleted { + id, + delete_downloads, + } => json!({ "id": id.to_string(), "deleteDownloads": delete_downloads }), DomainEvent::ClipboardUrlDetected { urls } => json!({ "urls": urls }), DomainEvent::SettingsUpdated => json!({}), DomainEvent::ChecksumVerified { diff --git a/src-tauri/src/adapters/driven/logging/download_log_bridge.rs b/src-tauri/src/adapters/driven/logging/download_log_bridge.rs index 955007e..09e3f46 100644 --- a/src-tauri/src/adapters/driven/logging/download_log_bridge.rs +++ b/src-tauri/src/adapters/driven/logging/download_log_bridge.rs @@ -129,6 +129,8 @@ fn record_download_event(store: &DownloadLogStore, event: &DomainEvent) { | DomainEvent::PluginLoaded { .. } | DomainEvent::PluginUnloaded { .. } | DomainEvent::PackageCreated { .. } + | DomainEvent::PackageUpdated { .. } + | DomainEvent::PackageDeleted { .. } | DomainEvent::ClipboardUrlDetected { .. } | DomainEvent::SettingsUpdated | DomainEvent::AccountAdded { .. 
} diff --git a/src-tauri/src/adapters/driven/sqlite/package_repo.rs b/src-tauri/src/adapters/driven/sqlite/package_repo.rs index 23b47d1..066c80d 100644 --- a/src-tauri/src/adapters/driven/sqlite/package_repo.rs +++ b/src-tauri/src/adapters/driven/sqlite/package_repo.rs @@ -115,6 +115,79 @@ impl PackageRepository for SqlitePackageRepo { .collect() }) } + + fn attach_download( + &self, + package_id: &PackageId, + download_id: DownloadId, + ) -> Result<(), DomainError> { + use sea_orm::{ConnectionTrait, Statement}; + + let pkg_id = package_id.as_str().to_string(); + let dl_id = download_id.0 as i64; + block_on(async { + let result = self + .db + .execute(Statement::from_sql_and_values( + sea_orm::DatabaseBackend::Sqlite, + "UPDATE downloads SET package_id = ? WHERE id = ?", + [pkg_id.into(), dl_id.into()], + )) + .await + .map_err(map_db_err)?; + if result.rows_affected() == 0 { + return Err(DomainError::NotFound(format!( + "Download {} not found", + download_id.0 + ))); + } + Ok(()) + }) + } + + fn detach_download(&self, download_id: DownloadId) -> Result<(), DomainError> { + use sea_orm::{ConnectionTrait, Statement}; + + let dl_id = download_id.0 as i64; + block_on(async { + self.db + .execute(Statement::from_sql_and_values( + sea_orm::DatabaseBackend::Sqlite, + "UPDATE downloads SET package_id = NULL WHERE id = ?", + [dl_id.into()], + )) + .await + .map_err(map_db_err)?; + Ok(()) + }) + } + + fn find_package_of_download( + &self, + download_id: DownloadId, + ) -> Result<Option<PackageId>, DomainError> { + use sea_orm::{ConnectionTrait, Statement}; + + let dl_id = download_id.0 as i64; + block_on(async { + let row = self + .db + .query_one(Statement::from_sql_and_values( + sea_orm::DatabaseBackend::Sqlite, + "SELECT package_id FROM downloads WHERE id = ?", + [dl_id.into()], + )) + .await + .map_err(map_db_err)?; + match row { + None => Ok(None), + Some(row) => { + let raw: Option<String> = row.try_get("", "package_id").map_err(map_db_err)?; + Ok(raw.map(PackageId::new)) + } + } + }) + } }
#[cfg(test)] @@ -505,6 +578,104 @@ mod tests { ); } + #[tokio::test(flavor = "multi_thread", worker_threads = 2)] + async fn test_attach_download_sets_package_id_on_existing_row() { + let db = setup_test_db().await.expect("test db"); + let repo = SqlitePackageRepo::new(db.clone()); + repo.save(&make_package("pkg-att", "Att", PackageSourceType::Manual)) + .expect("save"); + // Seed a free-standing download (no package). + insert_download_in_package(&db, 42, 0, None).await; + + repo.attach_download(&PackageId::new("pkg-att"), DownloadId(42)) + .expect("attach"); + + let members = repo.list_downloads(&PackageId::new("pkg-att")).unwrap(); + assert_eq!(members, vec![DownloadId(42)]); + } + + #[tokio::test(flavor = "multi_thread", worker_threads = 2)] + async fn test_attach_download_returns_not_found_when_download_missing() { + let db = setup_test_db().await.expect("test db"); + let repo = SqlitePackageRepo::new(db); + repo.save(&make_package("pkg-att", "Att", PackageSourceType::Manual)) + .expect("save"); + let err = repo + .attach_download(&PackageId::new("pkg-att"), DownloadId(999)) + .expect_err("missing download must error"); + assert!(matches!(err, DomainError::NotFound(_))); + } + + #[tokio::test(flavor = "multi_thread", worker_threads = 2)] + async fn test_attach_download_idempotent_when_already_attached() { + let db = setup_test_db().await.expect("test db"); + let repo = SqlitePackageRepo::new(db.clone()); + repo.save(&make_package("pkg-att", "Att", PackageSourceType::Manual)) + .expect("save"); + insert_download_in_package(&db, 7, 0, Some("pkg-att")).await; + repo.attach_download(&PackageId::new("pkg-att"), DownloadId(7)) + .expect("idempotent"); + let members = repo.list_downloads(&PackageId::new("pkg-att")).unwrap(); + assert_eq!(members, vec![DownloadId(7)]); + } + + #[tokio::test(flavor = "multi_thread", worker_threads = 2)] + async fn test_detach_download_clears_package_id() { + let db = setup_test_db().await.expect("test db"); + let repo = 
SqlitePackageRepo::new(db.clone()); + repo.save(&make_package("pkg-det", "Det", PackageSourceType::Manual)) + .expect("save"); + insert_download_in_package(&db, 11, 0, Some("pkg-det")).await; + repo.detach_download(DownloadId(11)).expect("detach"); + let members = repo.list_downloads(&PackageId::new("pkg-det")).unwrap(); + assert!(members.is_empty()); + } + + #[tokio::test(flavor = "multi_thread", worker_threads = 2)] + async fn test_detach_download_unknown_id_is_noop() { + let db = setup_test_db().await.expect("test db"); + let repo = SqlitePackageRepo::new(db); + // Unknown download id must succeed silently (idempotent). + repo.detach_download(DownloadId(9999)) + .expect("noop on unknown"); + } + + #[tokio::test(flavor = "multi_thread", worker_threads = 2)] + async fn test_find_package_of_download_returns_owner_when_attached() { + let db = setup_test_db().await.expect("test db"); + let repo = SqlitePackageRepo::new(db.clone()); + repo.save(&make_package("pkg-find", "F", PackageSourceType::Manual)) + .expect("save"); + insert_download_in_package(&db, 21, 0, Some("pkg-find")).await; + + let owner = repo + .find_package_of_download(DownloadId(21)) + .expect("query"); + assert_eq!(owner, Some(PackageId::new("pkg-find"))); + } + + #[tokio::test(flavor = "multi_thread", worker_threads = 2)] + async fn test_find_package_of_download_returns_none_when_loose() { + let db = setup_test_db().await.expect("test db"); + let repo = SqlitePackageRepo::new(db.clone()); + insert_download_in_package(&db, 22, 0, None).await; + + let owner = repo + .find_package_of_download(DownloadId(22)) + .expect("query"); + assert!(owner.is_none(), "loose download has no owning package"); + } + + #[tokio::test(flavor = "multi_thread", worker_threads = 2)] + async fn test_find_package_of_download_returns_none_when_row_missing() { + let db = setup_test_db().await.expect("test db"); + let repo = SqlitePackageRepo::new(db); + let owner = repo + .find_package_of_download(DownloadId(404)) + 
.expect("query"); + assert!(owner.is_none(), "missing download row treated as loose"); + } + #[tokio::test(flavor = "multi_thread", worker_threads = 2)] async fn test_optional_fields_persist_as_null_when_unset() { let db = setup_test_db().await.expect("test db"); diff --git a/src-tauri/src/adapters/driving/tauri_ipc.rs b/src-tauri/src/adapters/driving/tauri_ipc.rs index 635eff7..ebd9df5 100644 --- a/src-tauri/src/adapters/driving/tauri_ipc.rs +++ b/src-tauri/src/adapters/driving/tauri_ipc.rs @@ -13,19 +13,22 @@ use crate::adapters::driven::logging::download_log_store::DownloadLogStore; use crate::application::command_bus::CommandBus; use crate::application::commands::store_install::{StoreInstallCommand, StoreUpdateCommand}; use crate::application::commands::{ - AccountPatch, AddAccountCommand, CancelDownloadCommand, ChangeDirectoryBulkCommand, - ChangeDirectoryBulkOutcome, ChangeDirectoryCommand, ChangeDirectoryFailure, - ClearDownloadsByStateCommand, ClearHistoryCommand, DeleteAccountCommand, - DeleteHistoryEntryCommand, DisablePluginCommand, EnablePluginCommand, ExportAccountsCommand, - ExportAccountsOutcome, ExportHistoryCommand, ExportHistoryFormat, ImportAccountsCommand, - ImportAccountsOutcome, InstallPluginCommand, MoveToBottomCommand, MoveToTopCommand, - OpenDownloadFileCommand, OpenDownloadFolderCommand, PauseAllDownloadsCommand, - PauseDownloadCommand, PurgeHistoryCommand, RedownloadCommand, RedownloadSource, - RemoveDownloadCommand, ReorderQueueCommand, ReportBrokenPluginCommand, ResolveLinksCommand, - ResolvedLinkDto, ResumeAllDownloadsCommand, ResumeDownloadCommand, RetryDownloadCommand, - SetPriorityCommand, StartDownloadCommand, UninstallPluginCommand, UpdateAccountCommand, - UpdateConfigCommand, UpdatePluginConfigCommand, ValidateAccountCommand, ValidationOutcomeDto, - VerifyChecksumCommand, VerifyChecksumOutcome, + AccountPatch, AddAccountCommand, AddDownloadToPackageCommand, CancelDownloadCommand, + ChangeDirectoryBulkCommand, 
ChangeDirectoryBulkOutcome, ChangeDirectoryCommand, + ChangeDirectoryFailure, ClearDownloadsByStateCommand, ClearHistoryCommand, + CreatePackageCommand, DeleteAccountCommand, DeleteHistoryEntryCommand, DeletePackageCommand, + DisablePluginCommand, EnablePluginCommand, ExportAccountsCommand, ExportAccountsOutcome, + ExportHistoryCommand, ExportHistoryFormat, ImportAccountsCommand, ImportAccountsOutcome, + InstallPluginCommand, MovePackageToFolderCommand, MoveToBottomCommand, MoveToTopCommand, + OpenDownloadFileCommand, OpenDownloadFolderCommand, PackageMoveOutcome, PackagePatch, + PauseAllDownloadsCommand, PauseDownloadCommand, PurgeHistoryCommand, RedownloadCommand, + RedownloadSource, RemoveDownloadCommand, RemoveDownloadFromPackageCommand, ReorderQueueCommand, + ReportBrokenPluginCommand, ResolveLinksCommand, ResolvedLinkDto, ResumeAllDownloadsCommand, + ResumeDownloadCommand, RetryDownloadCommand, SetPackagePasswordCommand, + SetPackagePriorityCommand, SetPriorityCommand, StartDownloadCommand, + TogglePackageAutoExtractCommand, UninstallPluginCommand, UpdateAccountCommand, + UpdateConfigCommand, UpdatePackageCommand, UpdatePluginConfigCommand, ValidateAccountCommand, + ValidationOutcomeDto, VerifyChecksumCommand, VerifyChecksumOutcome, }; use crate::application::error::AppError; use crate::application::queries::{ @@ -47,6 +50,7 @@ use crate::domain::error::DomainError; use crate::domain::model::account::{AccountId, AccountType}; use crate::domain::model::config::{AppConfig, ConfigPatch}; use crate::domain::model::download::{DownloadId, DownloadState}; +use crate::domain::model::package::{PackageId, PackageSourceType}; use crate::domain::model::views::{ DownloadFilter, HistoryFilter, HistorySort, HistorySortField, SortDirection, SortField, SortOrder, StatsPeriod, @@ -2872,6 +2876,199 @@ pub async fn account_traffic_get( .map_err(|e| e.to_string()) } +// ── Packages ──────────────────────────────────────────────────────── + +fn parse_package_source_type(raw: &str) 
-> Result<PackageSourceType, String> { + raw.parse::<PackageSourceType>().map_err(|e| e.to_string()) +} + +#[derive(Debug, Clone, Default, serde::Deserialize)] +#[serde(rename_all = "camelCase")] +pub struct PackagePatchDto { + pub name: Option<String>, + /// `Some(Some(_))` sets, `Some(None)` clears, `None` leaves unchanged. + /// We accept the inner value verbatim from the frontend so it can + /// distinguish "set to empty" from "unchanged". + pub folder_path: Option<Option<String>>, + pub priority: Option<u8>, + pub auto_extract: Option<bool>, +} + +impl PackagePatchDto { + fn into_domain(self) -> PackagePatch { + PackagePatch { + name: self.name, + folder_path: self.folder_path, + priority: self.priority, + auto_extract: self.auto_extract, + } + } +} + +#[derive(Debug, serde::Serialize)] +#[serde(rename_all = "camelCase")] +pub struct PackageMoveOutcomeDto { + pub moved: Vec<u64>, + pub failed: Vec<ChangeDirectoryFailure>, +} + +impl From<PackageMoveOutcome> for PackageMoveOutcomeDto { + fn from(o: PackageMoveOutcome) -> Self { + Self { + moved: o.moved.into_iter().map(|d| d.0).collect(), + failed: o.failed.into_iter().map(Into::into).collect(), + } + } +} + +#[tauri::command] +pub async fn package_create( + state: State<'_, AppState>, + name: String, + source_type: String, + folder_path: Option<String>, +) -> Result<String, String> { + let source_type = parse_package_source_type(&source_type)?; + state + .command_bus + .handle_create_package(CreatePackageCommand { + name, + source_type, + folder_path, + created_at_ms: now_unix_ms(), + }) + .await + .map(|id| id.as_str().to_string()) + .map_err(|e| e.to_string()) +} + +#[tauri::command] +pub async fn package_update( + state: State<'_, AppState>, + id: String, + patch: PackagePatchDto, +) -> Result<(), String> { + state + .command_bus + .handle_update_package(UpdatePackageCommand { + id: PackageId::new(id), + patch: patch.into_domain(), + }) + .await + .map_err(|e| e.to_string()) +} + +#[tauri::command] +pub async fn package_delete( + state: State<'_, AppState>, + id: String, + delete_downloads: bool, +) -> Result<(), String> { + state + .command_bus +
.handle_delete_package(DeletePackageCommand { + id: PackageId::new(id), + delete_downloads, + }) + .await + .map_err(|e| e.to_string()) +} + +#[tauri::command] +pub async fn package_set_password( + state: State<'_, AppState>, + id: String, + password: Option<String>, +) -> Result<(), String> { + state + .command_bus + .handle_set_package_password(SetPackagePasswordCommand { + id: PackageId::new(id), + password, + }) + .await + .map_err(|e| e.to_string()) +} + +#[tauri::command] +pub async fn package_set_priority( + state: State<'_, AppState>, + id: String, + priority: u8, +) -> Result<(), String> { + state + .command_bus + .handle_set_package_priority(SetPackagePriorityCommand { + id: PackageId::new(id), + priority, + }) + .await + .map_err(|e| e.to_string()) +} + +#[tauri::command] +pub async fn package_move_to_folder( + state: State<'_, AppState>, + id: String, + new_folder: String, +) -> Result<PackageMoveOutcomeDto, String> { + state + .command_bus + .handle_move_package_to_folder(MovePackageToFolderCommand { + id: PackageId::new(id), + new_folder: PathBuf::from(new_folder), + }) + .await + .map(PackageMoveOutcomeDto::from) + .map_err(|e| e.to_string()) +} + +#[tauri::command] +pub async fn package_toggle_auto_extract( + state: State<'_, AppState>, + id: String, +) -> Result<bool, String> { + state + .command_bus + .handle_toggle_package_auto_extract(TogglePackageAutoExtractCommand { + id: PackageId::new(id), + }) + .await + .map_err(|e| e.to_string()) +} + +#[tauri::command] +pub async fn package_add_download( + state: State<'_, AppState>, + package_id: String, + download_id: u64, +) -> Result<(), String> { + state + .command_bus + .handle_add_download_to_package(AddDownloadToPackageCommand { + package_id: PackageId::new(package_id), + download_id: DownloadId(download_id), + }) + .await + .map_err(|e| e.to_string()) +} + +#[tauri::command] +pub async fn package_remove_download( + state: State<'_, AppState>, + package_id: String, + download_id: u64, +) -> Result<(), String> { + state + .command_bus +
.handle_remove_download_from_package(RemoveDownloadFromPackageCommand { + package_id: PackageId::new(package_id), + download_id: DownloadId(download_id), + }) + .await + .map_err(|e| e.to_string()) +} + #[cfg(test)] mod tests { use super::{ diff --git a/src-tauri/src/application/command_bus.rs b/src-tauri/src/application/command_bus.rs index 99cf8ac..71eae86 100644 --- a/src-tauri/src/application/command_bus.rs +++ b/src-tauri/src/application/command_bus.rs @@ -10,7 +10,8 @@ use crate::domain::ports::driven::{ AccountCredentialStore, AccountRepository, AccountValidator, ArchiveExtractor, ChecksumComputer, ClipboardObserver, ConfigStore, CredentialStore, DownloadEngine, DownloadRepository, EventBus, FileOpener, FileStorage, HistoryRepository, HttpClient, - PassphraseCodec, PluginConfigStore, PluginLoader, PluginStoreClient, UrlOpener, + PackageRepository, PassphraseCodec, PluginConfigStore, PluginLoader, PluginStoreClient, + UrlOpener, }; /// Central dispatcher for CQRS commands. @@ -36,6 +37,7 @@ pub struct CommandBus { plugin_config_store: Option<Arc<dyn PluginConfigStore>>, account_repo: Option<Arc<dyn AccountRepository>>, account_credential_store: Option<Arc<dyn AccountCredentialStore>>, + package_repo: Option<Arc<dyn PackageRepository>>, account_validator: Option<Arc<dyn AccountValidator>>, account_selector: Option<Arc<AccountSelector>>, account_rotator: Option<Arc<AccountRotator>>, @@ -82,6 +84,7 @@ impl CommandBus { plugin_config_store: None, account_repo: None, account_credential_store: None, + package_repo: None, account_validator: None, account_selector: None, account_rotator: None, @@ -104,6 +107,18 @@ impl CommandBus { self } + /// Builder-style setter for the package write repository. Optional + /// so test fixtures that never invoke package commands don't have + /// to provide a mock. + pub fn with_package_repo(mut self, repo: Arc<dyn PackageRepository>) -> Self { + self.package_repo = Some(repo); + self + } + + pub fn package_repo(&self) -> Option<&dyn PackageRepository> { + self.package_repo.as_deref() + } + + /// Builder-style setter for the account-validation port (delegates + /// to the matching hoster / debrid plugin).
pub fn with_account_validator(mut self, validator: Arc<dyn AccountValidator>) -> Self { diff --git a/src-tauri/src/application/commands/add_download_to_package.rs b/src-tauri/src/application/commands/add_download_to_package.rs new file mode 100644 index 0000000..515cbdc --- /dev/null +++ b/src-tauri/src/application/commands/add_download_to_package.rs @@ -0,0 +1,294 @@ +//! Handler for [`AddDownloadToPackageCommand`](super::AddDownloadToPackageCommand). +//! +//! Verifies that both the package and the download exist, then sets +//! the FK on the download row via `PackageRepository::attach_download`. +//! Idempotent at the repo layer (re-attaching is a no-op). +//! +//! Reassignment (download already belongs to another package) is +//! supported — `attach_download` overwrites the FK. To keep event +//! consumers (counts, lists) consistent, both the source and the +//! destination package emit `PackageUpdated` so the source's listing +//! refreshes alongside the destination's. + +use crate::application::command_bus::CommandBus; +use crate::application::error::AppError; +use crate::domain::event::DomainEvent; + +impl CommandBus { + pub async fn handle_add_download_to_package( + &self, + cmd: super::AddDownloadToPackageCommand, + ) -> Result<(), AppError> { + let repo = self + .package_repo() + .ok_or_else(|| AppError::Validation("package repository not configured".into()))?; + + if repo.find_by_id(&cmd.package_id)?.is_none() { + return Err(AppError::NotFound(format!( + "Package {} not found", + cmd.package_id + ))); + } + if self.download_repo().find_by_id(cmd.download_id)?.is_none() { + return Err(AppError::NotFound(format!( + "Download {} not found", + cmd.download_id.0 + ))); + } + + let previous_owner = repo.find_package_of_download(cmd.download_id)?; + repo.attach_download(&cmd.package_id, cmd.download_id)?; + + if let Some(prev) = previous_owner.filter(|p| p != &cmd.package_id) { + self.event_bus() + .publish(DomainEvent::PackageUpdated { id: prev }); + }
self.event_bus().publish(DomainEvent::PackageUpdated { + id: cmd.package_id.clone(), + }); + Ok(()) + } +} + +#[cfg(test)] +mod tests { + use std::sync::Arc; + + use super::super::{AddDownloadToPackageCommand, CreatePackageCommand}; + use crate::application::commands::tests_support::{ + CapturingEventBus, InMemoryCredentialStore, InMemoryDownloadRepo, InMemoryPackageRepo, + build_package_bus, + }; + use crate::application::error::AppError; + use crate::domain::event::DomainEvent; + use crate::domain::model::download::{Download, DownloadId, Url}; + use crate::domain::model::package::{PackageId, PackageSourceType}; + use crate::domain::ports::driven::PackageRepository; + + fn make_download(id: u64) -> Download { + Download::new( + DownloadId(id), + Url::new("http://example.com/f.zip").unwrap(), + format!("file-{id}.zip"), + format!("/tmp/file-{id}.zip"), + ) + } + + #[tokio::test] + async fn test_add_download_to_package_attaches_member() { + let repo = Arc::new(InMemoryPackageRepo::new()); + let creds = Arc::new(InMemoryCredentialStore::new()); + let dl_repo = Arc::new(InMemoryDownloadRepo::new()); + let events = Arc::new(CapturingEventBus::new()); + let bus = build_package_bus(repo.clone(), creds, events.clone(), dl_repo.clone()); + let id = bus + .handle_create_package(CreatePackageCommand { + name: "P".into(), + source_type: PackageSourceType::Manual, + folder_path: None, + created_at_ms: 0, + }) + .await + .unwrap(); + dl_repo.seed(make_download(42)); + + bus.handle_add_download_to_package(AddDownloadToPackageCommand { + package_id: id.clone(), + download_id: DownloadId(42), + }) + .await + .expect("attach"); + + assert_eq!( + repo.list_downloads(&id).unwrap(), + vec![DownloadId(42)], + "member registered" + ); + assert!(events.snapshot().iter().any(|e| matches!( + e, + DomainEvent::PackageUpdated { id: x } if x == &id + ))); + } + + #[tokio::test] + async fn test_add_download_to_package_reassignment_emits_for_source_and_destination() { + let repo = 
Arc::new(InMemoryPackageRepo::new()); + let creds = Arc::new(InMemoryCredentialStore::new()); + let dl_repo = Arc::new(InMemoryDownloadRepo::new()); + let events = Arc::new(CapturingEventBus::new()); + let bus = build_package_bus(repo.clone(), creds, events.clone(), dl_repo.clone()); + let source = bus + .handle_create_package(CreatePackageCommand { + name: "Src".into(), + source_type: PackageSourceType::Manual, + folder_path: None, + created_at_ms: 0, + }) + .await + .unwrap(); + let destination = bus + .handle_create_package(CreatePackageCommand { + name: "Dst".into(), + source_type: PackageSourceType::Manual, + folder_path: None, + created_at_ms: 1, + }) + .await + .unwrap(); + dl_repo.seed(make_download(7)); + repo.attach_download(&source, DownloadId(7)).unwrap(); + + bus.handle_add_download_to_package(AddDownloadToPackageCommand { + package_id: destination.clone(), + download_id: DownloadId(7), + }) + .await + .expect("reassign"); + + // FK now points at destination, source bucket is empty. 
+ assert_eq!( + repo.list_downloads(&destination).unwrap(), + vec![DownloadId(7)] + ); + assert!(repo.list_downloads(&source).unwrap().is_empty()); + + let snap = events.snapshot(); + let updated_for = |target: &PackageId| { + snap.iter() + .filter(|e| matches!(e, DomainEvent::PackageUpdated { id } if id == target)) + .count() + }; + assert_eq!(updated_for(&source), 1, "source emits once on hand-off"); + assert_eq!( + updated_for(&destination), + 1, + "destination emits once on hand-off" + ); + } + + #[tokio::test] + async fn test_add_download_to_package_same_package_reattach_preserves_order() { + let repo = Arc::new(InMemoryPackageRepo::new()); + let creds = Arc::new(InMemoryCredentialStore::new()); + let dl_repo = Arc::new(InMemoryDownloadRepo::new()); + let events = Arc::new(CapturingEventBus::new()); + let bus = build_package_bus(repo.clone(), creds, events, dl_repo.clone()); + let id = bus + .handle_create_package(CreatePackageCommand { + name: "P".into(), + source_type: PackageSourceType::Manual, + folder_path: None, + created_at_ms: 0, + }) + .await + .unwrap(); + dl_repo.seed(make_download(1)); + dl_repo.seed(make_download(2)); + for n in [1u64, 2] { + bus.handle_add_download_to_package(AddDownloadToPackageCommand { + package_id: id.clone(), + download_id: DownloadId(n), + }) + .await + .unwrap(); + } + + // Re-attach the first member; the mock must not push it to the + // end of the bucket. Production SQLite is a no-op on + // `UPDATE downloads SET package_id = same`, the mock has to + // mirror that. 
+ bus.handle_add_download_to_package(AddDownloadToPackageCommand { + package_id: id.clone(), + download_id: DownloadId(1), + }) + .await + .unwrap(); + + assert_eq!( + repo.list_downloads(&id).unwrap(), + vec![DownloadId(1), DownloadId(2)], + "same-package reattach must not reorder existing members" + ); + } + + #[tokio::test] + async fn test_add_download_to_package_idempotent_does_not_double_emit() { + let repo = Arc::new(InMemoryPackageRepo::new()); + let creds = Arc::new(InMemoryCredentialStore::new()); + let dl_repo = Arc::new(InMemoryDownloadRepo::new()); + let events = Arc::new(CapturingEventBus::new()); + let bus = build_package_bus(repo.clone(), creds, events.clone(), dl_repo.clone()); + let id = bus + .handle_create_package(CreatePackageCommand { + name: "P".into(), + source_type: PackageSourceType::Manual, + folder_path: None, + created_at_ms: 0, + }) + .await + .unwrap(); + dl_repo.seed(make_download(11)); + + for _ in 0..2 { + bus.handle_add_download_to_package(AddDownloadToPackageCommand { + package_id: id.clone(), + download_id: DownloadId(11), + }) + .await + .unwrap(); + } + + // Same destination twice → no source emit (previous_owner == destination). 
+ let updates = events + .snapshot() + .iter() + .filter(|e| matches!(e, DomainEvent::PackageUpdated { id: x } if x == &id)) + .count(); + assert_eq!(updates, 2, "one PackageUpdated per call, never doubled"); + } + + #[tokio::test] + async fn test_add_download_to_package_unknown_package_rejected() { + let repo = Arc::new(InMemoryPackageRepo::new()); + let creds = Arc::new(InMemoryCredentialStore::new()); + let dl_repo = Arc::new(InMemoryDownloadRepo::new()); + let events = Arc::new(CapturingEventBus::new()); + let bus = build_package_bus(repo, creds, events, dl_repo.clone()); + dl_repo.seed(make_download(1)); + + let err = bus + .handle_add_download_to_package(AddDownloadToPackageCommand { + package_id: PackageId::new("ghost"), + download_id: DownloadId(1), + }) + .await + .expect_err("missing pkg"); + assert!(matches!(err, AppError::NotFound(_))); + } + + #[tokio::test] + async fn test_add_download_to_package_unknown_download_rejected() { + let repo = Arc::new(InMemoryPackageRepo::new()); + let creds = Arc::new(InMemoryCredentialStore::new()); + let dl_repo = Arc::new(InMemoryDownloadRepo::new()); + let events = Arc::new(CapturingEventBus::new()); + let bus = build_package_bus(repo.clone(), creds, events, dl_repo); + let id = bus + .handle_create_package(CreatePackageCommand { + name: "P".into(), + source_type: PackageSourceType::Manual, + folder_path: None, + created_at_ms: 0, + }) + .await + .unwrap(); + let err = bus + .handle_add_download_to_package(AddDownloadToPackageCommand { + package_id: id, + download_id: DownloadId(999), + }) + .await + .expect_err("missing dl"); + assert!(matches!(err, AppError::NotFound(_))); + } +} diff --git a/src-tauri/src/application/commands/create_package.rs b/src-tauri/src/application/commands/create_package.rs new file mode 100644 index 0000000..aeb47e6 --- /dev/null +++ b/src-tauri/src/application/commands/create_package.rs @@ -0,0 +1,154 @@ +//! Handler for [`CreatePackageCommand`](super::CreatePackageCommand). +//! +//! 
Generates a fresh [`PackageId`] (UUID v4), validates the inputs, +//! persists the aggregate via [`PackageRepository`], and emits +//! [`DomainEvent::PackageCreated`] on success. + +use uuid::Uuid; + +use crate::application::command_bus::CommandBus; +use crate::application::error::AppError; +use crate::domain::event::DomainEvent; +use crate::domain::model::package::{Package, PackageId}; + +impl CommandBus { + pub async fn handle_create_package( + &self, + cmd: super::CreatePackageCommand, + ) -> Result<PackageId, AppError> { + let name = cmd.name.trim(); + if name.is_empty() { + return Err(AppError::Validation( + "package name must not be empty".into(), + )); + } + let folder_path = cmd + .folder_path + .map(|p| p.trim().to_string()) + .filter(|p| !p.is_empty()); + + let repo = self + .package_repo() + .ok_or_else(|| AppError::Validation("package repository not configured".into()))?; + + let id = PackageId::new(Uuid::new_v4().to_string()); + let mut package = Package::new( + id.clone(), + name.to_string(), + cmd.source_type, + cmd.created_at_ms, + ); + if folder_path.is_some() { + package.set_folder_path(folder_path); + } + + repo.save(&package)?; + self.event_bus().publish(DomainEvent::PackageCreated { + id: id.clone(), + name: package.name().to_string(), + }); + Ok(id) + } +} + +#[cfg(test)] +mod tests { + use std::sync::Arc; + + use super::super::CreatePackageCommand; + use crate::application::commands::tests_support::{ + CapturingEventBus, InMemoryCredentialStore, InMemoryDownloadRepo, InMemoryPackageRepo, + build_package_bus, + }; + use crate::application::error::AppError; + use crate::domain::event::DomainEvent; + use crate::domain::model::package::PackageSourceType; + use crate::domain::ports::driven::PackageRepository; + + fn create_command(name: &str) -> CreatePackageCommand { + CreatePackageCommand { + name: name.into(), + source_type: PackageSourceType::Manual, + folder_path: None, + created_at_ms: 1_700_000_000_000, + } + } + + #[tokio::test] + async fn
test_create_package_persists_and_emits_event() { + let repo = Arc::new(InMemoryPackageRepo::new()); + let creds = Arc::new(InMemoryCredentialStore::new()); + let dl_repo = Arc::new(InMemoryDownloadRepo::new()); + let events = Arc::new(CapturingEventBus::new()); + let bus = build_package_bus(repo.clone(), creds, events.clone(), dl_repo); + + let id = bus + .handle_create_package(create_command("Holiday")) + .await + .expect("create ok"); + + let stored = repo.find_by_id(&id).unwrap().expect("present"); + assert_eq!(stored.name(), "Holiday"); + assert_eq!(stored.source_type(), PackageSourceType::Manual); + assert_eq!(stored.created_at(), 1_700_000_000_000); + + let snapshot = events.snapshot(); + assert!(snapshot.iter().any(|e| matches!( + e, + DomainEvent::PackageCreated { id: ev_id, name } if ev_id == &id && name == "Holiday" + ))); + } + + #[tokio::test] + async fn test_create_package_persists_folder_path_when_provided() { + let repo = Arc::new(InMemoryPackageRepo::new()); + let creds = Arc::new(InMemoryCredentialStore::new()); + let dl_repo = Arc::new(InMemoryDownloadRepo::new()); + let events = Arc::new(CapturingEventBus::new()); + let bus = build_package_bus(repo.clone(), creds, events, dl_repo); + + let id = bus + .handle_create_package(CreatePackageCommand { + name: "Vids".into(), + source_type: PackageSourceType::Playlist, + folder_path: Some("/srv/vids".into()), + created_at_ms: 0, + }) + .await + .expect("create ok"); + + let stored = repo.find_by_id(&id).unwrap().unwrap(); + assert_eq!(stored.folder_path(), Some("/srv/vids")); + } + + #[tokio::test] + async fn test_create_package_blank_name_rejected() { + let repo = Arc::new(InMemoryPackageRepo::new()); + let creds = Arc::new(InMemoryCredentialStore::new()); + let dl_repo = Arc::new(InMemoryDownloadRepo::new()); + let events = Arc::new(CapturingEventBus::new()); + let bus = build_package_bus(repo.clone(), creds, events.clone(), dl_repo); + + let err = bus + .handle_create_package(create_command(" ")) + 
.await + .expect_err("blank name rejected"); + assert!(matches!(err, AppError::Validation(_))); + assert!(repo.list().unwrap().is_empty()); + assert!(events.snapshot().is_empty()); + } + + #[tokio::test] + async fn test_create_package_without_repo_returns_validation() { + let creds = Arc::new(InMemoryCredentialStore::new()); + let events = Arc::new(CapturingEventBus::new()); + let bus = + crate::application::commands::tests_support::bus_without_account_ports(events.clone()); + let _ = creds; + let err = bus + .handle_create_package(create_command("X")) + .await + .expect_err("no repo"); + assert!(matches!(err, AppError::Validation(_))); + } +} diff --git a/src-tauri/src/application/commands/delete_package.rs b/src-tauri/src/application/commands/delete_package.rs new file mode 100644 index 0000000..e52a32b --- /dev/null +++ b/src-tauri/src/application/commands/delete_package.rs @@ -0,0 +1,213 @@ +//! Handler for [`DeletePackageCommand`](super::DeletePackageCommand). +//! +//! Two cleanup paths: +//! - `delete_downloads = false` (default): the FK on member downloads +//! is cleared (`detach_download` on each), then the package row is +//! removed. Downloads survive as detached entries. +//! - `delete_downloads = true`: every member download is removed via +//! the existing `RemoveDownloadCommand` (which deletes engine state, +//! files, and the SQLite row), then the package row is removed. +//! +//! In both cases the keyring entry for the package password is best- +//! effort cleaned. Failures are logged but never block the deletion — +//! the package metadata is the source of truth for "package exists". 
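The two cleanup paths the `delete_package` doc comment describes reduce to a single branch over a frozen member snapshot. A minimal, self-contained sketch of that decision, assuming an in-memory model (`Store`, `delete_detach`, and `delete_cascade` are illustrative names, not the real `PackageRepository` API):

```rust
use std::collections::HashMap;

// Illustrative in-memory stand-in for the package/download tables;
// not the production repository types.
#[derive(Default)]
struct Store {
    packages: HashMap<String, Vec<u64>>, // package id -> member download ids
    downloads: HashMap<u64, String>,     // download id -> destination path
}

impl Store {
    // delete_downloads = false: members are detached (FK cleared) and
    // survive as standalone downloads; only the package row goes away.
    fn delete_detach(&mut self, pkg: &str) {
        self.packages.remove(pkg);
    }

    // delete_downloads = true: every member download is removed first,
    // then the package row is dropped.
    fn delete_cascade(&mut self, pkg: &str) {
        if let Some(members) = self.packages.remove(pkg) {
            for id in members {
                self.downloads.remove(&id);
            }
        }
    }
}
```

Either way the package row is gone afterwards; only the fate of the member downloads differs, mirroring the handler's ordering (members are processed before `repo.delete`).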
+ +use super::set_package_password::package_credential_service_key; +use crate::application::command_bus::CommandBus; +use crate::application::error::AppError; +use crate::domain::event::DomainEvent; + +impl CommandBus { + pub async fn handle_delete_package( + &self, + cmd: super::DeletePackageCommand, + ) -> Result<(), AppError> { + let repo = self + .package_repo() + .ok_or_else(|| AppError::Validation("package repository not configured".into()))?; + + // Look up the existing aggregate and its members up front so the + // cascade decision works off a frozen snapshot. NotFound is + // surfaced as a hard error rather than silently no-op'd because + // double-delete is a UI bug, not a benign retry. + repo.find_by_id(&cmd.id)? + .ok_or_else(|| AppError::NotFound(format!("Package {} not found", cmd.id)))?; + let members = repo.list_downloads(&cmd.id)?; + + if cmd.delete_downloads { + for download_id in &members { + self.handle_remove_download(super::RemoveDownloadCommand { + id: *download_id, + delete_files: true, + }) + .await?; + } + } else { + for download_id in &members { + repo.detach_download(*download_id)?; + } + } + + repo.delete(&cmd.id)?; + + let key = package_credential_service_key(&cmd.id); + if let Err(e) = self.credential_store().delete(&key) { + tracing::warn!( + package_id = %cmd.id, + error = %e, + "failed to remove package keyring entry on delete" + ); + } + + self.event_bus().publish(DomainEvent::PackageDeleted { + id: cmd.id, + delete_downloads: cmd.delete_downloads, + }); + Ok(()) + } +} + +#[cfg(test)] +mod tests { + use std::sync::Arc; + + use super::super::{CreatePackageCommand, DeletePackageCommand, SetPackagePasswordCommand}; + use crate::application::commands::tests_support::{ + CapturingEventBus, InMemoryCredentialStore, InMemoryDownloadRepo, InMemoryPackageRepo, + build_package_bus, + }; + use crate::application::error::AppError; + use crate::domain::event::DomainEvent; + use crate::domain::model::download::{Download, DownloadId, Url}; + 
use crate::domain::model::package::{PackageId, PackageSourceType}; + use crate::domain::ports::driven::{DownloadRepository, PackageRepository}; + + fn make_download(id: u64) -> Download { + Download::new( + DownloadId(id), + Url::new("http://example.com/f.zip").unwrap(), + format!("file-{id}.zip"), + format!("/tmp/file-{id}.zip"), + ) + } + + async fn seed_package(bus: &crate::application::command_bus::CommandBus) -> PackageId { + bus.handle_create_package(CreatePackageCommand { + name: "P".into(), + source_type: PackageSourceType::Manual, + folder_path: None, + created_at_ms: 0, + }) + .await + .unwrap() + } + + #[tokio::test] + async fn test_delete_package_without_cascade_detaches_members_and_removes_row() { + let repo = Arc::new(InMemoryPackageRepo::new()); + let creds = Arc::new(InMemoryCredentialStore::new()); + let dl_repo = Arc::new(InMemoryDownloadRepo::new()); + let events = Arc::new(CapturingEventBus::new()); + let bus = build_package_bus(repo.clone(), creds, events.clone(), dl_repo.clone()); + let id = seed_package(&bus).await; + + dl_repo.seed(make_download(1)); + dl_repo.seed(make_download(2)); + repo.attach_download(&id, DownloadId(1)).unwrap(); + repo.attach_download(&id, DownloadId(2)).unwrap(); + + bus.handle_delete_package(DeletePackageCommand { + id: id.clone(), + delete_downloads: false, + }) + .await + .expect("delete"); + + assert!(repo.find_by_id(&id).unwrap().is_none()); + // Downloads keep existing. + assert!(dl_repo.find_by_id(DownloadId(1)).unwrap().is_some()); + assert!(dl_repo.find_by_id(DownloadId(2)).unwrap().is_some()); + // FK detached. 
+ assert!(repo.list_downloads(&id).unwrap().is_empty()); + assert!(events.snapshot().iter().any(|e| matches!( + e, + DomainEvent::PackageDeleted { id: x, delete_downloads: false } if x == &id + ))); + } + + #[tokio::test] + async fn test_delete_package_cascade_removes_member_downloads() { + let repo = Arc::new(InMemoryPackageRepo::new()); + let creds = Arc::new(InMemoryCredentialStore::new()); + let dl_repo = Arc::new(InMemoryDownloadRepo::new()); + let events = Arc::new(CapturingEventBus::new()); + let bus = build_package_bus(repo.clone(), creds, events.clone(), dl_repo.clone()); + let id = seed_package(&bus).await; + + dl_repo.seed(make_download(10)); + dl_repo.seed(make_download(20)); + repo.attach_download(&id, DownloadId(10)).unwrap(); + repo.attach_download(&id, DownloadId(20)).unwrap(); + + bus.handle_delete_package(DeletePackageCommand { + id: id.clone(), + delete_downloads: true, + }) + .await + .expect("delete cascade"); + + assert!(repo.find_by_id(&id).unwrap().is_none()); + assert!(dl_repo.find_by_id(DownloadId(10)).unwrap().is_none()); + assert!(dl_repo.find_by_id(DownloadId(20)).unwrap().is_none()); + assert!(events.snapshot().iter().any(|e| matches!( + e, + DomainEvent::PackageDeleted { + delete_downloads: true, + .. 
+ } + ))); + } + + #[tokio::test] + async fn test_delete_package_cleans_up_keyring_password() { + let repo = Arc::new(InMemoryPackageRepo::new()); + let creds = Arc::new(InMemoryCredentialStore::new()); + let dl_repo = Arc::new(InMemoryDownloadRepo::new()); + let events = Arc::new(CapturingEventBus::new()); + let bus = build_package_bus(repo, creds.clone(), events, dl_repo); + let id = seed_package(&bus).await; + bus.handle_set_package_password(SetPackagePasswordCommand { + id: id.clone(), + password: Some("k".into()), + }) + .await + .unwrap(); + assert_eq!(creds.entry_count(), 1); + + bus.handle_delete_package(DeletePackageCommand { + id, + delete_downloads: false, + }) + .await + .expect("delete"); + assert_eq!(creds.entry_count(), 0); + } + + #[tokio::test] + async fn test_delete_package_unknown_id_returns_not_found() { + let repo = Arc::new(InMemoryPackageRepo::new()); + let creds = Arc::new(InMemoryCredentialStore::new()); + let dl_repo = Arc::new(InMemoryDownloadRepo::new()); + let events = Arc::new(CapturingEventBus::new()); + let bus = build_package_bus(repo, creds, events, dl_repo); + + let err = bus + .handle_delete_package(DeletePackageCommand { + id: PackageId::new("ghost"), + delete_downloads: false, + }) + .await + .expect_err("missing"); + assert!(matches!(err, AppError::NotFound(_))); + } +} diff --git a/src-tauri/src/application/commands/mod.rs b/src-tauri/src/application/commands/mod.rs index a793b45..baae34f 100644 --- a/src-tauri/src/application/commands/mod.rs +++ b/src-tauri/src/application/commands/mod.rs @@ -7,16 +7,20 @@ mod tests_support; mod add_account; +mod add_download_to_package; mod cancel_download; mod change_directory; mod clear_downloads_by_state; +mod create_package; mod delete_account; mod delete_history; +mod delete_package; mod export_accounts; mod export_history; mod extract_archive; mod import_accounts; mod install_plugin; +mod move_package_to_folder; mod move_queue; mod open_download_file; mod open_download_folder; @@ 
-26,20 +30,25 @@ mod purge_history; mod redownload; mod register_local_file; mod remove_download; +mod remove_download_from_package; mod report_broken_plugin; mod resolve_links; mod resume_all; mod resume_download; mod retry_download; +mod set_package_password; +mod set_package_priority; mod set_priority; mod start_download; pub mod store_install; pub mod store_refresh; mod toggle_clipboard; +mod toggle_package_auto_extract; mod toggle_plugin; mod uninstall_plugin; mod update_account; mod update_config; +mod update_package; mod update_plugin_config; mod validate_account; mod verify_checksum; @@ -49,6 +58,7 @@ use std::path::PathBuf; use crate::domain::model::account::{AccountId, AccountType}; use crate::domain::model::config::ConfigPatch; use crate::domain::model::download::DownloadId; +use crate::domain::model::package::{PackageId, PackageSourceType}; use crate::domain::ports::driving::Command; #[derive(Debug)] @@ -468,6 +478,117 @@ pub struct ImportAccountsOutcome { pub skipped_duplicates: u32, } +// ── Packages ───────────────────────────────────────────────────────── + +/// Build and persist a fresh `Package` aggregate. The handler generates +/// a UUID v4 for the new id so callers don't have to coordinate ids +/// across processes. +#[derive(Debug, Clone)] +pub struct CreatePackageCommand { + pub name: String, + pub source_type: PackageSourceType, + pub folder_path: Option<String>, + pub created_at_ms: u64, +} +impl Command for CreatePackageCommand {} + +/// Partial-mutation payload for [`UpdatePackageCommand`]. Every field is +/// optional; absent values keep the persisted package unchanged. Use +/// [`SetPackagePasswordCommand`] to rotate the keyring secret — the +/// password column itself is not in this patch.
+#[derive(Debug, Clone, Default)] +pub struct PackagePatch { + pub name: Option<String>, + pub folder_path: Option<Option<String>>, + pub priority: Option<u8>, + pub auto_extract: Option<bool>, +} + +#[derive(Debug, Clone)] +pub struct UpdatePackageCommand { + pub id: PackageId, + pub patch: PackagePatch, +} +impl Command for UpdatePackageCommand {} + +/// Delete a package. When `delete_downloads` is `false`, the FK on +/// each member download is cleared (downloads survive); when `true`, +/// each member is removed via [`RemoveDownloadCommand`] before the +/// package row is dropped. +#[derive(Debug, Clone)] +pub struct DeletePackageCommand { + pub id: PackageId, + pub delete_downloads: bool, +} +impl Command for DeletePackageCommand {} + +/// Set or clear the archive password for a package. The secret is +/// persisted in the OS keyring; the SQLite column only stores the +/// keyring service key as a marker — never the plaintext password. +#[derive(Debug, Clone)] +pub struct SetPackagePasswordCommand { + pub id: PackageId, + /// `Some(secret)` rotates the keyring entry, `None` clears it. + pub password: Option<String>, +} +impl Command for SetPackagePasswordCommand {} + +/// Set the package's scheduling priority and propagate the value to +/// every member download. Each impacted download triggers a +/// `DownloadPrioritySet` event so the queue manager re-evaluates +/// scheduling immediately. +#[derive(Debug, Clone)] +pub struct SetPackagePriorityCommand { + pub id: PackageId, + pub priority: u8, +} +impl Command for SetPackagePriorityCommand {} + +/// Move every member download to `new_folder` and persist the new +/// folder path on the package itself. Re-uses the per-download move +/// logic so each child emits `DownloadDirectoryChanged`. +#[derive(Debug, Clone)] +pub struct MovePackageToFolderCommand { + pub id: PackageId, + pub new_folder: PathBuf, +} +impl Command for MovePackageToFolderCommand {} + +/// Toggle the package's `auto_extract` flag.
+#[derive(Debug, Clone)] +pub struct TogglePackageAutoExtractCommand { + pub id: PackageId, +} +impl Command for TogglePackageAutoExtractCommand {} + +/// Attach a download to a package (sets the FK on the download row). +/// Idempotent — re-attaching a download already in the package is a +/// no-op. +#[derive(Debug, Clone)] +pub struct AddDownloadToPackageCommand { + pub package_id: PackageId, + pub download_id: DownloadId, +} +impl Command for AddDownloadToPackageCommand {} + +/// Detach a download from a package (clears the FK on the download +/// row). Idempotent. +#[derive(Debug, Clone)] +pub struct RemoveDownloadFromPackageCommand { + pub package_id: PackageId, + pub download_id: DownloadId, +} +impl Command for RemoveDownloadFromPackageCommand {} + +/// Per-child move outcome surfaced by `move_package_to_folder` so the +/// frontend can show partial failures alongside successes (mirrors +/// `ChangeDirectoryBulkOutcome`). +#[derive(Debug, Clone, Default, PartialEq, Eq)] +pub struct PackageMoveOutcome { + pub moved: Vec<DownloadId>, + pub failed: Vec<ChangeDirectoryFailure>, +} + /// Register an already-downloaded local file as a Completed download. /// /// Used after `download_to_file` produces a merged file via yt-dlp. diff --git a/src-tauri/src/application/commands/move_package_to_folder.rs b/src-tauri/src/application/commands/move_package_to_folder.rs new file mode 100644 index 0000000..87c5c47 --- /dev/null +++ b/src-tauri/src/application/commands/move_package_to_folder.rs @@ -0,0 +1,227 @@ +//! Handler for [`MovePackageToFolderCommand`](super::MovePackageToFolderCommand). +//! +//! Walks the package's member downloads and re-uses the per-download +//! move logic (task 13's `ChangeDirectoryCommand`) for each one. The +//! package row's `folder_path` is updated to the new folder so future +//! members default to the same destination. +//! +//! Per-child failures are collected and returned as +//! [`PackageMoveOutcome`] so the frontend can surface partial success +//!
without aborting the whole package — same pattern as +//! `ChangeDirectoryBulkCommand`. + +use crate::application::command_bus::CommandBus; +use crate::application::error::AppError; +use crate::domain::event::DomainEvent; +use crate::domain::model::package::Package; + +use super::PackageMoveOutcome; +use super::change_directory::ChangeDirectoryFailure; + +impl CommandBus { + pub async fn handle_move_package_to_folder( + &self, + cmd: super::MovePackageToFolderCommand, + ) -> Result<PackageMoveOutcome, AppError> { + let repo = self + .package_repo() + .ok_or_else(|| AppError::Validation("package repository not configured".into()))?; + + let existing = repo + .find_by_id(&cmd.id)? + .ok_or_else(|| AppError::NotFound(format!("Package {} not found", cmd.id)))?; + + let new_folder_str = cmd.new_folder.to_string_lossy().to_string(); + if new_folder_str.trim().is_empty() { + return Err(AppError::Validation( + "destination folder must not be empty".into(), + )); + } + // Reject relative paths so a crafted IPC payload (e.g. "../") cannot + // walk outside the working directory before the per-download move + // routines run.
+ if !cmd.new_folder.is_absolute() { + return Err(AppError::Validation( + "destination folder must be an absolute path".into(), + )); + } + + let updated = Package::reconstruct( + existing.id().clone(), + existing.name().to_string(), + existing.source_type(), + Some(new_folder_str), + existing.password().map(str::to_string), + existing.auto_extract(), + existing.priority(), + existing.created_at(), + )?; + repo.save(&updated)?; + + let members = repo.list_downloads(&cmd.id)?; + let mut outcome = PackageMoveOutcome::default(); + for download_id in members { + match self + .handle_change_directory(super::ChangeDirectoryCommand { + id: download_id, + new_destination_dir: cmd.new_folder.clone(), + }) + .await + { + Ok(()) => outcome.moved.push(download_id), + Err(e) => outcome.failed.push(ChangeDirectoryFailure { + id: download_id, + message: e.to_string(), + }), + } + } + + self.event_bus() + .publish(DomainEvent::PackageUpdated { id: cmd.id.clone() }); + Ok(outcome) + } +} + +#[cfg(test)] +mod tests { + use std::path::PathBuf; + use std::sync::Arc; + + use super::super::{CreatePackageCommand, MovePackageToFolderCommand}; + use crate::application::commands::tests_support::{ + CapturingEventBus, InMemoryCredentialStore, InMemoryDownloadRepo, InMemoryPackageRepo, + build_package_bus, + }; + use crate::application::error::AppError; + use crate::domain::event::DomainEvent; + use crate::domain::model::download::{Download, DownloadId, Url}; + use crate::domain::model::package::{PackageId, PackageSourceType}; + use crate::domain::ports::driven::{DownloadRepository, PackageRepository}; + + fn make_download(id: u64) -> Download { + Download::new( + DownloadId(id), + Url::new("http://example.com/f.zip").unwrap(), + format!("file-{id}.zip"), + format!("/tmp/file-{id}.zip"), + ) + } + + async fn seed( + bus: &crate::application::command_bus::CommandBus, + repo: &Arc<InMemoryPackageRepo>, + dl_repo: &Arc<InMemoryDownloadRepo>, + members: &[u64], + ) -> PackageId { + let id = bus
.handle_create_package(CreatePackageCommand { + name: "Pkg".into(), + source_type: PackageSourceType::Manual, + folder_path: None, + created_at_ms: 0, + }) + .await + .unwrap(); + for d in members { + dl_repo.seed(make_download(*d)); + repo.attach_download(&id, DownloadId(*d)).unwrap(); + } + id + } + + #[tokio::test] + async fn test_move_package_updates_folder_and_each_download_path() { + let repo = Arc::new(InMemoryPackageRepo::new()); + let creds = Arc::new(InMemoryCredentialStore::new()); + let dl_repo = Arc::new(InMemoryDownloadRepo::new()); + let events = Arc::new(CapturingEventBus::new()); + let bus = build_package_bus(repo.clone(), creds, events.clone(), dl_repo.clone()); + let id = seed(&bus, &repo, &dl_repo, &[1, 2]).await; + + let outcome = bus + .handle_move_package_to_folder(MovePackageToFolderCommand { + id: id.clone(), + new_folder: PathBuf::from("/srv/new"), + }) + .await + .expect("move ok"); + assert_eq!(outcome.moved.len(), 2); + assert!(outcome.failed.is_empty()); + + let stored = repo.find_by_id(&id).unwrap().unwrap(); + assert_eq!(stored.folder_path(), Some("/srv/new")); + for i in [1u64, 2] { + let dl = dl_repo.find_by_id(DownloadId(i)).unwrap().unwrap(); + assert_eq!(dl.destination_path(), format!("/srv/new/file-{i}.zip")); + } + assert!(events.snapshot().iter().any(|e| matches!( + e, + DomainEvent::PackageUpdated { id: x } if x == &id + ))); + } + + #[tokio::test] + async fn test_move_package_empty_destination_rejected() { + let repo = Arc::new(InMemoryPackageRepo::new()); + let creds = Arc::new(InMemoryCredentialStore::new()); + let dl_repo = Arc::new(InMemoryDownloadRepo::new()); + let events = Arc::new(CapturingEventBus::new()); + let bus = build_package_bus(repo.clone(), creds, events, dl_repo.clone()); + let id = seed(&bus, &repo, &dl_repo, &[]).await; + + let err = bus + .handle_move_package_to_folder(MovePackageToFolderCommand { + id: id.clone(), + new_folder: PathBuf::from(""), + }) + .await + .expect_err("empty path rejected"); + 
assert!(matches!(err, AppError::Validation(_))); + let stored = repo.find_by_id(&id).unwrap().unwrap(); + assert!(stored.folder_path().is_none()); + } + + #[tokio::test] + async fn test_move_package_relative_path_rejected() { + let repo = Arc::new(InMemoryPackageRepo::new()); + let creds = Arc::new(InMemoryCredentialStore::new()); + let dl_repo = Arc::new(InMemoryDownloadRepo::new()); + let events = Arc::new(CapturingEventBus::new()); + let bus = build_package_bus(repo.clone(), creds, events, dl_repo.clone()); + let id = seed(&bus, &repo, &dl_repo, &[]).await; + + for relative in ["../escape", "./local", "relative/sub"] { + let err = bus + .handle_move_package_to_folder(MovePackageToFolderCommand { + id: id.clone(), + new_folder: PathBuf::from(relative), + }) + .await + .expect_err("relative rejected"); + assert!( + matches!(err, AppError::Validation(_)), + "expected validation error for {relative:?}" + ); + } + let stored = repo.find_by_id(&id).unwrap().unwrap(); + assert!(stored.folder_path().is_none()); + } + + #[tokio::test] + async fn test_move_package_unknown_id_returns_not_found() { + let repo = Arc::new(InMemoryPackageRepo::new()); + let creds = Arc::new(InMemoryCredentialStore::new()); + let dl_repo = Arc::new(InMemoryDownloadRepo::new()); + let events = Arc::new(CapturingEventBus::new()); + let bus = build_package_bus(repo, creds, events, dl_repo); + + let err = bus + .handle_move_package_to_folder(MovePackageToFolderCommand { + id: PackageId::new("ghost"), + new_folder: PathBuf::from("/x"), + }) + .await + .expect_err("missing"); + assert!(matches!(err, AppError::NotFound(_))); + } +} diff --git a/src-tauri/src/application/commands/remove_download_from_package.rs b/src-tauri/src/application/commands/remove_download_from_package.rs new file mode 100644 index 0000000..298393b --- /dev/null +++ b/src-tauri/src/application/commands/remove_download_from_package.rs @@ -0,0 +1,194 @@ +//! 
Handler for [`RemoveDownloadFromPackageCommand`](super::RemoveDownloadFromPackageCommand). +//! +//! Detaches the download from `cmd.package_id` only when the FK +//! actually points there. The FK is a singleton — package_id is either +//! set or NULL — so a stale `package_id` paired with a real +//! `download_id` could otherwise silently strip the download from a +//! different package. Idempotent for already-loose downloads (no-op, +//! no event); rejects the operation when the download belongs to a +//! different package so the UI can refresh its stale state. + +use crate::application::command_bus::CommandBus; +use crate::application::error::AppError; +use crate::domain::event::DomainEvent; + +impl CommandBus { + pub async fn handle_remove_download_from_package( + &self, + cmd: super::RemoveDownloadFromPackageCommand, + ) -> Result<(), AppError> { + let repo = self + .package_repo() + .ok_or_else(|| AppError::Validation("package repository not configured".into()))?; + + if repo.find_by_id(&cmd.package_id)?.is_none() { + return Err(AppError::NotFound(format!( + "Package {} not found", + cmd.package_id + ))); + } + + match repo.find_package_of_download(cmd.download_id)? 
{ + None => return Ok(()), + Some(owner) if owner != cmd.package_id => { + return Err(AppError::Validation(format!( + "Download {} is not a member of package {}", + cmd.download_id.0, cmd.package_id + ))); + } + Some(_) => {} + } + + repo.detach_download(cmd.download_id)?; + self.event_bus().publish(DomainEvent::PackageUpdated { + id: cmd.package_id.clone(), + }); + Ok(()) + } +} + +#[cfg(test)] +mod tests { + use std::sync::Arc; + + use super::super::{ + AddDownloadToPackageCommand, CreatePackageCommand, RemoveDownloadFromPackageCommand, + }; + use crate::application::commands::tests_support::{ + CapturingEventBus, InMemoryCredentialStore, InMemoryDownloadRepo, InMemoryPackageRepo, + build_package_bus, + }; + use crate::application::error::AppError; + use crate::domain::model::download::{Download, DownloadId, Url}; + use crate::domain::model::package::{PackageId, PackageSourceType}; + use crate::domain::ports::driven::PackageRepository; + + fn make_download(id: u64) -> Download { + Download::new( + DownloadId(id), + Url::new("http://example.com/f.zip").unwrap(), + format!("file-{id}.zip"), + format!("/tmp/file-{id}.zip"), + ) + } + + #[tokio::test] + async fn test_remove_download_from_package_detaches_member() { + let repo = Arc::new(InMemoryPackageRepo::new()); + let creds = Arc::new(InMemoryCredentialStore::new()); + let dl_repo = Arc::new(InMemoryDownloadRepo::new()); + let events = Arc::new(CapturingEventBus::new()); + let bus = build_package_bus(repo.clone(), creds, events, dl_repo.clone()); + let id = bus + .handle_create_package(CreatePackageCommand { + name: "P".into(), + source_type: PackageSourceType::Manual, + folder_path: None, + created_at_ms: 0, + }) + .await + .unwrap(); + dl_repo.seed(make_download(7)); + bus.handle_add_download_to_package(AddDownloadToPackageCommand { + package_id: id.clone(), + download_id: DownloadId(7), + }) + .await + .unwrap(); + + bus.handle_remove_download_from_package(RemoveDownloadFromPackageCommand { + package_id: 
id.clone(), + download_id: DownloadId(7), + }) + .await + .expect("detach"); + + assert!(repo.list_downloads(&id).unwrap().is_empty()); + } + + #[tokio::test] + async fn test_remove_download_from_package_idempotent_when_not_attached() { + let repo = Arc::new(InMemoryPackageRepo::new()); + let creds = Arc::new(InMemoryCredentialStore::new()); + let dl_repo = Arc::new(InMemoryDownloadRepo::new()); + let events = Arc::new(CapturingEventBus::new()); + let bus = build_package_bus(repo, creds, events, dl_repo); + let id = bus + .handle_create_package(CreatePackageCommand { + name: "P".into(), + source_type: PackageSourceType::Manual, + folder_path: None, + created_at_ms: 0, + }) + .await + .unwrap(); + bus.handle_remove_download_from_package(RemoveDownloadFromPackageCommand { + package_id: id, + download_id: DownloadId(404), + }) + .await + .expect("idempotent"); + } + + #[tokio::test] + async fn test_remove_download_from_package_rejected_when_member_of_other_package() { + let repo = Arc::new(InMemoryPackageRepo::new()); + let creds = Arc::new(InMemoryCredentialStore::new()); + let dl_repo = Arc::new(InMemoryDownloadRepo::new()); + let events = Arc::new(CapturingEventBus::new()); + let bus = build_package_bus(repo.clone(), creds, events, dl_repo.clone()); + let owning = bus + .handle_create_package(CreatePackageCommand { + name: "Owns".into(), + source_type: PackageSourceType::Manual, + folder_path: None, + created_at_ms: 0, + }) + .await + .unwrap(); + let other = bus + .handle_create_package(CreatePackageCommand { + name: "Other".into(), + source_type: PackageSourceType::Manual, + folder_path: None, + created_at_ms: 1, + }) + .await + .unwrap(); + dl_repo.seed(make_download(42)); + repo.attach_download(&owning, DownloadId(42)).unwrap(); + + let err = bus + .handle_remove_download_from_package(RemoveDownloadFromPackageCommand { + package_id: other.clone(), + download_id: DownloadId(42), + }) + .await + .expect_err("wrong package rejected"); + assert!(matches!(err, 
AppError::Validation(_))); + + // Membership untouched on the rightful owner. + assert_eq!( + repo.list_downloads(&owning).unwrap(), + vec![DownloadId(42)], + "download stays attached to its real owner" + ); + } + + #[tokio::test] + async fn test_remove_download_from_package_unknown_package_rejected() { + let repo = Arc::new(InMemoryPackageRepo::new()); + let creds = Arc::new(InMemoryCredentialStore::new()); + let dl_repo = Arc::new(InMemoryDownloadRepo::new()); + let events = Arc::new(CapturingEventBus::new()); + let bus = build_package_bus(repo, creds, events, dl_repo); + let err = bus + .handle_remove_download_from_package(RemoveDownloadFromPackageCommand { + package_id: PackageId::new("ghost"), + download_id: DownloadId(1), + }) + .await + .expect_err("missing pkg"); + assert!(matches!(err, AppError::NotFound(_))); + } +} diff --git a/src-tauri/src/application/commands/set_package_password.rs b/src-tauri/src/application/commands/set_package_password.rs new file mode 100644 index 0000000..05e49ae --- /dev/null +++ b/src-tauri/src/application/commands/set_package_password.rs @@ -0,0 +1,359 @@ +//! Handler for [`SetPackagePasswordCommand`](super::SetPackagePasswordCommand). +//! +//! The plaintext password is persisted in the OS keyring under the +//! convention `vortex.package.<id>` via [`CredentialStore`]. The +//! `packages.password` SQLite column only ever stores the keyring +//! service key as a marker — it never sees the password itself. +//! +//! `password = None` clears both the keyring entry and the marker. +//! Idempotent: clearing an already-empty entry is a no-op. +//! +//! Recovery on keyring failure: the marker is persisted first so a +//! crash between SQLite and the keyring leaves the DB consistent. If +//! the keyring write fails, the marker row is rolled back to its +//! previous value and any partial keyring entry is best-effort +//! cleared, so callers never observe a row claiming a secret that is +//! not actually there.
+ +use crate::application::command_bus::CommandBus; +use crate::application::error::AppError; +use crate::domain::event::DomainEvent; +use crate::domain::model::credential::Credential; +use crate::domain::model::package::{Package, PackageId}; + +/// Build the keyring service key for a package id. Centralised so the +/// IPC layer, the export/import flow and the keyring port use the same +/// scheme. +pub fn package_credential_service_key(id: &PackageId) -> String { + format!("vortex.package.{}", id.as_str()) +} + +impl CommandBus { + pub async fn handle_set_package_password( + &self, + cmd: super::SetPackagePasswordCommand, + ) -> Result<(), AppError> { + let repo = self + .package_repo() + .ok_or_else(|| AppError::Validation("package repository not configured".into()))?; + let credentials = self.credential_store(); + + let existing = repo + .find_by_id(&cmd.id)? + .ok_or_else(|| AppError::NotFound(format!("Package {} not found", cmd.id)))?; + + let key = package_credential_service_key(&cmd.id); + let marker = match cmd.password.as_deref() { + Some("") => { + return Err(AppError::Validation( + "package password must not be empty (pass None to clear)".into(), + )); + } + Some(_) => Some(key.clone()), + None => None, + }; + + // Capture the existing credential BEFORE persisting the new row so + // a keyring rotation failure can restore exactly what was there + // before. Without this snapshot the cleanup branch below would + // unconditionally `delete(&key)` and erase a previously valid + // secret on a transient backend error. + let previous_credential = credentials.get(&key)?; + + // Persist the marker BEFORE touching the keyring so a crash between + // the two writes leaves the DB consistent. The reverse order would + // leave an orphan keyring secret with no DB marker pointing at it. 
+ let updated = Package::reconstruct( + existing.id().clone(), + existing.name().to_string(), + existing.source_type(), + existing.folder_path().map(str::to_string), + marker, + existing.auto_extract(), + existing.priority(), + existing.created_at(), + )?; + repo.save(&updated)?; + + let keyring_op = match cmd.password.as_deref() { + Some(secret) => credentials.store(&key, &Credential::new(String::new(), secret)), + None => credentials.delete(&key), + }; + if let Err(e) = keyring_op { + // Roll the marker back so the row never claims a secret the + // keyring does not have. Mirrors `update_account`'s recovery + // path; both rollback and partial-write cleanup are best + // effort because the keyring backend may have side-effects we + // cannot undo. + if let Err(rollback_err) = repo.save(&existing) { + tracing::warn!( + package_id = %cmd.id, + keyring_error = %e, + rollback_error = %rollback_err, + "package marker rollback failed after keyring error; row metadata diverges from keyring" + ); + } + // Restore the prior keyring entry (or wipe if there was none) + // so a transient store failure cannot destroy an + // already-configured password while the command was rotating. 
+ let restore_result = match previous_credential { + Some(prev) => credentials.store(&key, &prev), + None => credentials.delete(&key), + }; + if let Err(restore_err) = restore_result { + tracing::warn!( + package_id = %cmd.id, + keyring_error = %e, + restore_error = %restore_err, + "keyring restore failed after rollback; keyring may hold a partially written secret" + ); + } + return Err(e.into()); + } + + self.event_bus() + .publish(DomainEvent::PackageUpdated { id: cmd.id.clone() }); + Ok(()) + } +} + +#[cfg(test)] +mod tests { + use std::sync::Arc; + + use super::super::{CreatePackageCommand, SetPackagePasswordCommand}; + use crate::application::commands::tests_support::{ + CapturingEventBus, InMemoryCredentialStore, InMemoryDownloadRepo, InMemoryPackageRepo, + build_package_bus, + }; + use crate::application::error::AppError; + use crate::domain::event::DomainEvent; + use crate::domain::model::package::{PackageId, PackageSourceType}; + use crate::domain::ports::driven::{CredentialStore, PackageRepository}; + + async fn seed(bus: &crate::application::command_bus::CommandBus) -> PackageId { + bus.handle_create_package(CreatePackageCommand { + name: "Pkg".into(), + source_type: PackageSourceType::Manual, + folder_path: None, + created_at_ms: 0, + }) + .await + .unwrap() + } + + #[tokio::test] + async fn test_set_package_password_persists_secret_in_keyring_only() { + let repo = Arc::new(InMemoryPackageRepo::new()); + let creds = Arc::new(InMemoryCredentialStore::new()); + let dl_repo = Arc::new(InMemoryDownloadRepo::new()); + let events = Arc::new(CapturingEventBus::new()); + let bus = build_package_bus(repo.clone(), creds.clone(), events.clone(), dl_repo); + let id = seed(&bus).await; + + bus.handle_set_package_password(SetPackagePasswordCommand { + id: id.clone(), + password: Some("s3cret".into()), + }) + .await + .expect("set ok"); + + let key = format!("vortex.package.{}", id.as_str()); + let secret = creds.get(&key).unwrap().expect("present"); + 
assert_eq!(secret.password(), "s3cret"); + + let stored = repo.find_by_id(&id).unwrap().unwrap(); + assert_eq!( + stored.password(), + Some(key.as_str()), + "package row holds the keyring service key marker, never the secret" + ); + assert!(events.snapshot().iter().any(|e| matches!( + e, + DomainEvent::PackageUpdated { id: x } if x == &id + ))); + } + + #[tokio::test] + async fn test_set_package_password_clear_removes_keyring_and_marker() { + let repo = Arc::new(InMemoryPackageRepo::new()); + let creds = Arc::new(InMemoryCredentialStore::new()); + let dl_repo = Arc::new(InMemoryDownloadRepo::new()); + let events = Arc::new(CapturingEventBus::new()); + let bus = build_package_bus(repo.clone(), creds.clone(), events, dl_repo); + let id = seed(&bus).await; + + bus.handle_set_package_password(SetPackagePasswordCommand { + id: id.clone(), + password: Some("x".into()), + }) + .await + .unwrap(); + bus.handle_set_package_password(SetPackagePasswordCommand { + id: id.clone(), + password: None, + }) + .await + .unwrap(); + + let stored = repo.find_by_id(&id).unwrap().unwrap(); + assert!(stored.password().is_none()); + assert_eq!(creds.entry_count(), 0); + } + + #[tokio::test] + async fn test_set_package_password_empty_string_is_validation_error() { + let repo = Arc::new(InMemoryPackageRepo::new()); + let creds = Arc::new(InMemoryCredentialStore::new()); + let dl_repo = Arc::new(InMemoryDownloadRepo::new()); + let events = Arc::new(CapturingEventBus::new()); + let bus = build_package_bus(repo.clone(), creds.clone(), events, dl_repo); + let id = seed(&bus).await; + + let err = bus + .handle_set_package_password(SetPackagePasswordCommand { + id: id.clone(), + password: Some(String::new()), + }) + .await + .expect_err("empty rejected"); + assert!(matches!(err, AppError::Validation(_))); + assert_eq!(creds.entry_count(), 0); + } + + #[tokio::test] + async fn test_set_package_password_keyring_failure_rolls_back_marker_on_set() { + let repo = Arc::new(InMemoryPackageRepo::new()); 
+ let creds = Arc::new(InMemoryCredentialStore::new()); + let dl_repo = Arc::new(InMemoryDownloadRepo::new()); + let events = Arc::new(CapturingEventBus::new()); + let bus = build_package_bus(repo.clone(), creds.clone(), events.clone(), dl_repo); + let id = seed(&bus).await; + + creds.set_store_fails(true); + let err = bus + .handle_set_package_password(SetPackagePasswordCommand { + id: id.clone(), + password: Some("never-lands".into()), + }) + .await + .expect_err("keyring fail surfaces"); + assert!(matches!(err, AppError::Domain(_))); + + // DB marker rolled back to None (the original state) and keyring + // is empty — no event emitted because the command failed. + let stored = repo.find_by_id(&id).unwrap().unwrap(); + assert!( + stored.password().is_none(), + "marker must roll back to original on keyring failure" + ); + assert_eq!(creds.entry_count(), 0); + assert!( + !events + .snapshot() + .iter() + .any(|e| matches!(e, DomainEvent::PackageUpdated { id: x } if x == &id)), + "no PackageUpdated emitted for a failed command" + ); + } + + #[tokio::test] + async fn test_set_package_password_failed_rotation_preserves_previous_secret() { + let repo = Arc::new(InMemoryPackageRepo::new()); + let creds = Arc::new(InMemoryCredentialStore::new()); + let dl_repo = Arc::new(InMemoryDownloadRepo::new()); + let events = Arc::new(CapturingEventBus::new()); + let bus = build_package_bus(repo.clone(), creds.clone(), events, dl_repo); + let id = seed(&bus).await; + bus.handle_set_package_password(SetPackagePasswordCommand { + id: id.clone(), + password: Some("original".into()), + }) + .await + .unwrap(); + let key = format!("vortex.package.{}", id.as_str()); + assert_eq!(creds.get(&key).unwrap().unwrap().password(), "original"); + + // Rotation fails partway: the new write errors but the previous + // secret must NOT be destroyed by the cleanup branch. The current + // marker should also be back to pointing at the (still valid) + // existing key. 
+ creds.set_store_fails(true); + let err = bus + .handle_set_package_password(SetPackagePasswordCommand { + id: id.clone(), + password: Some("rotated".into()), + }) + .await + .expect_err("rotate fail surfaces"); + assert!(matches!(err, AppError::Domain(_))); + creds.set_store_fails(false); + + let stored = repo.find_by_id(&id).unwrap().unwrap(); + assert_eq!( + stored.password(), + Some(key.as_str()), + "marker rolled back to previous keyring pointer" + ); + assert_eq!( + creds.get(&key).unwrap().expect("survives").password(), + "original", + "failed rotation must not erase the prior valid secret" + ); + } + + #[tokio::test] + async fn test_set_package_password_keyring_failure_rolls_back_marker_on_clear() { + let repo = Arc::new(InMemoryPackageRepo::new()); + let creds = Arc::new(InMemoryCredentialStore::new()); + let dl_repo = Arc::new(InMemoryDownloadRepo::new()); + let events = Arc::new(CapturingEventBus::new()); + let bus = build_package_bus(repo.clone(), creds.clone(), events, dl_repo); + let id = seed(&bus).await; + bus.handle_set_package_password(SetPackagePasswordCommand { + id: id.clone(), + password: Some("seed".into()), + }) + .await + .unwrap(); + let key = format!("vortex.package.{}", id.as_str()); + + creds.set_delete_fails(true); + let err = bus + .handle_set_package_password(SetPackagePasswordCommand { + id: id.clone(), + password: None, + }) + .await + .expect_err("delete fail surfaces"); + assert!(matches!(err, AppError::Domain(_))); + + // Marker preserved, secret remains in keyring — both sides + // unchanged so the next retry can converge. 
+ let stored = repo.find_by_id(&id).unwrap().unwrap(); + assert_eq!(stored.password(), Some(key.as_str())); + creds.set_delete_fails(false); + assert_eq!( + creds.get(&key).unwrap().expect("still present").password(), + "seed" + ); + } + + #[tokio::test] + async fn test_set_package_password_unknown_id_returns_not_found() { + let repo = Arc::new(InMemoryPackageRepo::new()); + let creds = Arc::new(InMemoryCredentialStore::new()); + let dl_repo = Arc::new(InMemoryDownloadRepo::new()); + let events = Arc::new(CapturingEventBus::new()); + let bus = build_package_bus(repo, creds, events, dl_repo); + + let err = bus + .handle_set_package_password(SetPackagePasswordCommand { + id: PackageId::new("ghost"), + password: Some("x".into()), + }) + .await + .expect_err("missing"); + assert!(matches!(err, AppError::NotFound(_))); + } +} diff --git a/src-tauri/src/application/commands/set_package_priority.rs b/src-tauri/src/application/commands/set_package_priority.rs new file mode 100644 index 0000000..16704d5 --- /dev/null +++ b/src-tauri/src/application/commands/set_package_priority.rs @@ -0,0 +1,258 @@ +//! Handler for [`SetPackagePriorityCommand`](super::SetPackagePriorityCommand). +//! +//! Persists the new priority on the package row, then loops through +//! every member download and updates its `priority` via the existing +//! per-download `Priority` aggregate. Each impacted download triggers +//! a [`DomainEvent::DownloadPrioritySet`] so the queue manager re- +//! evaluates scheduling. The package itself emits a single +//! [`DomainEvent::PackageUpdated`] carrier event. +//! +//! Member downloads that have disappeared (FK left dangling) are +//! skipped with a debug log — the package row is the source of truth +//! for "this priority is now N" and we don't want a stale FK to abort +//! the cascade. 
+ +use crate::application::command_bus::CommandBus; +use crate::application::error::AppError; +use crate::domain::event::DomainEvent; +use crate::domain::model::package::Package; +use crate::domain::model::queue::Priority; + +impl CommandBus { + pub async fn handle_set_package_priority( + &self, + cmd: super::SetPackagePriorityCommand, + ) -> Result<(), AppError> { + let repo = self + .package_repo() + .ok_or_else(|| AppError::Validation("package repository not configured".into()))?; + + let existing = repo + .find_by_id(&cmd.id)? + .ok_or_else(|| AppError::NotFound(format!("Package {} not found", cmd.id)))?; + + // Validate the priority via the aggregate's invariant before any + // mutation so a bad value never produces partial cascade state. + let domain_priority = Priority::new(cmd.priority)?; + + let updated = Package::reconstruct( + existing.id().clone(), + existing.name().to_string(), + existing.source_type(), + existing.folder_path().map(str::to_string), + existing.password().map(str::to_string), + existing.auto_extract(), + cmd.priority, + existing.created_at(), + )?; + repo.save(&updated)?; + + let members = repo.list_downloads(&cmd.id)?; + for download_id in members { + let download = match self.download_repo().find_by_id(download_id)? 
{ + Some(d) => d, + None => { + tracing::debug!( + package_id = %cmd.id, + download_id = download_id.0, + "skipping cascade: member download missing" + ); + continue; + } + }; + let next = download.with_priority(domain_priority); + self.download_repo().save(&next)?; + self.event_bus().publish(DomainEvent::DownloadPrioritySet { + id: download_id, + priority: cmd.priority, + }); + } + + self.event_bus() + .publish(DomainEvent::PackageUpdated { id: cmd.id.clone() }); + Ok(()) + } +} + +#[cfg(test)] +mod tests { + use std::sync::Arc; + + use super::super::{CreatePackageCommand, SetPackagePriorityCommand}; + use crate::application::commands::tests_support::{ + CapturingEventBus, InMemoryCredentialStore, InMemoryDownloadRepo, InMemoryPackageRepo, + build_package_bus, + }; + use crate::application::error::AppError; + use crate::domain::event::DomainEvent; + use crate::domain::model::download::{Download, DownloadId, Url}; + use crate::domain::model::package::{PackageId, PackageSourceType}; + use crate::domain::model::queue::Priority; + use crate::domain::ports::driven::{DownloadRepository, PackageRepository}; + + fn make_download(id: u64) -> Download { + Download::new( + DownloadId(id), + Url::new("http://example.com/f.zip").unwrap(), + format!("file-{id}.zip"), + format!("/tmp/file-{id}.zip"), + ) + } + + async fn seed_package_with_members( + bus: &crate::application::command_bus::CommandBus, + repo: &Arc<InMemoryPackageRepo>, + dl_repo: &Arc<InMemoryDownloadRepo>, + member_ids: &[u64], + ) -> PackageId { + let id = bus + .handle_create_package(CreatePackageCommand { + name: "Pkg".into(), + source_type: PackageSourceType::Manual, + folder_path: None, + created_at_ms: 0, + }) + .await + .unwrap(); + for d in member_ids { + dl_repo.seed(make_download(*d)); + repo.attach_download(&id, DownloadId(*d)).unwrap(); + } + id + } + + #[tokio::test] + async fn test_set_package_priority_updates_package_row() { + let repo = Arc::new(InMemoryPackageRepo::new()); + let creds = Arc::new(InMemoryCredentialStore::new()); + let
dl_repo = Arc::new(InMemoryDownloadRepo::new()); + let events = Arc::new(CapturingEventBus::new()); + let bus = build_package_bus(repo.clone(), creds, events.clone(), dl_repo.clone()); + let id = seed_package_with_members(&bus, &repo, &dl_repo, &[]).await; + + bus.handle_set_package_priority(SetPackagePriorityCommand { + id: id.clone(), + priority: 8, + }) + .await + .expect("set priority"); + + let stored = repo.find_by_id(&id).unwrap().unwrap(); + assert_eq!(stored.priority(), 8); + assert!(events.snapshot().iter().any(|e| matches!( + e, + DomainEvent::PackageUpdated { id: x } if x == &id + ))); + } + + #[tokio::test] + async fn test_set_package_priority_propagates_to_each_member_download() { + let repo = Arc::new(InMemoryPackageRepo::new()); + let creds = Arc::new(InMemoryCredentialStore::new()); + let dl_repo = Arc::new(InMemoryDownloadRepo::new()); + let events = Arc::new(CapturingEventBus::new()); + let bus = build_package_bus(repo.clone(), creds, events.clone(), dl_repo.clone()); + let id = seed_package_with_members(&bus, &repo, &dl_repo, &[1, 2, 3]).await; + + bus.handle_set_package_priority(SetPackagePriorityCommand { + id: id.clone(), + priority: 9, + }) + .await + .expect("set"); + + // Each member persisted with the new priority. + for i in [1u64, 2, 3] { + let dl = dl_repo.find_by_id(DownloadId(i)).unwrap().unwrap(); + assert_eq!(dl.priority(), &Priority::new(9).unwrap()); + } + + // One DownloadPrioritySet event per member. + let snap = events.snapshot(); + let events_count = snap + .iter() + .filter(|e| matches!(e, DomainEvent::DownloadPrioritySet { priority: 9, .. 
})) + .count(); + assert_eq!( + events_count, 3, + "expected one DownloadPrioritySet per member download" + ); + } + + #[tokio::test] + async fn test_set_package_priority_invalid_priority_does_not_mutate_package() { + let repo = Arc::new(InMemoryPackageRepo::new()); + let creds = Arc::new(InMemoryCredentialStore::new()); + let dl_repo = Arc::new(InMemoryDownloadRepo::new()); + let events = Arc::new(CapturingEventBus::new()); + let bus = build_package_bus(repo.clone(), creds, events, dl_repo.clone()); + let id = seed_package_with_members(&bus, &repo, &dl_repo, &[]).await; + + let err = bus + .handle_set_package_priority(SetPackagePriorityCommand { + id: id.clone(), + priority: 0, + }) + .await + .expect_err("0 rejected"); + assert!(matches!(err, AppError::Domain(_))); + let stored = repo.find_by_id(&id).unwrap().unwrap(); + assert_eq!(stored.priority(), 5, "package row untouched on validation"); + } + + #[tokio::test] + async fn test_set_package_priority_unknown_id_returns_not_found() { + let repo = Arc::new(InMemoryPackageRepo::new()); + let creds = Arc::new(InMemoryCredentialStore::new()); + let dl_repo = Arc::new(InMemoryDownloadRepo::new()); + let events = Arc::new(CapturingEventBus::new()); + let bus = build_package_bus(repo, creds, events, dl_repo); + + let err = bus + .handle_set_package_priority(SetPackagePriorityCommand { + id: PackageId::new("ghost"), + priority: 5, + }) + .await + .expect_err("missing"); + assert!(matches!(err, AppError::NotFound(_))); + } + + #[tokio::test] + async fn test_set_package_priority_skips_dangling_member_silently() { + // FK can be left dangling if a download was hard-deleted before + // its package detach ran. The cascade must still update every + // *existing* member and never abort. 
+ let repo = Arc::new(InMemoryPackageRepo::new()); + let creds = Arc::new(InMemoryCredentialStore::new()); + let dl_repo = Arc::new(InMemoryDownloadRepo::new()); + let events = Arc::new(CapturingEventBus::new()); + let bus = build_package_bus(repo.clone(), creds, events.clone(), dl_repo.clone()); + let id = bus + .handle_create_package(CreatePackageCommand { + name: "Pkg".into(), + source_type: PackageSourceType::Manual, + folder_path: None, + created_at_ms: 0, + }) + .await + .unwrap(); + // Member 7 exists, 999 doesn't. Both attached. + dl_repo.seed(make_download(7)); + repo.attach_download(&id, DownloadId(7)).unwrap(); + repo.attach_download(&id, DownloadId(999)).unwrap(); + + bus.handle_set_package_priority(SetPackagePriorityCommand { id, priority: 6 }) + .await + .expect("cascade tolerates dangling member"); + + let dl = dl_repo.find_by_id(DownloadId(7)).unwrap().unwrap(); + assert_eq!(dl.priority(), &Priority::new(6).unwrap()); + let cascade_count = events + .snapshot() + .iter() + .filter(|e| matches!(e, DomainEvent::DownloadPrioritySet { .. 
})) + .count(); + assert_eq!(cascade_count, 1, "only the existing member emits"); + } +} diff --git a/src-tauri/src/application/commands/tests_support.rs b/src-tauri/src/application/commands/tests_support.rs index 38d96f8..6b1427a 100644 --- a/src-tauri/src/application/commands/tests_support.rs +++ b/src-tauri/src/application/commands/tests_support.rs @@ -6,6 +6,7 @@ use std::collections::HashMap; use std::path::Path; +use std::sync::atomic::{AtomicBool, Ordering}; use std::sync::{Arc, Mutex}; use crate::application::command_bus::CommandBus; @@ -19,11 +20,12 @@ use crate::domain::model::credential::Credential; use crate::domain::model::download::{Download, DownloadId, DownloadState}; use crate::domain::model::http::HttpResponse; use crate::domain::model::meta::DownloadMeta; +use crate::domain::model::package::{Package, PackageId}; use crate::domain::model::plugin::{PluginInfo, PluginManifest}; use crate::domain::ports::driven::{ AccountCredentialStore, AccountRepository, AccountValidator, ArchiveExtractor, ClipboardObserver, ConfigStore, CredentialStore, DownloadEngine, DownloadRepository, EventBus, - FileStorage, HttpClient, PassphraseCodec, PluginLoader, ValidationOutcome, + FileStorage, HttpClient, PackageRepository, PassphraseCodec, PluginLoader, ValidationOutcome, }; // ── In-memory account repository ───────────────────────────────────── @@ -149,18 +151,6 @@ impl FakeAccountCredentialStore { pub(crate) fn entry_count(&self) -> usize { self.entries.lock().unwrap().len() } - - pub(crate) fn snapshot(&self) -> Vec<(AccountId, String)> { - let mut entries: Vec<(AccountId, String)> = self - .entries - .lock() - .unwrap() - .iter() - .map(|(k, v)| (k.clone(), v.clone())) - .collect(); - entries.sort_by(|a, b| a.0.as_str().cmp(b.0.as_str())); - entries - } } impl AccountCredentialStore for FakeAccountCredentialStore { @@ -304,6 +294,124 @@ impl PassphraseCodec for FakePassphraseCodec { } } +// ── In-memory package repository ───────────────────────────────────── + 
+pub(crate) struct InMemoryPackageRepo { + store: Mutex<HashMap<PackageId, Package>>, + members: Mutex<HashMap<PackageId, Vec<DownloadId>>>, +} + +impl InMemoryPackageRepo { + pub(crate) fn new() -> Self { + Self { + store: Mutex::new(HashMap::new()), + members: Mutex::new(HashMap::new()), + } + } + + pub(crate) fn snapshot(&self) -> Vec<Package> { + let mut packages: Vec<Package> = self.store.lock().unwrap().values().cloned().collect(); + packages.sort_by(|a, b| a.id().as_str().cmp(b.id().as_str())); + packages + } +} + +impl PackageRepository for InMemoryPackageRepo { + fn find_by_id(&self, id: &PackageId) -> Result<Option<Package>, DomainError> { + Ok(self.store.lock().unwrap().get(id).cloned()) + } + + fn save(&self, package: &Package) -> Result<(), DomainError> { + let mut guard = self.store.lock().unwrap(); + let created_at = match guard.get(package.id()) { + Some(existing) => existing.created_at(), + None => package.created_at(), + }; + let stored = Package::reconstruct( + package.id().clone(), + package.name().to_string(), + package.source_type(), + package.folder_path().map(str::to_string), + package.password().map(str::to_string), + package.auto_extract(), + package.priority(), + created_at, + )?; + guard.insert(package.id().clone(), stored); + Ok(()) + } + + fn list(&self) -> Result<Vec<Package>, DomainError> { + Ok(self.snapshot()) + } + + fn delete(&self, id: &PackageId) -> Result<(), DomainError> { + self.store.lock().unwrap().remove(id); + self.members.lock().unwrap().remove(id); + Ok(()) + } + + fn list_downloads(&self, id: &PackageId) -> Result<Vec<DownloadId>, DomainError> { + Ok(self + .members + .lock() + .unwrap() + .get(id) + .cloned() + .unwrap_or_default()) + } + + fn attach_download( + &self, + package_id: &PackageId, + download_id: DownloadId, + ) -> Result<(), DomainError> { + let mut guard = self.members.lock().unwrap(); + // Same-package reattach must be a true no-op so the mock matches + // the SQLite adapter (which never moves rows on `UPDATE ... WHERE + // package_id = same`).
Detach from foreign packages first, then + // bail if the download already lives in the target bucket so its + // position is preserved. + let already_in_target = guard + .get(package_id) + .is_some_and(|entries| entries.contains(&download_id)); + for (pkg, entries) in guard.iter_mut() { + if pkg != package_id { + entries.retain(|d| d != &download_id); + } + } + if already_in_target { + return Ok(()); + } + guard + .entry(package_id.clone()) + .or_default() + .push(download_id); + Ok(()) + } + + fn detach_download(&self, download_id: DownloadId) -> Result<(), DomainError> { + let mut guard = self.members.lock().unwrap(); + for entries in guard.values_mut() { + entries.retain(|d| d != &download_id); + } + Ok(()) + } + + fn find_package_of_download( + &self, + download_id: DownloadId, + ) -> Result<Option<PackageId>, DomainError> { + let guard = self.members.lock().unwrap(); + for (pkg, members) in guard.iter() { + if members.iter().any(|d| d == &download_id) { + return Ok(Some(pkg.clone())); + } + } + Ok(None) + } +} + // ── Capturing event bus ────────────────────────────────────────────── pub(crate) struct CapturingEventBus { @@ -381,6 +489,19 @@ impl FileStorage for StubFileStorage { fn delete_meta(&self, _path: &Path) -> Result<(), DomainError> { Ok(()) } + // Stubbed to no-ops: package-handler tests exercise the move logic + // through `move_package_to_folder` and don't run a real filesystem. + // The default impls return errors which would mask the move-package + // tests' real assertions, so we override here.
+ fn file_exists(&self, _path: &Path) -> Result { + Ok(false) + } + fn move_file(&self, _from: &Path, _to: &Path) -> Result<(), DomainError> { + Ok(()) + } + fn move_meta(&self, _from: &Path, _to: &Path) -> Result<(), DomainError> { + Ok(()) + } } struct StubHttpClient; @@ -497,6 +618,113 @@ impl ArchiveExtractor for StubArchiveExtractor { } } +// ── Seedable in-memory download repo (for package handler tests) ───── + +pub(crate) struct InMemoryDownloadRepo { + store: Mutex>, +} + +impl InMemoryDownloadRepo { + pub(crate) fn new() -> Self { + Self { + store: Mutex::new(HashMap::new()), + } + } + + pub(crate) fn seed(&self, download: Download) { + self.store.lock().unwrap().insert(download.id(), download); + } +} + +impl DownloadRepository for InMemoryDownloadRepo { + fn find_by_id(&self, id: DownloadId) -> Result, DomainError> { + Ok(self.store.lock().unwrap().get(&id).cloned()) + } + + fn save(&self, d: &Download) -> Result<(), DomainError> { + self.store.lock().unwrap().insert(d.id(), d.clone()); + Ok(()) + } + + fn delete(&self, id: DownloadId) -> Result<(), DomainError> { + self.store.lock().unwrap().remove(&id); + Ok(()) + } + + fn find_by_state(&self, state: DownloadState) -> Result, DomainError> { + Ok(self + .store + .lock() + .unwrap() + .values() + .filter(|d| d.state() == state) + .cloned() + .collect()) + } +} + +// ── In-memory credential store ─────────────────────────────────────── + +pub(crate) struct InMemoryCredentialStore { + entries: Mutex>, + fail_store: AtomicBool, + fail_delete: AtomicBool, +} + +impl InMemoryCredentialStore { + pub(crate) fn new() -> Self { + Self { + entries: Mutex::new(HashMap::new()), + fail_store: AtomicBool::new(false), + fail_delete: AtomicBool::new(false), + } + } + + pub(crate) fn entry_count(&self) -> usize { + self.entries.lock().unwrap().len() + } + + /// Force every subsequent `store` call to return an error. Used by + /// rollback tests; resets when toggled back to `false`. 
+    pub(crate) fn set_store_fails(&self, on: bool) {
+        self.fail_store.store(on, Ordering::SeqCst);
+    }
+
+    /// Force every subsequent `delete` call to return an error.
+    pub(crate) fn set_delete_fails(&self, on: bool) {
+        self.fail_delete.store(on, Ordering::SeqCst);
+    }
+}
+
+impl CredentialStore for InMemoryCredentialStore {
+    fn get(&self, service: &str) -> Result<Option<Credential>, DomainError> {
+        Ok(self.entries.lock().unwrap().get(service).cloned())
+    }
+
+    fn store(&self, service: &str, credential: &Credential) -> Result<(), DomainError> {
+        if self.fail_store.load(Ordering::SeqCst) {
+            return Err(DomainError::ValidationError(
+                "credential store: simulated failure".into(),
+            ));
+        }
+        self.entries
+            .lock()
+            .unwrap()
+            .insert(service.to_string(), credential.clone());
+        Ok(())
+    }
+
+    fn delete(&self, service: &str) -> Result<(), DomainError> {
+        if self.fail_delete.load(Ordering::SeqCst) {
+            return Err(DomainError::ValidationError(
+                "credential store: simulated failure".into(),
+            ));
+        }
+        self.entries.lock().unwrap().remove(service);
+        Ok(())
+    }
+}
+
 /// Build a [`CommandBus`] wired with the supplied account ports plus
 /// stubs for everything else.
 pub(crate) fn build_account_bus(
@@ -532,6 +760,32 @@ pub(crate) fn build_account_bus(
     bus
 }
 
+/// Build a [`CommandBus`] wired with the package ports needed by the
+/// package-command handlers. The download write repo is supplied so
+/// `set_priority` and `move_to_folder` can read/save member downloads.
+pub(crate) fn build_package_bus(
+    package_repo: Arc<dyn PackageRepository>,
+    credential_store: Arc<dyn CredentialStore>,
+    event_bus: Arc<CapturingEventBus>,
+    download_repo: Arc<dyn DownloadRepository>,
+) -> CommandBus {
+    CommandBus::new(
+        download_repo,
+        Arc::new(StubDownloadEngine),
+        event_bus,
+        Arc::new(StubFileStorage),
+        Arc::new(StubHttpClient),
+        Arc::new(StubPluginLoader),
+        Arc::new(StubConfigStore),
+        credential_store,
+        Arc::new(StubClipboardObserver),
+        Arc::new(StubArchiveExtractor),
+        Arc::new(NoopHistoryRepo),
+        None,
+    )
+    .with_package_repo(package_repo)
+}
+
 /// Build a bus with no account ports — used to assert handlers refuse
 /// to run when their dependencies are missing.
 pub(crate) fn bus_without_account_ports(event_bus: Arc<CapturingEventBus>) -> CommandBus {
diff --git a/src-tauri/src/application/commands/toggle_package_auto_extract.rs b/src-tauri/src/application/commands/toggle_package_auto_extract.rs
new file mode 100644
index 0000000..7f0775a
--- /dev/null
+++ b/src-tauri/src/application/commands/toggle_package_auto_extract.rs
@@ -0,0 +1,103 @@
+//! Handler for [`TogglePackageAutoExtractCommand`](super::TogglePackageAutoExtractCommand).
+//!
+//! Flips the `auto_extract` flag on the package row. Convenience over
+//! [`UpdatePackageCommand`] for a one-shot UI toggle (kebab menu) so
+//! callers don't have to read-modify-write the current value.
+
+use crate::application::command_bus::CommandBus;
+use crate::application::error::AppError;
+use crate::domain::event::DomainEvent;
+use crate::domain::model::package::Package;
+
+impl CommandBus {
+    pub async fn handle_toggle_package_auto_extract(
+        &self,
+        cmd: super::TogglePackageAutoExtractCommand,
+    ) -> Result<bool, AppError> {
+        let repo = self
+            .package_repo()
+            .ok_or_else(|| AppError::Validation("package repository not configured".into()))?;
+        let existing = repo
+            .find_by_id(&cmd.id)?
+            .ok_or_else(|| AppError::NotFound(format!("Package {} not found", cmd.id)))?;
+        let next_value = !existing.auto_extract();
+        let updated = Package::reconstruct(
+            existing.id().clone(),
+            existing.name().to_string(),
+            existing.source_type(),
+            existing.folder_path().map(str::to_string),
+            existing.password().map(str::to_string),
+            next_value,
+            existing.priority(),
+            existing.created_at(),
+        )?;
+        repo.save(&updated)?;
+        self.event_bus()
+            .publish(DomainEvent::PackageUpdated { id: cmd.id.clone() });
+        Ok(next_value)
+    }
+}
+
+#[cfg(test)]
+mod tests {
+    use std::sync::Arc;
+
+    use super::super::{CreatePackageCommand, TogglePackageAutoExtractCommand};
+    use crate::application::commands::tests_support::{
+        CapturingEventBus, InMemoryCredentialStore, InMemoryDownloadRepo, InMemoryPackageRepo,
+        build_package_bus,
+    };
+    use crate::application::error::AppError;
+    use crate::domain::model::package::{PackageId, PackageSourceType};
+    use crate::domain::ports::driven::PackageRepository;
+
+    #[tokio::test]
+    async fn test_toggle_auto_extract_flips_value_and_returns_new_state() {
+        let repo = Arc::new(InMemoryPackageRepo::new());
+        let creds = Arc::new(InMemoryCredentialStore::new());
+        let dl_repo = Arc::new(InMemoryDownloadRepo::new());
+        let events = Arc::new(CapturingEventBus::new());
+        let bus = build_package_bus(repo.clone(), creds, events, dl_repo);
+        let id = bus
+            .handle_create_package(CreatePackageCommand {
+                name: "P".into(),
+                source_type: PackageSourceType::Manual,
+                folder_path: None,
+                created_at_ms: 0,
+            })
+            .await
+            .unwrap();
+
+        // Default is true (Package::new sets auto_extract = true) → first
+        // toggle returns false, second returns true.
+        let after_first = bus
+            .handle_toggle_package_auto_extract(TogglePackageAutoExtractCommand { id: id.clone() })
+            .await
+            .unwrap();
+        assert!(!after_first);
+        assert!(!repo.find_by_id(&id).unwrap().unwrap().auto_extract());
+
+        let after_second = bus
+            .handle_toggle_package_auto_extract(TogglePackageAutoExtractCommand { id: id.clone() })
+            .await
+            .unwrap();
+        assert!(after_second);
+        assert!(repo.find_by_id(&id).unwrap().unwrap().auto_extract());
+    }
+
+    #[tokio::test]
+    async fn test_toggle_auto_extract_unknown_id_returns_not_found() {
+        let repo = Arc::new(InMemoryPackageRepo::new());
+        let creds = Arc::new(InMemoryCredentialStore::new());
+        let dl_repo = Arc::new(InMemoryDownloadRepo::new());
+        let events = Arc::new(CapturingEventBus::new());
+        let bus = build_package_bus(repo, creds, events, dl_repo);
+        let err = bus
+            .handle_toggle_package_auto_extract(TogglePackageAutoExtractCommand {
+                id: PackageId::new("ghost"),
+            })
+            .await
+            .expect_err("missing");
+        assert!(matches!(err, AppError::NotFound(_)));
+    }
+}
diff --git a/src-tauri/src/application/commands/update_package.rs b/src-tauri/src/application/commands/update_package.rs
new file mode 100644
index 0000000..2bfa849
--- /dev/null
+++ b/src-tauri/src/application/commands/update_package.rs
@@ -0,0 +1,262 @@
+//! Handler for [`UpdatePackageCommand`](super::UpdatePackageCommand).
+//!
+//! Applies a [`PackagePatch`](super::PackagePatch) to an existing
+//! package and persists the result. Optional fields left as `None`
+//! keep the persisted value untouched.
+//!
+//! Priority changes via this handler do NOT cascade to member
+//! downloads — that is `set_package_priority`'s job. Use this command
+//! for plain rename / folder / auto-extract / package-priority edits.
+
+use crate::application::command_bus::CommandBus;
+use crate::application::error::AppError;
+use crate::domain::event::DomainEvent;
+use crate::domain::model::package::Package;
+
+impl CommandBus {
+    pub async fn handle_update_package(
+        &self,
+        cmd: super::UpdatePackageCommand,
+    ) -> Result<(), AppError> {
+        let repo = self
+            .package_repo()
+            .ok_or_else(|| AppError::Validation("package repository not configured".into()))?;
+
+        let existing = repo
+            .find_by_id(&cmd.id)?
+            .ok_or_else(|| AppError::NotFound(format!("Package {} not found", cmd.id)))?;
+
+        let mut updated = clone_for_update(&existing);
+
+        if let Some(name) = cmd.patch.name {
+            let trimmed = name.trim();
+            if trimmed.is_empty() {
+                return Err(AppError::Validation(
+                    "package name must not be empty".into(),
+                ));
+            }
+            updated = Package::reconstruct(
+                updated.id().clone(),
+                trimmed.to_string(),
+                updated.source_type(),
+                updated.folder_path().map(str::to_string),
+                updated.password().map(str::to_string),
+                updated.auto_extract(),
+                updated.priority(),
+                updated.created_at(),
+            )?;
+        }
+        if let Some(folder) = cmd.patch.folder_path {
+            let normalised = folder
+                .map(|s| s.trim().to_string())
+                .filter(|s| !s.is_empty());
+            updated.set_folder_path(normalised);
+        }
+        if let Some(priority) = cmd.patch.priority {
+            updated.set_priority(priority)?;
+        }
+        if let Some(auto_extract) = cmd.patch.auto_extract {
+            updated.set_auto_extract(auto_extract);
+        }
+
+        repo.save(&updated)?;
+        self.event_bus().publish(DomainEvent::PackageUpdated {
+            id: updated.id().clone(),
+        });
+        Ok(())
+    }
+}
+
+fn clone_for_update(existing: &Package) -> Package {
+    Package::reconstruct(
+        existing.id().clone(),
+        existing.name().to_string(),
+        existing.source_type(),
+        existing.folder_path().map(str::to_string),
+        existing.password().map(str::to_string),
+        existing.auto_extract(),
+        existing.priority(),
+        existing.created_at(),
+    )
+    .expect("existing package always validates")
+}
+
+#[cfg(test)]
+mod tests {
+    use std::sync::Arc;
+
+    use super::super::{CreatePackageCommand, PackagePatch, UpdatePackageCommand};
+    use crate::application::commands::tests_support::{
+        CapturingEventBus, InMemoryCredentialStore, InMemoryDownloadRepo, InMemoryPackageRepo,
+        build_package_bus,
+    };
+    use crate::application::error::AppError;
+    use crate::domain::event::DomainEvent;
+    use crate::domain::model::package::{PackageId, PackageSourceType};
+    use crate::domain::ports::driven::PackageRepository;
+
+    async fn seed_package(bus: &crate::application::command_bus::CommandBus) -> PackageId {
+        bus.handle_create_package(CreatePackageCommand {
+            name: "Initial".into(),
+            source_type: PackageSourceType::Manual,
+            folder_path: None,
+            created_at_ms: 1,
+        })
+        .await
+        .expect("seed package")
+    }
+
+    #[tokio::test]
+    async fn test_update_package_renames_and_emits_event() {
+        let repo = Arc::new(InMemoryPackageRepo::new());
+        let creds = Arc::new(InMemoryCredentialStore::new());
+        let dl_repo = Arc::new(InMemoryDownloadRepo::new());
+        let events = Arc::new(CapturingEventBus::new());
+        let bus = build_package_bus(repo.clone(), creds, events.clone(), dl_repo);
+        let id = seed_package(&bus).await;
+
+        bus.handle_update_package(UpdatePackageCommand {
+            id: id.clone(),
+            patch: PackagePatch {
+                name: Some("Renamed".into()),
+                ..Default::default()
+            },
+        })
+        .await
+        .expect("update");
+
+        let stored = repo.find_by_id(&id).unwrap().unwrap();
+        assert_eq!(stored.name(), "Renamed");
+        assert!(events.snapshot().iter().any(|e| matches!(
+            e,
+            DomainEvent::PackageUpdated { id: x } if x == &id
+        )));
+    }
+
+    #[tokio::test]
+    async fn test_update_package_changes_folder_priority_and_auto_extract() {
+        let repo = Arc::new(InMemoryPackageRepo::new());
+        let creds = Arc::new(InMemoryCredentialStore::new());
+        let dl_repo = Arc::new(InMemoryDownloadRepo::new());
+        let events = Arc::new(CapturingEventBus::new());
+        let bus = build_package_bus(repo.clone(), creds, events, dl_repo);
+        let id = seed_package(&bus).await;
+
+        bus.handle_update_package(UpdatePackageCommand {
+            id: id.clone(),
+            patch: PackagePatch {
+                name: None,
+                folder_path: Some(Some("/srv/packs".into())),
+                priority: Some(9),
+                auto_extract: Some(false),
+            },
+        })
+        .await
+        .expect("update");
+
+        let stored = repo.find_by_id(&id).unwrap().unwrap();
+        assert_eq!(stored.folder_path(), Some("/srv/packs"));
+        assert_eq!(stored.priority(), 9);
+        assert!(!stored.auto_extract());
+    }
+
+    #[tokio::test]
+    async fn test_update_package_clears_folder_when_some_none() {
+        let repo = Arc::new(InMemoryPackageRepo::new());
+        let creds = Arc::new(InMemoryCredentialStore::new());
+        let dl_repo = Arc::new(InMemoryDownloadRepo::new());
+        let events = Arc::new(CapturingEventBus::new());
+        let bus = build_package_bus(repo.clone(), creds, events, dl_repo);
+        let id = seed_package(&bus).await;
+        bus.handle_update_package(UpdatePackageCommand {
+            id: id.clone(),
+            patch: PackagePatch {
+                folder_path: Some(Some("/x".into())),
+                ..Default::default()
+            },
+        })
+        .await
+        .unwrap();
+
+        bus.handle_update_package(UpdatePackageCommand {
+            id: id.clone(),
+            patch: PackagePatch {
+                folder_path: Some(None),
+                ..Default::default()
+            },
+        })
+        .await
+        .unwrap();
+
+        let stored = repo.find_by_id(&id).unwrap().unwrap();
+        assert!(stored.folder_path().is_none());
+    }
+
+    #[tokio::test]
+    async fn test_update_package_blank_name_rejected() {
+        let repo = Arc::new(InMemoryPackageRepo::new());
+        let creds = Arc::new(InMemoryCredentialStore::new());
+        let dl_repo = Arc::new(InMemoryDownloadRepo::new());
+        let events = Arc::new(CapturingEventBus::new());
+        let bus = build_package_bus(repo.clone(), creds, events, dl_repo);
+        let id = seed_package(&bus).await;
+
+        let err = bus
+            .handle_update_package(UpdatePackageCommand {
+                id: id.clone(),
+                patch: PackagePatch {
+                    name: Some(" ".into()),
+                    ..Default::default()
+                },
+            })
+            .await
+            .expect_err("blank rejected");
+        assert!(matches!(err, AppError::Validation(_)));
+        let stored = repo.find_by_id(&id).unwrap().unwrap();
+        assert_eq!(stored.name(), "Initial");
+    }
+
+    #[tokio::test]
+    async fn test_update_package_invalid_priority_rejected() {
+        let repo = Arc::new(InMemoryPackageRepo::new());
+        let creds = Arc::new(InMemoryCredentialStore::new());
+        let dl_repo = Arc::new(InMemoryDownloadRepo::new());
+        let events = Arc::new(CapturingEventBus::new());
+        let bus = build_package_bus(repo.clone(), creds, events, dl_repo);
+        let id = seed_package(&bus).await;
+
+        let err = bus
+            .handle_update_package(UpdatePackageCommand {
+                id: id.clone(),
+                patch: PackagePatch {
+                    priority: Some(0),
+                    ..Default::default()
+                },
+            })
+            .await
+            .expect_err("0 rejected");
+        assert!(matches!(err, AppError::Domain(_)));
+        let stored = repo.find_by_id(&id).unwrap().unwrap();
+        assert_eq!(stored.priority(), 5);
+    }
+
+    #[tokio::test]
+    async fn test_update_package_unknown_id_returns_not_found() {
+        let repo = Arc::new(InMemoryPackageRepo::new());
+        let creds = Arc::new(InMemoryCredentialStore::new());
+        let dl_repo = Arc::new(InMemoryDownloadRepo::new());
+        let events = Arc::new(CapturingEventBus::new());
+        let bus = build_package_bus(repo, creds, events, dl_repo);
+        let err = bus
+            .handle_update_package(UpdatePackageCommand {
+                id: PackageId::new("ghost"),
+                patch: PackagePatch {
+                    name: Some("X".into()),
+                    ..Default::default()
+                },
+            })
+            .await
+            .expect_err("missing");
+        assert!(matches!(err, AppError::NotFound(_)));
+    }
+}
diff --git a/src-tauri/src/domain/event.rs b/src-tauri/src/domain/event.rs
index 61c8382..a81b5d6 100644
--- a/src-tauri/src/domain/event.rs
+++ b/src-tauri/src/domain/event.rs
@@ -206,6 +206,22 @@ pub enum DomainEvent {
         id: PackageId,
         name: String,
     },
+    /// Emitted whenever a package's metadata (name / folder / priority /
+    /// auto-extract / password / membership) changes. Fine-grained per-
+    /// child events (e.g. `DownloadPrioritySet`, `DownloadDirectoryChanged`)
+    /// are still emitted alongside this carrier so the queue manager
+    /// re-schedules normally.
+    PackageUpdated {
+        id: PackageId,
+    },
+    /// Emitted after a `DeletePackageCommand` removed the package row.
+    /// `delete_downloads` mirrors the command flag so subscribers can
+    /// distinguish "package detached, downloads kept" from "everything
+    /// gone" without re-reading the repo.
+    PackageDeleted {
+        id: PackageId,
+        delete_downloads: bool,
+    },
 
     // Clipboard
     ClipboardUrlDetected {
@@ -392,6 +408,32 @@ mod tests {
         );
     }
 
+    #[test]
+    fn test_package_updated_event_carries_id() {
+        let event = DomainEvent::PackageUpdated {
+            id: PackageId::new("pkg-up"),
+        };
+        let s = format!("{event:?}");
+        assert!(s.contains("PackageUpdated"));
+        assert!(s.contains("pkg-up"));
+    }
+
+    #[test]
+    fn test_package_deleted_event_carries_id_and_cascade_flag() {
+        let cascade = DomainEvent::PackageDeleted {
+            id: PackageId::new("pkg-del"),
+            delete_downloads: true,
+        };
+        let detach = DomainEvent::PackageDeleted {
+            id: PackageId::new("pkg-del"),
+            delete_downloads: false,
+        };
+        assert_ne!(cascade, detach);
+        let s = format!("{cascade:?}");
+        assert!(s.contains("PackageDeleted"));
+        assert!(s.contains("pkg-del"));
+    }
+
     #[test]
     fn test_clipboard_url_detected_event() {
         let event = DomainEvent::ClipboardUrlDetected {
diff --git a/src-tauri/src/domain/ports/driven/package_repository.rs b/src-tauri/src/domain/ports/driven/package_repository.rs
index 4902562..4b94bad 100644
--- a/src-tauri/src/domain/ports/driven/package_repository.rs
+++ b/src-tauri/src/domain/ports/driven/package_repository.rs
@@ -35,4 +35,30 @@ pub trait PackageRepository: Send + Sync {
     /// surface them in scheduling order. Returns an empty vector when
     /// no download references the package.
     fn list_downloads(&self, id: &PackageId) -> Result<Vec<DownloadId>, DomainError>;
+
+    /// Set `downloads.package_id = package_id` for the given download.
+    /// Idempotent — re-attaching a download already in the package is a
+    /// no-op. Implementations must return a [`DomainError::NotFound`]
+    /// when the download row does not exist so handlers can surface a
+    /// clean validation error to the IPC layer.
+    fn attach_download(
+        &self,
+        package_id: &PackageId,
+        download_id: DownloadId,
+    ) -> Result<(), DomainError>;
+
+    /// Set `downloads.package_id = NULL` for the given download. Idempotent
+    /// — succeeds silently when the row is missing or already detached.
+    fn detach_download(&self, download_id: DownloadId) -> Result<(), DomainError>;
+
+    /// Return the package id currently owning the given download (FK
+    /// `downloads.package_id`). Returns `Ok(None)` when the download is
+    /// loose, when its row is missing, or when the download row predates
+    /// the package_id column — callers must treat all three as "no
+    /// owning package" so membership checks stay decoupled from row
+    /// existence checks.
+    fn find_package_of_download(
+        &self,
+        download_id: DownloadId,
+    ) -> Result<Option<PackageId>, DomainError>;
 }
diff --git a/src-tauri/src/domain/ports/driven/tests.rs b/src-tauri/src/domain/ports/driven/tests.rs
index ab3613c..1322f82 100644
--- a/src-tauri/src/domain/ports/driven/tests.rs
+++ b/src-tauri/src/domain/ports/driven/tests.rs
@@ -746,7 +746,7 @@ impl InMemoryPackageRepository {
         }
     }
 
-    fn attach_download(&self, package_id: &PackageId, queue_position: i64, download: DownloadId) {
+    fn seed_member(&self, package_id: &PackageId, queue_position: i64, download: DownloadId) {
         self.members
             .lock()
             .unwrap()
@@ -810,6 +810,61 @@ impl PackageRepository for InMemoryPackageRepository {
         members.sort_by(|(qa, da), (qb, db)| qa.cmp(qb).then_with(|| da.0.cmp(&db.0)));
         Ok(members.into_iter().map(|(_, id)| id).collect())
     }
+
+    fn attach_download(
+        &self,
+        package_id: &PackageId,
+        download_id: DownloadId,
+    ) -> Result<(), DomainError> {
+        let mut guard = self.members.lock().unwrap();
+        // Same-package reattach must be a true no-op so the mock mirrors
+        // the FK-singleton semantics of the SQL adapter (which never
+        // rewrites `queue_position` on `UPDATE ... WHERE package_id =
+        // same`). Detach from foreign packages first, then bail if the
+        // download is already in the target bucket so its existing
+        // position survives.
+        let already_in_target = guard
+            .get(package_id)
+            .is_some_and(|entries| entries.iter().any(|(_, id)| id == &download_id));
+        for (pkg, entries) in guard.iter_mut() {
+            if pkg != package_id {
+                entries.retain(|(_, id)| id != &download_id);
+            }
+        }
+        if already_in_target {
+            return Ok(());
+        }
+        let bucket = guard.entry(package_id.clone()).or_default();
+        let next_position = bucket
+            .iter()
+            .map(|(p, _)| *p)
+            .max()
+            .map(|m| m + 1)
+            .unwrap_or(0);
+        bucket.push((next_position, download_id));
+        Ok(())
+    }
+
+    fn detach_download(&self, download_id: DownloadId) -> Result<(), DomainError> {
+        let mut guard = self.members.lock().unwrap();
+        for entries in guard.values_mut() {
+            entries.retain(|(_, id)| id != &download_id);
+        }
+        Ok(())
+    }
+
+    fn find_package_of_download(
+        &self,
+        download_id: DownloadId,
+    ) -> Result<Option<PackageId>, DomainError> {
+        let guard = self.members.lock().unwrap();
+        for (pkg, entries) in guard.iter() {
+            if entries.iter().any(|(_, id)| id == &download_id) {
+                return Ok(Some(pkg.clone()));
+            }
+        }
+        Ok(None)
+    }
 }
 
 #[test]
@@ -1036,7 +1091,7 @@ fn in_memory_package_repository_delete_drops_member_attachments() {
         0,
     );
     repo.save(&pkg).unwrap();
-    repo.attach_download(&PackageId::new("pkg-del"), 0, DownloadId(1));
+    repo.seed_member(&PackageId::new("pkg-del"), 0, DownloadId(1));
     assert_eq!(
         repo.list_downloads(&PackageId::new("pkg-del"))
             .unwrap()
@@ -1068,8 +1123,8 @@ fn in_memory_package_repository_list_downloads_returns_attached_ids() {
         0,
     ))
     .unwrap();
-    repo.attach_download(&pkg_id, 0, DownloadId(7));
-    repo.attach_download(&pkg_id, 1, DownloadId(11));
+    repo.seed_member(&pkg_id, 0, DownloadId(7));
+    repo.seed_member(&pkg_id, 1, DownloadId(11));
 
     let members = repo.list_downloads(&pkg_id).unwrap();
     assert_eq!(members, vec![DownloadId(7), DownloadId(11)]);
@@ -1081,6 +1136,115 @@
     );
 }
 
+#[test]
+fn in_memory_package_repository_attach_download_via_trait_adds_member() {
+    let repo = InMemoryPackageRepository::new();
+    let pkg_id = PackageId::new("pkg-att");
+    repo.save(&Package::new(
+        pkg_id.clone(),
+        "Att".to_string(),
+        PackageSourceType::Manual,
+        0,
+    ))
+    .unwrap();
+    repo.attach_download(&pkg_id, DownloadId(1)).unwrap();
+    repo.attach_download(&pkg_id, DownloadId(2)).unwrap();
+    let members = repo.list_downloads(&pkg_id).unwrap();
+    assert_eq!(members, vec![DownloadId(1), DownloadId(2)]);
+}
+
+#[test]
+fn in_memory_package_repository_attach_download_moves_from_other_package() {
+    let repo = InMemoryPackageRepository::new();
+    let a = PackageId::new("pkg-a");
+    let b = PackageId::new("pkg-b");
+    repo.save(&Package::new(
+        a.clone(),
+        "A".to_string(),
+        PackageSourceType::Manual,
+        0,
+    ))
+    .unwrap();
+    repo.save(&Package::new(
+        b.clone(),
+        "B".to_string(),
+        PackageSourceType::Manual,
+        0,
+    ))
+    .unwrap();
+    repo.attach_download(&a, DownloadId(99)).unwrap();
+    repo.attach_download(&b, DownloadId(99)).unwrap();
+    assert!(repo.list_downloads(&a).unwrap().is_empty());
+    assert_eq!(repo.list_downloads(&b).unwrap(), vec![DownloadId(99)]);
+}
+
+#[test]
+fn in_memory_package_repository_detach_download_removes_member() {
+    let repo = InMemoryPackageRepository::new();
+    let pkg_id = PackageId::new("pkg-det");
+    repo.save(&Package::new(
+        pkg_id.clone(),
+        "Det".to_string(),
+        PackageSourceType::Manual,
+        0,
+    ))
+    .unwrap();
+    repo.attach_download(&pkg_id, DownloadId(5)).unwrap();
+    repo.detach_download(DownloadId(5)).unwrap();
+    assert!(repo.list_downloads(&pkg_id).unwrap().is_empty());
+    // Idempotent: detaching again is a no-op.
+    repo.detach_download(DownloadId(5)).unwrap();
+}
+
+#[test]
+fn in_memory_package_repository_attach_download_same_package_preserves_position() {
+    let repo = InMemoryPackageRepository::new();
+    let pkg_id = PackageId::new("pkg-stable");
+    repo.save(&Package::new(
+        pkg_id.clone(),
+        "Stable".to_string(),
+        PackageSourceType::Manual,
+        0,
+    ))
+    .unwrap();
+    repo.attach_download(&pkg_id, DownloadId(10)).unwrap();
+    repo.attach_download(&pkg_id, DownloadId(20)).unwrap();
+
+    // Re-attach the first download; without the no-op guard it would
+    // shift to the end of the bucket and inflate queue_position.
+    repo.attach_download(&pkg_id, DownloadId(10)).unwrap();
+
+    let members = repo.list_downloads(&pkg_id).unwrap();
+    assert_eq!(
+        members,
+        vec![DownloadId(10), DownloadId(20)],
+        "same-package reattach must not reorder existing members"
+    );
+}
+
+#[test]
+fn in_memory_package_repository_find_package_of_download_returns_owner() {
+    let repo = InMemoryPackageRepository::new();
+    let pkg_id = PackageId::new("pkg-find");
+    repo.save(&Package::new(
+        pkg_id.clone(),
+        "Find".to_string(),
+        PackageSourceType::Manual,
+        0,
+    ))
+    .unwrap();
+    repo.attach_download(&pkg_id, DownloadId(42)).unwrap();
+
+    let owner = repo.find_package_of_download(DownloadId(42)).unwrap();
+    assert_eq!(owner, Some(pkg_id));
+    assert!(
+        repo.find_package_of_download(DownloadId(404))
+            .unwrap()
+            .is_none(),
+        "missing or loose downloads return None"
+    );
+}
+
 #[test]
 fn in_memory_package_repository_list_downloads_orders_by_queue_position() {
     // Mock must mirror the SQLite adapter's contract: members come back
@@ -1096,9 +1260,9 @@ fn in_memory_package_repository_list_downloads_orders_by_queue_position() {
     ))
     .unwrap();
     // Insert out of order on purpose.
-    repo.attach_download(&pkg_id, 5, DownloadId(50));
-    repo.attach_download(&pkg_id, 1, DownloadId(10));
-    repo.attach_download(&pkg_id, 3, DownloadId(30));
+    repo.seed_member(&pkg_id, 5, DownloadId(50));
+    repo.seed_member(&pkg_id, 1, DownloadId(10));
+    repo.seed_member(&pkg_id, 3, DownloadId(30));
 
     let members = repo.list_downloads(&pkg_id).unwrap();
     assert_eq!(
diff --git a/src-tauri/src/lib.rs b/src-tauri/src/lib.rs
index 45d661d..9c36ab8 100644
--- a/src-tauri/src/lib.rs
+++ b/src-tauri/src/lib.rs
@@ -79,7 +79,9 @@ pub use adapters::driving::tauri_ipc::{
     download_reorder_queue, download_resume, download_resume_all, download_retry,
     download_set_priority, download_start, download_verify_checksum, history_clear,
     history_delete_entry, history_export, history_get_by_id, history_list,
-    history_purge_older_than, history_search, link_resolve, plugin_config_get,
+    history_purge_older_than, history_search, link_resolve, package_add_download, package_create,
+    package_delete, package_move_to_folder, package_remove_download, package_set_password,
+    package_set_priority, package_toggle_auto_extract, package_update, plugin_config_get,
     plugin_config_update, plugin_disable, plugin_enable, plugin_install, plugin_list,
     plugin_report_broken, plugin_store_install, plugin_store_list, plugin_store_refresh,
     plugin_store_update, plugin_uninstall, reveal_in_folder, settings_get, settings_update,
@@ -162,6 +164,8 @@ pub fn run() {
     let stats_repo: Arc<dyn StatsRepository> = Arc::new(SqliteStatsRepo::new(db.clone()));
     let account_repo: Arc<dyn AccountRepository> = Arc::new(SqliteAccountRepo::new(db.clone()));
+    let package_repo: Arc<dyn PackageRepository> =
+        Arc::new(SqlitePackageRepo::new(db.clone()));
 
     // ── Plugin system ───────────────────────────────────────
     let shared_resources = Arc::new(SharedHostResources::new());
@@ -370,6 +374,7 @@ pub fn run() {
             .with_plugin_config_store(plugin_config_store.clone())
             .with_account_repo(account_repo.clone())
             .with_account_credential_store(account_credential_store)
+            .with_package_repo(package_repo.clone())
             .with_passphrase_codec(passphrase_codec),
     );
@@ -567,6 +572,15 @@ pub fn run() {
             account_list,
             account_get,
             account_traffic_get,
+            package_create,
+            package_update,
+            package_delete,
+            package_set_password,
+            package_set_priority,
+            package_move_to_folder,
+            package_toggle_auto_extract,
+            package_add_download,
+            package_remove_download,
         ])
         .run(tauri::generate_context!())
         // Tauri's run() has no meaningful recovery path — panic is intentional here