diff --git a/CHANGELOG.md b/CHANGELOG.md index 8feecd6..6e23093 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -9,6 +9,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0 ### Added +- **Packages queries** (PRD §6.3, PRD-v2 §P1.9, task 28): three CQRS query handlers (`list_packages`, `get_package`, `list_package_downloads`) wired through the `QueryBus` builder via a new `with_package_read_repo` setter. New driven port `PackageReadRepository` (`find_packages` / `find_package_by_id` / `find_package_downloads`) and `SqlitePackageReadRepo` adapter compute every package statistic (`downloads_count`, `total_bytes`, `downloaded_bytes`, `progress_percent`, `all_completed`) in a single `LEFT JOIN packages → downloads` with `GROUP BY p.id` so listing N packages costs one round-trip instead of `N+1`. `PackageFilter { source_type?, name_q? }` AND-combines filters: `source_type` is an exact match against the lowercase wire form (`container` / `playlist` / `manual` / `split_archive`) and is delegated to the SQL `WHERE` clause, while `name_q` is a case-insensitive substring match applied in Rust after the SQL fetch (Unicode-aware `to_lowercase` plus literal `contains`, so `%` / `_` need no `LIKE` escaping) so the UI can fuzzy-search package titles. Blank / whitespace-only `name_q` is treated as "no filter" so the UI can blindly forward an empty input. Aggregate progress mirrors the per-download formula (`Completed` always reports 100 even when `downloaded < total`, unknown total reports 0, otherwise `downloaded / total * 100` rounded to 1 dp); `all_completed` flips to `true` only when the package has at least one member and every member is in the `Completed` state. New read model `PackageViewDto` (`#[serde(rename_all = "camelCase")]`) re-exposes the aggregated `PackageView` to the frontend with no password / credential reference field, by construction. `list_package_downloads(id)` reuses the existing `DownloadView` so the React layer can render member rows with the same component as the main downloads list. Three Tauri IPC commands (`package_list`, `package_get`, `package_list_downloads`) registered in `invoke_handler!` and re-exported from `lib.rs`; `package_list` validates an unknown `source_type` argument up-front so callers see "invalid package source type" instead of an empty result. The runtime now wires `SqlitePackageReadRepo` to the `QueryBus` via `with_package_read_repo`. Twenty-three new unit + integration tests cover the three acceptance criteria (SQL-side stats with no N+1, fuzzy `name_q`, in-memory SQLite fixtures): aggregate vs empty package, mixed-state aggregation, all-completed flip, unknown total treated as zero, deterministic ordering by `(created_at, id)`, exact `source_type` filter, case-insensitive substring `name_q`, AND combination, blank `name_q` ignored, missing-id `None`, member ordering by `queue_position` then `id`, no leak across packages, validation errors when the read repo is missing, and DTO camelCase + no-password serialization assertions. Unblocks task 29 (Packages view, React). - **Packages commands** (PRD §6.3, PRD-v2 §P1.8, task 27): nine command handlers wired to the new `PackageRepository` and the existing `CredentialStore` via a `with_package_repo` builder on the `CommandBus`. `create_package(name, source_type, folder_path?)` generates a UUID v4 id, validates the trimmed name is non-empty, persists the aggregate and emits `DomainEvent::PackageCreated`.
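The aggregate-progress rule from the **Packages queries** entry above, restated as a minimal sketch; this mirrors `aggregate_progress_percent` in `package_read_repo.rs` further down this diff and introduces nothing new:

```rust
/// Aggregate progress for a package (illustrative restatement).
fn aggregate_progress_percent(downloaded: u64, total: u64, all_completed: bool) -> f64 {
    if all_completed {
        return 100.0; // Completed always reports 100, even when downloaded < total.
    }
    if total == 0 {
        return 0.0; // Unknown total: progress is unknowable, report 0.
    }
    let pct = downloaded as f64 / total as f64 * 100.0;
    // One decimal place, clamped so byte-counter drift never renders >100%.
    ((pct * 10.0).round() / 10.0).min(100.0)
}
```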
`update_package(id, PackagePatch)` applies a partial mutation (rename / folder / priority / auto_extract) — `folder_path` accepts `Some(Some(path))` to set, `Some(None)` to clear, `None` to leave untouched, so the frontend can distinguish "set to empty" from "unchanged". `delete_package(id, delete_downloads)` runs in two cascade modes: `false` (default) detaches every member via `PackageRepository::detach_download` so the downloads survive as standalone rows, `true` removes each member through the existing `RemoveDownloadCommand` (deletes engine state, files, and the SQLite row) before dropping the package row; the keyring entry under `vortex.package.` is best-effort cleaned in both cases. `set_package_password(id, Option<String>)` stores the secret in the OS keyring via `CredentialStore::store("vortex.package.", …)` and only writes the keyring service key (never the plaintext) onto the `packages.password` SQLite column as a marker; passing `None` clears both the keyring entry and the marker idempotently, and an explicit empty string is rejected as a validation error so callers cannot ambiguously "clear by emptying". `set_package_priority(id, priority)` validates the value through the domain `Priority` aggregate up-front (so a bad input never produces partial cascade state), persists the new value on the package row and then loops through every member returned by `list_downloads` to update each download's `priority` and emit a `DownloadPrioritySet` event per child — dangling FK members (download row missing) are skipped with a debug log instead of aborting the cascade. `move_package_to_folder(id, new_folder)` updates the package row's `folder_path` and reuses task 13's `ChangeDirectoryCommand` for each member; per-child failures are collected into a `PackageMoveOutcome { moved, failed }` and surfaced to the frontend so partial failures don't roll back the package update. `toggle_package_auto_extract(id)` flips the flag and returns the new state. `add_download_to_package(package_id, download_id)` and `remove_download_from_package(package_id, download_id)` set / clear the FK on `downloads.package_id` via the new `attach_download` / `detach_download` trait methods; both validate the package exists first so the IPC layer surfaces a clean `NotFound` for stale callers, and `attach_download` also requires the download to exist (re-attaching is idempotent). The `PackageRepository` trait gains `attach_download(&PackageId, DownloadId) -> Result<(), DomainError>` (returns `NotFound` when the download row is missing) and `detach_download(DownloadId) -> Result<(), DomainError>` (idempotent, no-op on missing row); the `SqlitePackageRepo` adapter implements both via raw `UPDATE downloads SET package_id = ? WHERE id = ?` so the one-package-per-download FK semantics match the existing `ON DELETE SET NULL` migration. Two new `DomainEvent` variants — `PackageUpdated { id }` (rename / folder / priority / password / auto_extract / membership change) and `PackageDeleted { id, delete_downloads }` (the flag mirrors the command so subscribers distinguish "package detached, downloads kept" from "everything gone" without re-reading the repo) — are forwarded by the Tauri bridge as `package-updated` and `package-deleted` (camelCase `deleteDownloads`).
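The two event variants described above, sketched for orientation (hypothetical excerpt — the real enum carries many more variants and may order fields differently):

```rust
enum DomainEvent {
    /// Rename / folder / priority / password / auto_extract / membership change.
    PackageUpdated { id: PackageId },
    /// `delete_downloads` mirrors the command flag so subscribers can tell
    /// "package detached, downloads kept" from "everything gone" without
    /// re-reading the repository.
    PackageDeleted { id: PackageId, delete_downloads: bool },
    // ... existing variants elided
}
```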
Nine Tauri IPC commands (`package_create`, `package_update`, `package_delete`, `package_set_password`, `package_set_priority`, `package_move_to_folder`, `package_toggle_auto_extract`, `package_add_download`, `package_remove_download`) registered in `invoke_handler!` and re-exported from `lib.rs`, with a new `PackagePatchDto` deserialiser whose `folder_path: Option<Option<String>>` round-trips the three-state semantics from the frontend (sketched below). The runtime now wires `SqlitePackageRepo` into the `CommandBus` via `with_package_repo`. Forty-three new unit tests against `InMemoryPackageRepo` / `InMemoryDownloadRepo` / `InMemoryCredentialStore` mocks cover every acceptance criterion: CRUD round-trip, cascade-delete vs detach, keyring-only password storage (the `packages.password` column never holds the plaintext), per-child `DownloadPrioritySet` cascade with an explicit count assertion, partial-failure outcome shape on bulk move, idempotent attach/detach, dangling-FK skip on the priority cascade, and validation paths for blank names, empty-string passwords, invalid priorities, missing repos, and unknown ids. Adapter coverage hovers at 95-99 % per file (well above the 85 % threshold). Five SQLite-level tests pin the new attach/detach semantics on a real in-memory DB. Unblocks task 29 (Packages view, React). - **Packages persistence** (PRD §6.3, PRD-v2 §P1.7, task 26): SQLite `packages` table (migration `m20260429_000007`) with the schema mandated by PRD-v2 §8 P1 — `id TEXT PRIMARY KEY`, `name`, `source_type` (`container` / `playlist` / `manual` / `split_archive`), nullable `folder_path`, nullable `password` (keyring ref), `auto_extract` (default `1`), `priority` (default `5`), `created_at`. The legacy stub `packages` table from migration 1 (BIGINT id, name only, never wired) is dropped and recreated. The migration also adds `downloads.package_id TEXT REFERENCES packages(id) ON DELETE SET NULL` plus the `idx_downloads_package` index, so deleting a package detaches its members without losing the rows. New `PackageRepository` driven port (`save` / `find_by_id` / `list` / `delete` / `list_downloads`) and `SqlitePackageRepo` adapter with sea-orm entity + `from_domain` / `into_domain` converters. Upserts preserve the original `created_at` so list ordering stays stable across re-saves; `list` orders by `(created_at asc, id asc)`; `list_downloads` orders by `queue_position asc, id asc` so the caller surfaces members in scheduling order. Domain `Package` aggregate gained the new persisted fields plus a `PackageId(String)` typed wrapper and a `PackageSourceType` enum (round-trips via `Display` / `FromStr`); `download_ids` stays in-memory (the FK on `downloads.package_id` is the source of truth on disk). `DomainEvent::PackageCreated.id` switches from `u64` to `PackageId` to match. Twenty-one new unit tests cover the four acceptance criteria (fresh + existing-DB migration, FK `ON DELETE SET NULL` semantics, full-field round-trip, ≥85 % adapter coverage), plus error paths (unknown `source_type`, priority overflow, `created_at` overflow), source-type round-trip per variant, optional fields persisting as `NULL`, `list_downloads` filtering and ordering, and the `InMemoryPackageRepository` mock used by future command / query handlers. Unblocks tasks 27 (Commands Packages), 28 (Queries Packages), 30 (auto-grouping playlist) and 31 (auto-grouping split archives).
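A sketch of the three-state `folder_path` field on `PackagePatchDto` from the **Packages commands** entry above. The `double_option` helper is an assumption about how the deserialiser distinguishes an absent key from an explicit `null`; the real wiring may differ:

```rust
use serde::{Deserialize, Deserializer};

#[derive(Deserialize)]
#[serde(rename_all = "camelCase")]
struct PackagePatchDto {
    // absent key            -> None               (leave untouched)
    // "folderPath": null    -> Some(None)         (clear)
    // "folderPath": "/mnt"  -> Some(Some("/mnt")) (set)
    #[serde(default, deserialize_with = "double_option")]
    folder_path: Option<Option<String>>,
}

// serde only calls this when the key is present, so the outer Some marks
// presence and the inner Option carries null-vs-value.
fn double_option<'de, D>(d: D) -> Result<Option<Option<String>>, D::Error>
where
    D: Deserializer<'de>,
{
    Option::<String>::deserialize(d).map(Some)
}
```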
- **Account rotation on quota** (PRD §6.4, PRD-v2 §P1.6, task 25): new `AccountRotator` application service detects quota exhaustion (HTTP `429` or `traffic_left` below a caller-supplied threshold via `is_quota_signal`), pulls the offending account out of rotation for a hoster-specific cooldown via `mark_exhausted(account_id, service_name, ttl_secs)`, and asks the existing `AccountSelector` for the next best candidate via `next_account(service, strategy) -> NextAccountOutcome`. The outcome enum distinguishes three caller-actionable states: `Picked(Account)` (use the credential), `NoneAvailable` (no enabled / non-expired account configured — fall back to the free path or surface a UI hint), and `AllExhausted { next_eligible_at_ms }` (every eligible account is on cooldown — stall the download in `Waiting` until the earliest deadline so the scheduler can retry without busy-looping). `NextAccountOutcome::error_message(service_name)` returns the PRD §6.4 standard wording (`"All accounts exhausted for {service}"` / `"No account available for {service}"`) so callers attaching the error to `Download.error` stay uniform across hosters. Cooldown lifecycle: `record_traffic_refresh(account_id, traffic_left, threshold)` clears the marker only when the upstream confirms `traffic_left >= threshold` (a `None` observation or below-threshold value leaves the marker in place so a hoster without a traffic counter cannot silently undo every `mark_exhausted`); `clear_exhausted(account_id)` is the explicit reset path, idempotent for unknown ids; expired entries are pruned lazily on the next `next_account` call so no background sweeper is needed. The exhaustion map sits behind a `std::sync::Mutex` in `AccountRotator` (intentionally NOT persisted in SQLite — a process restart wipes the cooldown, which is the desired behaviour for the 5-to-15-minute hoster reset window); a poisoned mutex surfaces as `AppError::Validation("exhausted accounts mutex poisoned")` so callers can distinguish "no candidate" from "internal state corrupted", matching `AccountSelector::pick_round_robin`'s contract. The `AllExhausted` deadline restricts its scan to accounts that actually belong to the queried service so a parallel-service entry cannot leak its cooldown into an unrelated answer. New `AccountSelector::select_best_excluding(service, strategy, exclude_ids)` extends the existing `select_best` with an exclude list (no caching, no behaviour change for empty `exclude`); the prior signature is now a thin wrapper. New `DomainEvent::AccountExhausted { id, service_name, exhausted_until_ms }` forwarded by the Tauri bridge as `account-exhausted` (camelCase `exhaustedUntilMs`). New transient `Account::exhausted_until: Option<u64>` field with `mark_exhausted` / `clear_exhausted` / `is_exhausted(now_ms)` / `exhausted_until()` methods — the field is reset to `None` by `Account::reconstruct` so the rotator's in-memory map remains the single source of truth even though SQLite roundtrips drop the marker. New `CommandBus::with_account_rotator` / `account_rotator()` builder setter and accessor wire the rotator alongside the existing `AccountSelector`.
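A sketch of caller-side handling for `NextAccountOutcome`, assuming `next_account` returns a `Result` (consistent with the poisoned-mutex error path described above); `start_with`, `fall_back_to_free`, and `stall_until` are hypothetical call sites, not real APIs:

```rust
// Hypothetical caller — only the enum shape and error_message() come
// from the entry above; everything else is a placeholder.
let outcome = rotator.next_account("1fichier", strategy)?;
match &outcome {
    NextAccountOutcome::Picked(account) => start_with(account),
    NextAccountOutcome::NoneAvailable => {
        // No enabled / non-expired account configured: record the
        // PRD §6.4 wording on the download, then take the free path.
        download.error = Some(outcome.error_message("1fichier"));
        fall_back_to_free();
    }
    NextAccountOutcome::AllExhausted { next_eligible_at_ms } => {
        // Every candidate is on cooldown: park in `Waiting` until the
        // earliest deadline so the scheduler retries without busy-looping.
        download.error = Some(outcome.error_message("1fichier"));
        stall_until(*next_eligible_at_ms);
    }
}
```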
Twenty-two new unit tests cover the four acceptance criteria (`429 → next account`, `all exhausted → AllExhausted with earliest deadline`, `traffic-refresh clears cooldown when above threshold`, full rotator + selector-exclude integration), plus edge cases: zero-TTL no-op, deadline-exclusive equality, cross-service deadline isolation, `None`-traffic refresh keeps cooldown, `404` / `500` ignored by `is_quota_signal`, threshold-equality below-but-not-above, idempotent `clear_exhausted`, lazy cooldown expiry surfaces an account back into rotation. Unblocks task 38 (vortex-mod-1fichier free + premium) which is the first hoster to wire the rotation flow. diff --git a/src-tauri/src/adapters/driven/sqlite/download_read_repo.rs b/src-tauri/src/adapters/driven/sqlite/download_read_repo.rs index 6b12fab..83dfd65 100644 --- a/src-tauri/src/adapters/driven/sqlite/download_read_repo.rs +++ b/src-tauri/src/adapters/driven/sqlite/download_read_repo.rs @@ -12,8 +12,8 @@ use crate::domain::ports::driven::download_read_repository::DownloadReadReposito use super::entities::{download, download_segment}; use super::util::{ - MIN_PLAUSIBLE_UNIX_MS, block_on, infer_timestamp_ms_from_download_id, - inferred_download_created_at_order_expr, map_db_err, safe_u32, safe_u64, + block_on, inferred_download_created_at_order_expr, map_db_err, resolve_download_created_at, + safe_u32, safe_u64, }; pub struct SqliteDownloadReadRepo { @@ -27,19 +27,7 @@ impl SqliteDownloadReadRepo { } fn read_created_at(model: &download::Model) -> u64 { - let created_at = safe_u64(model.created_at); - if created_at > 0 { - created_at - } else if let Some(inferred) = infer_timestamp_ms_from_download_id(model.id) { - inferred - } else { - let updated_at = safe_u64(model.updated_at); - if updated_at > 0 { - updated_at - } else { - MIN_PLAUSIBLE_UNIX_MS - } - } + resolve_download_created_at(model.created_at, model.id, model.updated_at) } /// Compute progress percent rounded to one decimal place. @@ -354,6 +342,7 @@ impl DownloadReadRepository for SqliteDownloadReadRepo { mod tests { use super::*; use crate::adapters::driven::sqlite::download_repo::SqliteDownloadRepo; + use crate::adapters::driven::sqlite::util::MIN_PLAUSIBLE_UNIX_MS; use crate::domain::model::download::{Download, Url}; use crate::domain::model::segment::SegmentState; use crate::domain::ports::driven::download_repository::DownloadRepository; diff --git a/src-tauri/src/adapters/driven/sqlite/mod.rs b/src-tauri/src/adapters/driven/sqlite/mod.rs index 5309309..d2326d0 100644 --- a/src-tauri/src/adapters/driven/sqlite/mod.rs +++ b/src-tauri/src/adapters/driven/sqlite/mod.rs @@ -5,6 +5,7 @@ pub mod download_repo; pub mod entities; pub mod history_repo; pub mod migrations; +pub mod package_read_repo; pub mod package_repo; pub mod plugin_config_repo; pub mod progress_bridge; diff --git a/src-tauri/src/adapters/driven/sqlite/package_read_repo.rs b/src-tauri/src/adapters/driven/sqlite/package_read_repo.rs new file mode 100644 index 0000000..2e89e50 --- /dev/null +++ b/src-tauri/src/adapters/driven/sqlite/package_read_repo.rs @@ -0,0 +1,1294 @@ +//! SQLite implementation of [`PackageReadRepository`] (CQRS read side). +//! +//! Statistics (`downloads_count`, `total_bytes`, `progress_percent`, +//! `all_completed`) are computed in a single `LEFT JOIN` between +//! `packages` and `downloads` so listing N packages costs one query +//! instead of N+1. 
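+//! +//! Note: `PackageFilter::name_q` is deliberately applied in Rust after the +//! fetch (see the comment inside `find_packages`): stock SQLite's `LOWER()` +//! only case-folds ASCII and `LIKE` treats `%` / `_` as wildcards, while +//! `str::to_lowercase` + `str::contains` are Unicode-aware and literal.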
+ +use std::collections::HashMap; + +use sea_orm::{ + ConnectionTrait, DatabaseConnection, EntityTrait, QueryFilter, Statement, sea_query::Value, +}; + +use crate::adapters::driven::sqlite::entities::{download, download_segment}; +use crate::adapters::driven::sqlite::util::{ + block_on, map_db_err, resolve_download_created_at, safe_u64, +}; +use crate::domain::error::DomainError; +use crate::domain::model::download::DownloadId; +use crate::domain::model::package::PackageId; +use crate::domain::model::views::{DownloadView, PackageFilter, PackageView}; +use crate::domain::ports::driven::package_read_repository::PackageReadRepository; + +pub struct SqlitePackageReadRepo { + db: DatabaseConnection, +} + +impl SqlitePackageReadRepo { + pub fn new(db: DatabaseConnection) -> Self { + Self { db } + } +} + +/// Round to one decimal place. Mirrors `download_read_repo` to keep the +/// progress display consistent across the UI. +fn round_one_dp(value: f64) -> f64 { + (value * 10.0).round() / 10.0 +} + +fn aggregate_progress_percent(downloaded: u64, total: u64, all_completed: bool) -> f64 { + if all_completed { + return 100.0; + } + if total == 0 { + return 0.0; + } + // Clamp to the documented `[0.0, 100.0]` contract: a non-completed + // member can persist `downloaded_bytes > total_bytes` (e.g. last + // segment over-fetch, mid-flight retry double-count) which would + // otherwise leak past 100% to the UI. + round_one_dp(downloaded as f64 / total as f64 * 100.0).min(100.0) +} + +/// Map an aggregated row back to a [`PackageView`]. Centralised so both +/// list and single-id paths apply the same coercion rules. +fn row_to_view(row: &sea_orm::QueryResult) -> Result<PackageView, DomainError> { + let id: String = row.try_get_by_index(0).map_err(map_db_err)?; + let name: String = row.try_get_by_index(1).map_err(map_db_err)?; + let source_type: String = row.try_get_by_index(2).map_err(map_db_err)?; + let folder_path: Option<String> = row.try_get_by_index(3).map_err(map_db_err)?; + let auto_extract_raw: i64 = row.try_get_by_index(4).map_err(map_db_err)?; + let priority_raw: i64 = row.try_get_by_index(5).map_err(map_db_err)?; + let created_at_raw: i64 = row.try_get_by_index(6).map_err(map_db_err)?; + let count_raw: i64 = row.try_get_by_index(7).map_err(map_db_err)?; + // SUM(...) produces NULL when no row matches the LEFT JOIN — surface + // it as 0 instead of erroring.
+ let total_bytes_raw: Option<i64> = row.try_get_by_index(8).map_err(map_db_err)?; + let downloaded_bytes_raw: Option<i64> = row.try_get_by_index(9).map_err(map_db_err)?; + let completed_count_raw: Option<i64> = row.try_get_by_index(10).map_err(map_db_err)?; + + let auto_extract = match auto_extract_raw { + 0 => false, + 1 => true, + other => { + return Err(DomainError::ValidationError(format!( + "package {id}: auto_extract {other} out of bool range", + ))); + } + }; + let priority = u8::try_from(priority_raw).map_err(|_| { + DomainError::ValidationError(format!( + "package {id}: priority {priority_raw} out of u8 range", + )) + })?; + if !(1..=10).contains(&priority) { + return Err(DomainError::ValidationError(format!( + "package {id}: priority {priority} outside [1, 10]", + ))); + } + let created_at = u64::try_from(created_at_raw).map_err(|_| { + DomainError::ValidationError(format!( + "package {id}: created_at {created_at_raw} out of u64 range", + )) + })?; + let downloads_count = safe_u64(count_raw); + let total_bytes = total_bytes_raw.map(safe_u64).unwrap_or(0); + let downloaded_bytes = downloaded_bytes_raw.map(safe_u64).unwrap_or(0); + let completed_count = completed_count_raw.map(safe_u64).unwrap_or(0); + let all_completed = downloads_count > 0 && completed_count == downloads_count; + let progress_percent = aggregate_progress_percent(downloaded_bytes, total_bytes, all_completed); + + Ok(PackageView { + id, + name, + source_type, + folder_path, + auto_extract, + priority, + created_at, + downloads_count, + total_bytes, + downloaded_bytes, + progress_percent, + all_completed, + }) +} + +// Both `total_bytes_sum` and `downloaded_bytes_sum` use the same +// per-row contribution for Completed members so they stay in lockstep: +// - state = 'Completed' → COALESCE(total_bytes, downloaded_bytes) +// - else → total_bytes / downloaded_bytes as stored +// Mixing the two would let a Completed member with NULL total_bytes +// add bytes to the numerator without adding any to the denominator, +// producing aggregate progress > 100% (e.g. one Completed NULL-total +// row alongside a small Downloading row would explode the percentage). +// Keeping the CASE symmetric also mirrors the per-download semantics +// in `compute_progress_percent_for_download`, which treats Completed +// as 100% regardless of the persisted bytes. +const PACKAGE_AGG_SELECT: &str = "SELECT \ + p.id, p.name, p.source_type, p.folder_path, p.auto_extract, p.priority, p.created_at, \ + COUNT(d.id) AS downloads_count, \ + COALESCE(SUM(CASE WHEN d.state = 'Completed' THEN COALESCE(d.total_bytes, d.downloaded_bytes) ELSE COALESCE(d.total_bytes, 0) END), 0) AS total_bytes_sum, \ + COALESCE(SUM(CASE WHEN d.state = 'Completed' THEN COALESCE(d.total_bytes, d.downloaded_bytes) ELSE d.downloaded_bytes END), 0) AS downloaded_bytes_sum, \ + COALESCE(SUM(CASE WHEN d.state = 'Completed' THEN 1 ELSE 0 END), 0) AS completed_count \ + FROM packages p LEFT JOIN downloads d ON d.package_id = p.id"; + +fn compute_progress_percent_for_download(state: &str, downloaded: u64, total: Option<u64>) -> f64 { + if state == "Completed" { + return 100.0; + } + match total { + // Same `[0.0, 100.0]` clamp as the package-level aggregate so a + // row whose persisted `downloaded_bytes` overshoots `total_bytes` + // does not render a `>100%` progress bar.
+ Some(t) if t > 0 => round_one_dp(downloaded as f64 / t as f64 * 100.0).min(100.0), + _ => 0.0, + } +} + +fn download_row_to_view( + model: &download::Model, + segments_active: u32, + segments_total: u32, +) -> Result<DownloadView, DomainError> { + let total = model.total_bytes.map(safe_u64); + let downloaded = safe_u64(model.downloaded_bytes); + let speed = safe_u64(model.speed_bytes_per_sec); + let progress_percent = compute_progress_percent_for_download(&model.state, downloaded, total); + // A `Completed` row reports 100% above, so any ETA would be + // self-contradictory ("done, X seconds remaining"). Stale rows can + // still leave `speed_bytes_per_sec > 0` with `downloaded_bytes < + // total_bytes` if the engine crashed mid-flush, so guard explicitly + // on state instead of relying on the byte counters being clean. + let eta_seconds = if model.state == "Completed" { + None + } else { + match total { + Some(t) if speed > 0 && t > downloaded => Some((t - downloaded) / speed), + _ => None, + } + }; + let state = model.state.parse().map_err(|_| { + DomainError::StorageError(format!("invalid download state in DB: {}", model.state)) + })?; + let priority_u8 = u8::try_from(model.priority).unwrap_or(5); + // Apply the same legacy fallback chain `download_read_repo` uses so + // a row that persisted `created_at = 0` does not surface a 1970 + // timestamp here while the regular Downloads view shows the + // correct id-inferred or updated_at-derived date. + let created_at = resolve_download_created_at(model.created_at, model.id, model.updated_at); + + Ok(DownloadView { + id: DownloadId(safe_u64(model.id)), + file_name: model.file_name.clone(), + url: model.url.clone(), + source_hostname: model.source_hostname.clone(), + state, + progress_percent, + speed_bytes_per_sec: speed, + downloaded_bytes: downloaded, + total_bytes: total, + eta_seconds, + segments_active, + segments_total, + module_name: model.module_name.clone(), + account_name: None, + error_message: model.error_message.clone(), + priority: priority_u8, + queue_position: model.queue_position, + created_at, + }) +} + +impl PackageReadRepository for SqlitePackageReadRepo { + fn find_packages( + &self, + filter: Option<PackageFilter>, + ) -> Result<Vec<PackageView>, DomainError> { + let mut sql = String::from(PACKAGE_AGG_SELECT); + let mut clauses: Vec<&'static str> = Vec::new(); + let mut params: Vec<Value> = Vec::new(); + let mut name_needle: Option<String> = None; + if let Some(ref f) = filter { + if let Some(ref source) = f.source_type { + clauses.push("p.source_type = ?"); + params.push(Value::from(source.clone())); + } + if let Some(ref needle) = f.name_q { + let trimmed = needle.trim(); + if !trimmed.is_empty() { + // Substring matching is filtered in Rust after the + // SQL fetch — same approach as `history_repo` — + // because stock SQLite's `LOWER()` only case-folds + // ASCII (so `LOWER('CAFÉ')` stays `'CAFÉ'`) and the + // `LIKE` wildcards `%` and `_` would otherwise need + // escaping. Rust's `to_lowercase` is Unicode-aware + // and `str::contains` treats every byte literally.
+ name_needle = Some(trimmed.to_lowercase()); + } + } + } + if !clauses.is_empty() { + sql.push_str(" WHERE "); + sql.push_str(&clauses.join(" AND ")); + } + sql.push_str(" GROUP BY p.id ORDER BY p.created_at ASC, p.id ASC"); + + block_on(async { + let rows = self + .db + .query_all(Statement::from_sql_and_values( + sea_orm::DatabaseBackend::Sqlite, + &sql, + params, + )) + .await + .map_err(map_db_err)?; + let views: Vec<PackageView> = rows.iter().map(row_to_view).collect::<Result<_, _>>()?; + Ok(match name_needle { + Some(needle) => views + .into_iter() + .filter(|v| v.name.to_lowercase().contains(&needle)) + .collect(), + None => views, + }) + }) + } + + fn find_package_by_id(&self, id: &PackageId) -> Result<Option<PackageView>, DomainError> { + let sql = format!("{PACKAGE_AGG_SELECT} WHERE p.id = ? GROUP BY p.id"); + let id_value = id.as_str().to_string(); + block_on(async { + let row = self + .db + .query_one(Statement::from_sql_and_values( + sea_orm::DatabaseBackend::Sqlite, + &sql, + [Value::from(id_value)], + )) + .await + .map_err(map_db_err)?; + match row { + None => Ok(None), + Some(r) => Ok(Some(row_to_view(&r)?)), + } + }) + } + + fn find_package_downloads(&self, id: &PackageId) -> Result<Vec<DownloadView>, DomainError> { + use sea_orm::ColumnTrait; + + let id_value = id.as_str().to_string(); + block_on(async { + // The `download::Model` does not yet expose `package_id` as a + // typed sea-orm column (the FK was added in a later + // migration), so resolve member ids through raw SQL — same + // approach `SqlitePackageRepo::list_downloads` uses on the + // write side. + let id_rows = self + .db + .query_all(Statement::from_sql_and_values( + sea_orm::DatabaseBackend::Sqlite, + "SELECT id FROM downloads WHERE package_id = ? ORDER BY queue_position ASC, id ASC", + [Value::from(id_value)], + )) + .await + .map_err(map_db_err)?; + + if id_rows.is_empty() { + return Ok(Vec::new()); + } + + let download_ids: Vec<i64> = id_rows + .iter() + .map(|r| r.try_get_by_index::<i64>(0).map_err(map_db_err)) + .collect::<Result<Vec<_>, _>>()?; + + let mut downloads: Vec<download::Model> = Vec::with_capacity(download_ids.len()); + for chunk in download_ids.chunks(SQLITE_IN_CHUNK) { + let page = download::Entity::find() + .filter(download::Column::Id.is_in(chunk.to_vec())) + .all(&self.db) + .await + .map_err(map_db_err)?; + downloads.extend(page); + } + + let mut seg_map: HashMap<i64, (u32, u32)> = HashMap::new(); + for chunk in download_ids.chunks(SQLITE_IN_CHUNK) { + let segments = download_segment::Entity::find() + .filter(download_segment::Column::DownloadId.is_in(chunk.to_vec())) + .all(&self.db) + .await + .map_err(map_db_err)?; + for seg in &segments { + let entry = seg_map.entry(seg.download_id).or_insert((0, 0)); + entry.1 = entry.1.saturating_add(1); + if seg.state == "Downloading" { + entry.0 = entry.0.saturating_add(1); + } + } + } + + // Map by id for stable lookup, then re-emit in the order the + // raw query produced (queue_position ASC, id ASC). + let mut by_id: HashMap<i64, &download::Model> = HashMap::new(); + for d in &downloads { + by_id.insert(d.id, d); + } + + download_ids + .iter() + .filter_map(|id| by_id.get(id).copied()) + .map(|d| { + let (active, total) = seg_map.get(&d.id).copied().unwrap_or((0, 0)); + download_row_to_view(d, active, total) + }) + .collect() + }) + } +} + +/// Maximum number of host parameters bound in a single `IN (...)` query +/// to stay below SQLite's `SQLITE_MAX_VARIABLE_NUMBER` ceiling. The +/// modern default is 32 766 but older builds (and embedded targets) +/// floor at 999, so we pick a value well below both.
Used to chunk the +/// member-download lookup in `find_package_downloads` for packages with +/// thousands of attached items (bulk RSS / podcast imports). +const SQLITE_IN_CHUNK: usize = 900; + +#[cfg(test)] +mod tests { + use super::*; + use crate::adapters::driven::sqlite::connection::setup_test_db; + use crate::adapters::driven::sqlite::package_repo::SqlitePackageRepo; + use crate::domain::model::package::{Package, PackageId, PackageSourceType}; + use crate::domain::ports::driven::package_repository::PackageRepository; + use sea_orm::{ConnectionTrait, DatabaseConnection, Statement}; + + fn make_package(id: &str, name: &str, source: PackageSourceType, created: u64) -> Package { + Package::new(PackageId::new(id), name.to_string(), source, created) + } + + async fn insert_download( + db: &DatabaseConnection, + id: i64, + package_id: Option<&str>, + state: &str, + total: Option<i64>, + downloaded: i64, + queue_position: i64, + ) { + let pkg = match package_id { + Some(p) => format!("'{p}'"), + None => "NULL".to_string(), + }; + let total_sql = match total { + Some(t) => t.to_string(), + None => "NULL".to_string(), + }; + let sql = format!( + "INSERT INTO downloads (id, url, file_name, state, priority, queue_position, total_bytes, downloaded_bytes, speed_bytes_per_sec, retry_count, max_retries, segments_count, source_hostname, protocol, resume_supported, destination_path, created_at, updated_at, package_id) VALUES ({id}, 'https://example.com/f.zip', 'f.zip', '{state}', 5, {queue_position}, {total_sql}, {downloaded}, 0, 0, 5, 1, 'example.com', 'https', 0, '/tmp', 1, 1, {pkg})" + ); + db.execute(Statement::from_string( + sea_orm::DatabaseBackend::Sqlite, + sql, + )) + .await + .expect("seed download"); + } + + /// Override `speed_bytes_per_sec` after `insert_download` seeded a + /// row. Kept as a one-off because every other test wants + /// `speed = 0`; only the ETA edge case needs a non-zero stale value. + async fn set_download_speed(db: &DatabaseConnection, id: i64, speed_bytes_per_sec: i64) { + db.execute(Statement::from_string( + sea_orm::DatabaseBackend::Sqlite, + format!( + "UPDATE downloads SET speed_bytes_per_sec = {speed_bytes_per_sec} WHERE id = {id}" + ), + )) + .await + .expect("update speed"); + } + + /// Force a row's `created_at` and `updated_at` columns. The default + /// `insert_download` writes `1` to both; a few tests need to assert + /// the legacy fallback chain (`created_at = 0` triggers the id / + /// updated_at chain in `resolve_download_created_at`).
+ async fn set_download_timestamps( + db: &DatabaseConnection, + id: i64, + created_at: i64, + updated_at: i64, + ) { + db.execute(Statement::from_string( + sea_orm::DatabaseBackend::Sqlite, + format!( + "UPDATE downloads SET created_at = {created_at}, updated_at = {updated_at} WHERE id = {id}" + ), + )) + .await + .expect("update timestamps"); + } + + #[tokio::test(flavor = "multi_thread", worker_threads = 2)] + async fn test_find_packages_returns_empty_when_no_packages() { + let db = setup_test_db().await.expect("test db"); + let read = SqlitePackageReadRepo::new(db); + let result = read.find_packages(None).expect("find_packages"); + assert!(result.is_empty()); + } + + #[tokio::test(flavor = "multi_thread", worker_threads = 2)] + async fn test_find_packages_returns_view_with_zero_stats_for_empty_package() { + let db = setup_test_db().await.expect("test db"); + let write = SqlitePackageRepo::new(db.clone()); + let read = SqlitePackageReadRepo::new(db); + + write + .save(&make_package("p-1", "Solo", PackageSourceType::Manual, 100)) + .expect("save"); + + let result = read.find_packages(None).unwrap(); + assert_eq!(result.len(), 1); + let v = &result[0]; + assert_eq!(v.id, "p-1"); + assert_eq!(v.name, "Solo"); + assert_eq!(v.source_type, "manual"); + assert!(v.folder_path.is_none()); + assert!(v.auto_extract); + assert_eq!(v.priority, 5); + assert_eq!(v.created_at, 100); + assert_eq!(v.downloads_count, 0); + assert_eq!(v.total_bytes, 0); + assert_eq!(v.downloaded_bytes, 0); + assert_eq!(v.progress_percent, 0.0); + assert!(!v.all_completed, "empty package must not report completed"); + } + + #[tokio::test(flavor = "multi_thread", worker_threads = 2)] + async fn test_find_packages_aggregates_member_downloads() { + let db = setup_test_db().await.expect("test db"); + let write = SqlitePackageRepo::new(db.clone()); + let read = SqlitePackageReadRepo::new(db.clone()); + + write + .save(&make_package( + "agg", + "Aggregate", + PackageSourceType::Playlist, + 42, + )) + .unwrap(); + + // 3 members: 2 partially downloaded, 1 completed (100% by state). + insert_download(&db, 1, Some("agg"), "Downloading", Some(1000), 250, 0).await; + insert_download(&db, 2, Some("agg"), "Downloading", Some(2000), 500, 1).await; + insert_download(&db, 3, Some("agg"), "Completed", Some(500), 500, 2).await; + // One unattached download must NOT influence the aggregate. + insert_download(&db, 4, None, "Downloading", Some(99_999), 99_999, 9).await; + + let result = read.find_packages(None).unwrap(); + assert_eq!(result.len(), 1); + let v = &result[0]; + assert_eq!(v.id, "agg"); + assert_eq!(v.downloads_count, 3); + assert_eq!(v.total_bytes, 3500); + assert_eq!(v.downloaded_bytes, 1250); + // 1250 / 3500 = 35.714... → 35.7 + assert!( + (v.progress_percent - 35.7).abs() < 0.01, + "progress_percent = {}", + v.progress_percent + ); + assert!(!v.all_completed, "one member still Downloading"); + } + + #[tokio::test(flavor = "multi_thread", worker_threads = 2)] + async fn test_find_packages_completed_member_with_drift_counts_full_total() { + // Regression: a Completed download whose persisted + // `downloaded_bytes` lags behind `total_bytes` (e.g. last-segment + // commit drift) is rendered as 100% by the per-row view; the + // package aggregate must agree, otherwise the UI shows a member + // at 100% sitting inside a package stuck below 100%. 
+ let db = setup_test_db().await.expect("test db"); + let write = SqlitePackageRepo::new(db.clone()); + let read = SqlitePackageReadRepo::new(db.clone()); + + write + .save(&make_package( + "drift", + "Drift", + PackageSourceType::Manual, + 1, + )) + .unwrap(); + + // 2 members: + // - id=20: Completed but downloaded_bytes (300) < total_bytes (1000) + // - id=21: Downloading at 250 / 1000 + // Old aggregate: SUM(downloaded_bytes) = 550, total = 2000 → 27.5% + // Fixed aggregate: 1000 (completed contributes total) + 250 = 1250 → 62.5% + insert_download(&db, 20, Some("drift"), "Completed", Some(1000), 300, 0).await; + insert_download(&db, 21, Some("drift"), "Downloading", Some(1000), 250, 1).await; + + let v = &read.find_packages(None).unwrap()[0]; + assert_eq!(v.downloads_count, 2); + assert_eq!(v.total_bytes, 2000); + assert_eq!( + v.downloaded_bytes, 1250, + "completed member must contribute full total_bytes, not stale downloaded_bytes", + ); + assert!( + (v.progress_percent - 62.5).abs() < 0.01, + "progress_percent = {}", + v.progress_percent, + ); + assert!(!v.all_completed); + } + + #[tokio::test(flavor = "multi_thread", worker_threads = 2)] + async fn test_find_packages_completed_member_without_total_falls_back_to_downloaded() { + // Edge case: a Completed download with total_bytes = NULL + // (extractor could not announce a size). Both numerator and + // denominator must fall back to `downloaded_bytes` for that + // row so the aggregate stays self-consistent — otherwise mixing + // it with another row would push `progress_percent > 100`. + let db = setup_test_db().await.expect("test db"); + let write = SqlitePackageRepo::new(db.clone()); + let read = SqlitePackageReadRepo::new(db.clone()); + + write + .save(&make_package( + "no-total-completed", + "Untracked done", + PackageSourceType::Manual, + 1, + )) + .unwrap(); + insert_download( + &db, + 30, + Some("no-total-completed"), + "Completed", + None, + 777, + 0, + ) + .await; + + let v = &read.find_packages(None).unwrap()[0]; + assert_eq!(v.downloads_count, 1); + assert_eq!( + v.total_bytes, 777, + "completed NULL-total row contributes downloaded_bytes to the denominator", + ); + assert_eq!( + v.downloaded_bytes, 777, + "completed NULL-total row contributes downloaded_bytes to the numerator", + ); + // Single member, all_completed → 100% via the existing branch. + assert!(v.all_completed); + assert_eq!(v.progress_percent, 100.0); + } + + #[tokio::test(flavor = "multi_thread", worker_threads = 2)] + async fn test_find_packages_mixed_state_with_unknown_completed_total_stays_under_100() { + // Regression: previously the numerator credited a Completed + // NULL-total row with its `downloaded_bytes` while the + // denominator credited 0 for it, so a small Downloading row + // alongside it could produce `progress_percent` well over 100% + // (e.g. 500 numerator / 100 denominator → 500%). Making both + // sides symmetric for the NULL-total Completed case keeps the + // ratio bounded. + let db = setup_test_db().await.expect("test db"); + let write = SqlitePackageRepo::new(db.clone()); + let read = SqlitePackageReadRepo::new(db.clone()); + + write + .save(&make_package("mix", "Mixed", PackageSourceType::Manual, 1)) + .unwrap(); + // Completed but total_bytes = NULL, downloaded_bytes = 500. + insert_download(&db, 40, Some("mix"), "Completed", None, 500, 0).await; + // Downloading at 50 / 100 — the small known-size row that used + // to expose the asymmetry. 
+ insert_download(&db, 41, Some("mix"), "Downloading", Some(100), 50, 1).await; + + let v = &read.find_packages(None).unwrap()[0]; + assert_eq!(v.downloads_count, 2); + assert_eq!(v.total_bytes, 600, "500 (completed fallback) + 100 (known)"); + assert_eq!( + v.downloaded_bytes, 550, + "500 (completed credited fully) + 50" + ); + assert!( + v.progress_percent <= 100.0, + "progress_percent must never exceed 100% (got {})", + v.progress_percent, + ); + // 550 / 600 = 91.666... → 91.7 after one-dp rounding. + assert!( + (v.progress_percent - 91.7).abs() < 0.01, + "progress_percent = {}", + v.progress_percent, + ); + assert!(!v.all_completed); + } + + #[tokio::test(flavor = "multi_thread", worker_threads = 2)] + async fn test_find_packages_all_completed_is_true_when_every_member_completed() { + let db = setup_test_db().await.expect("test db"); + let write = SqlitePackageRepo::new(db.clone()); + let read = SqlitePackageReadRepo::new(db.clone()); + + write + .save(&make_package("done", "Done", PackageSourceType::Manual, 7)) + .unwrap(); + insert_download(&db, 10, Some("done"), "Completed", Some(100), 100, 0).await; + insert_download(&db, 11, Some("done"), "Completed", Some(200), 200, 1).await; + + let v = &read.find_packages(None).unwrap()[0]; + assert!(v.all_completed); + assert_eq!(v.progress_percent, 100.0); + } + + #[tokio::test(flavor = "multi_thread", worker_threads = 2)] + async fn test_find_packages_treats_unknown_total_as_zero() { + let db = setup_test_db().await.expect("test db"); + let write = SqlitePackageRepo::new(db.clone()); + let read = SqlitePackageReadRepo::new(db.clone()); + + write + .save(&make_package( + "no-total", + "Untracked", + PackageSourceType::Manual, + 10, + )) + .unwrap(); + // total_bytes = NULL must contribute 0 to the SUM. 
+ insert_download(&db, 50, Some("no-total"), "Downloading", None, 100, 0).await; + + let v = &read.find_packages(None).unwrap()[0]; + assert_eq!(v.downloads_count, 1); + assert_eq!(v.total_bytes, 0); + assert_eq!(v.downloaded_bytes, 100); + assert_eq!( + v.progress_percent, 0.0, + "no known total => progress unknown => 0" + ); + } + + #[tokio::test(flavor = "multi_thread", worker_threads = 2)] + async fn test_find_packages_orders_by_created_at_then_id() { + let db = setup_test_db().await.expect("test db"); + let write = SqlitePackageRepo::new(db.clone()); + let read = SqlitePackageReadRepo::new(db); + + write + .save(&make_package("c", "C", PackageSourceType::Manual, 20)) + .unwrap(); + write + .save(&make_package("a", "A", PackageSourceType::Manual, 10)) + .unwrap(); + write + .save(&make_package("b", "B", PackageSourceType::Manual, 10)) + .unwrap(); + + let result = read.find_packages(None).unwrap(); + let ids: Vec<&str> = result.iter().map(|p| p.id.as_str()).collect(); + assert_eq!(ids, vec!["a", "b", "c"]); + } + + #[tokio::test(flavor = "multi_thread", worker_threads = 2)] + async fn test_find_packages_filter_by_source_type() { + let db = setup_test_db().await.expect("test db"); + let write = SqlitePackageRepo::new(db.clone()); + let read = SqlitePackageReadRepo::new(db); + + write + .save(&make_package("m", "M", PackageSourceType::Manual, 1)) + .unwrap(); + write + .save(&make_package("p", "P", PackageSourceType::Playlist, 2)) + .unwrap(); + write + .save(&make_package("c", "C", PackageSourceType::Container, 3)) + .unwrap(); + + let result = read + .find_packages(Some(PackageFilter { + source_type: Some("playlist".to_string()), + name_q: None, + })) + .unwrap(); + assert_eq!(result.len(), 1); + assert_eq!(result[0].id, "p"); + } + + #[tokio::test(flavor = "multi_thread", worker_threads = 2)] + async fn test_find_packages_filter_by_name_q_is_case_insensitive_substring() { + let db = setup_test_db().await.expect("test db"); + let write = SqlitePackageRepo::new(db.clone()); + let read = SqlitePackageReadRepo::new(db); + + write + .save(&make_package( + "1", + "Holiday Photos 2025", + PackageSourceType::Manual, + 1, + )) + .unwrap(); + write + .save(&make_package( + "2", + "Music — Holidays", + PackageSourceType::Manual, + 2, + )) + .unwrap(); + write + .save(&make_package("3", "Misc", PackageSourceType::Manual, 3)) + .unwrap(); + + let result = read + .find_packages(Some(PackageFilter { + source_type: None, + name_q: Some("HOLIDAY".to_string()), + })) + .unwrap(); + let ids: Vec<&str> = result.iter().map(|p| p.id.as_str()).collect(); + assert_eq!(ids, vec!["1", "2"]); + } + + #[tokio::test(flavor = "multi_thread", worker_threads = 2)] + async fn test_find_packages_filter_combines_source_and_name_q_with_and() { + let db = setup_test_db().await.expect("test db"); + let write = SqlitePackageRepo::new(db.clone()); + let read = SqlitePackageReadRepo::new(db); + + write + .save(&make_package( + "p1", + "Holiday Mix", + PackageSourceType::Playlist, + 1, + )) + .unwrap(); + write + .save(&make_package( + "m1", + "Holiday Manual", + PackageSourceType::Manual, + 2, + )) + .unwrap(); + + let result = read + .find_packages(Some(PackageFilter { + source_type: Some("playlist".to_string()), + name_q: Some("holiday".to_string()), + })) + .unwrap(); + assert_eq!(result.len(), 1); + assert_eq!(result[0].id, "p1"); + } + + #[tokio::test(flavor = "multi_thread", worker_threads = 2)] + async fn test_find_packages_filter_blank_name_q_is_ignored() { + let db = setup_test_db().await.expect("test db"); + let 
write = SqlitePackageRepo::new(db.clone()); + let read = SqlitePackageReadRepo::new(db); + + write + .save(&make_package("p1", "X", PackageSourceType::Manual, 1)) + .unwrap(); + + let result = read + .find_packages(Some(PackageFilter { + source_type: None, + name_q: Some(" ".to_string()), + })) + .unwrap(); + assert_eq!(result.len(), 1); + } + + #[tokio::test(flavor = "multi_thread", worker_threads = 2)] + async fn test_find_packages_filter_name_q_treats_percent_literally() { + // `%` is the SQL `LIKE` "match anything" wildcard. Filtering in + // Rust via `str::contains` (rather than pushing into a SQL + // pattern) means `%` is just another character, so a query for + // `100%` only matches rows whose name literally contains `100%`. + let db = setup_test_db().await.expect("test db"); + let write = SqlitePackageRepo::new(db.clone()); + let read = SqlitePackageReadRepo::new(db); + + write + .save(&make_package("a", "100% off", PackageSourceType::Manual, 1)) + .unwrap(); + write + .save(&make_package( + "b", + "100 packages", + PackageSourceType::Manual, + 2, + )) + .unwrap(); + + let result = read + .find_packages(Some(PackageFilter { + source_type: None, + name_q: Some("100%".to_string()), + })) + .unwrap(); + assert_eq!(result.len(), 1, "only the literal `100%` row matches"); + assert_eq!(result[0].id, "a"); + } + + #[tokio::test(flavor = "multi_thread", worker_threads = 2)] + async fn test_find_packages_filter_name_q_treats_underscore_literally() { + // In SQL `LIKE`, `_` matches any single character — so `foo_bar` + // would match `foo-bar`, `foo bar`, etc. Filtering in Rust treats + // `_` literally. + let db = setup_test_db().await.expect("test db"); + let write = SqlitePackageRepo::new(db.clone()); + let read = SqlitePackageReadRepo::new(db); + + write + .save(&make_package("a", "foo_bar", PackageSourceType::Manual, 1)) + .unwrap(); + write + .save(&make_package("b", "foo-bar", PackageSourceType::Manual, 2)) + .unwrap(); + write + .save(&make_package("c", "fooXbar", PackageSourceType::Manual, 3)) + .unwrap(); + + let result = read + .find_packages(Some(PackageFilter { + source_type: None, + name_q: Some("foo_bar".to_string()), + })) + .unwrap(); + assert_eq!(result.len(), 1, "only the literal underscore matches"); + assert_eq!(result[0].id, "a"); + } + + #[tokio::test(flavor = "multi_thread", worker_threads = 2)] + async fn test_find_packages_filter_name_q_treats_backslash_literally() { + // Filtering in Rust means `\` is just a byte; no escape sequence + // can swallow a wrapping wildcard or invalidate the pattern. + let db = setup_test_db().await.expect("test db"); + let write = SqlitePackageRepo::new(db.clone()); + let read = SqlitePackageReadRepo::new(db); + + write + .save(&make_package( + "a", + r"path\to\file", + PackageSourceType::Manual, + 1, + )) + .unwrap(); + write + .save(&make_package( + "b", + "path/to/file", + PackageSourceType::Manual, + 2, + )) + .unwrap(); + + let result = read + .find_packages(Some(PackageFilter { + source_type: None, + name_q: Some(r"\to\".to_string()), + })) + .unwrap(); + assert_eq!(result.len(), 1); + assert_eq!(result[0].id, "a"); + } + + #[tokio::test(flavor = "multi_thread", worker_threads = 2)] + async fn test_find_packages_filter_name_q_matches_non_ascii_case_insensitively() { + // Stock SQLite's `LOWER()` only case-folds ASCII, so a SQL + // `LOWER(p.name) LIKE` clause would miss `CAFÉ` when the user + // searches `café`. 
Folding both sides in Rust via + // `str::to_lowercase` (which is Unicode-aware) keeps the + // advertised case-insensitive semantics for any script. + let db = setup_test_db().await.expect("test db"); + let write = SqlitePackageRepo::new(db.clone()); + let read = SqlitePackageReadRepo::new(db); + + write + .save(&make_package( + "a", + "CAFÉ Special", + PackageSourceType::Manual, + 1, + )) + .unwrap(); + write + .save(&make_package( + "b", + "Ärger Folder", + PackageSourceType::Manual, + 2, + )) + .unwrap(); + write + .save(&make_package( + "c", + "Plain ASCII", + PackageSourceType::Manual, + 3, + )) + .unwrap(); + + let result_cafe = read + .find_packages(Some(PackageFilter { + source_type: None, + name_q: Some("café".to_string()), + })) + .unwrap(); + assert_eq!( + result_cafe.len(), + 1, + "lowercase needle must match uppercase É" + ); + assert_eq!(result_cafe[0].id, "a"); + + let result_arger = read + .find_packages(Some(PackageFilter { + source_type: None, + name_q: Some("ärger".to_string()), + })) + .unwrap(); + assert_eq!(result_arger.len(), 1); + assert_eq!(result_arger[0].id, "b"); + } + + #[tokio::test(flavor = "multi_thread", worker_threads = 2)] + async fn test_find_package_by_id_returns_none_when_missing() { + let db = setup_test_db().await.expect("test db"); + let read = SqlitePackageReadRepo::new(db); + let result = read + .find_package_by_id(&PackageId::new("ghost")) + .expect("query"); + assert!(result.is_none()); + } + + #[tokio::test(flavor = "multi_thread", worker_threads = 2)] + async fn test_find_package_by_id_returns_aggregated_view() { + let db = setup_test_db().await.expect("test db"); + let write = SqlitePackageRepo::new(db.clone()); + let read = SqlitePackageReadRepo::new(db.clone()); + + write + .save(&make_package( + "single", + "Single", + PackageSourceType::Manual, + 1, + )) + .unwrap(); + insert_download(&db, 60, Some("single"), "Downloading", Some(1000), 250, 0).await; + + let v = read + .find_package_by_id(&PackageId::new("single")) + .unwrap() + .expect("present"); + assert_eq!(v.id, "single"); + assert_eq!(v.downloads_count, 1); + assert_eq!(v.total_bytes, 1000); + assert_eq!(v.downloaded_bytes, 250); + assert!((v.progress_percent - 25.0).abs() < 0.01); + } + + #[tokio::test(flavor = "multi_thread", worker_threads = 2)] + async fn test_find_package_downloads_returns_empty_for_missing_package() { + let db = setup_test_db().await.expect("test db"); + let read = SqlitePackageReadRepo::new(db); + let result = read + .find_package_downloads(&PackageId::new("never")) + .unwrap(); + assert!(result.is_empty()); + } + + #[tokio::test(flavor = "multi_thread", worker_threads = 2)] + async fn test_find_package_downloads_returns_members_ordered_by_queue_position() { + let db = setup_test_db().await.expect("test db"); + let write = SqlitePackageRepo::new(db.clone()); + let read = SqlitePackageReadRepo::new(db.clone()); + + write + .save(&make_package("ord", "Ord", PackageSourceType::Manual, 1)) + .unwrap(); + insert_download(&db, 700, Some("ord"), "Downloading", Some(100), 50, 5).await; + insert_download(&db, 701, Some("ord"), "Downloading", Some(100), 25, 1).await; + insert_download(&db, 702, Some("ord"), "Downloading", Some(100), 75, 3).await; + // Unattached must NOT appear. 
+ insert_download(&db, 999, None, "Downloading", Some(100), 50, 0).await; + + let result = read.find_package_downloads(&PackageId::new("ord")).unwrap(); + let ids: Vec<u64> = result.iter().map(|d| d.id.0).collect(); + assert_eq!(ids, vec![701, 702, 700]); + assert!(result.iter().all(|d| d.id.0 != 999)); + } + + #[tokio::test(flavor = "multi_thread", worker_threads = 2)] + async fn test_find_package_downloads_progress_matches_individual_download() { + let db = setup_test_db().await.expect("test db"); + let write = SqlitePackageRepo::new(db.clone()); + let read = SqlitePackageReadRepo::new(db.clone()); + + write + .save(&make_package("prog", "Prog", PackageSourceType::Manual, 1)) + .unwrap(); + insert_download(&db, 800, Some("prog"), "Downloading", Some(1000), 333, 0).await; + insert_download(&db, 801, Some("prog"), "Completed", Some(2000), 1500, 1).await; + + let views = read + .find_package_downloads(&PackageId::new("prog")) + .unwrap(); + assert_eq!(views.len(), 2); + // 333 / 1000 = 33.3 + assert!((views[0].progress_percent - 33.3).abs() < 0.01); + // Completed always reports 100 even when downloaded < total. + assert_eq!(views[1].progress_percent, 100.0); + } + + #[tokio::test(flavor = "multi_thread", worker_threads = 2)] + async fn test_find_packages_progress_clamps_above_one_hundred_percent() { + // A non-Completed row whose persisted `downloaded_bytes` exceeds + // `total_bytes` (segment over-fetch, retry double-count) must + // still render as 100% — the documented contract is `[0.0, + // 100.0]`, not raw arithmetic. + let db = setup_test_db().await.expect("test db"); + let write = SqlitePackageRepo::new(db.clone()); + let read = SqlitePackageReadRepo::new(db.clone()); + + write + .save(&make_package( + "overshoot", + "Overshoot", + PackageSourceType::Manual, + 1, + )) + .unwrap(); + // Downloading at 1500 / 1000 → 150% before clamping. + insert_download( + &db, + 900, + Some("overshoot"), + "Downloading", + Some(1000), + 1500, + 0, + ) + .await; + + let pkg = &read.find_packages(None).unwrap()[0]; + assert_eq!( + pkg.progress_percent, 100.0, + "package aggregate must clamp >100% to 100%", + ); + + let row = &read + .find_package_downloads(&PackageId::new("overshoot")) + .unwrap()[0]; + assert_eq!( + row.progress_percent, 100.0, + "per-row view must clamp >100% to 100%", + ); + } + + #[tokio::test(flavor = "multi_thread", worker_threads = 2)] + async fn test_find_package_downloads_chunks_oversized_in_query() { + // With a package holding more members than `SQLITE_IN_CHUNK`, + // a single `Id IN (?, ?, ...)` would exceed + // `SQLITE_MAX_VARIABLE_NUMBER` on builds capped at 999. The + // implementation must chunk the lookup so no batch exceeds the + // limit, and still return every member in queue order.
+ let db = setup_test_db().await.expect("test db"); + let write = SqlitePackageRepo::new(db.clone()); + let read = SqlitePackageReadRepo::new(db.clone()); + + write + .save(&make_package( + "bulk", + "Bulk", + PackageSourceType::Playlist, + 1, + )) + .unwrap(); + + let count: i64 = (SQLITE_IN_CHUNK as i64) + 50; + for i in 0..count { + insert_download( + &db, + 10_000 + i, + Some("bulk"), + "Downloading", + Some(100), + 10, + i, + ) + .await; + } + + let views = read + .find_package_downloads(&PackageId::new("bulk")) + .unwrap(); + assert_eq!(views.len(), count as usize, "every chunked row returned"); + // Spot-check ordering at the chunk boundary: row at index 900 + // (the first of the second chunk) must keep its `queue_position` + // ASC ordering (monotonic with id since both increase together). + for (idx, view) in views.iter().enumerate() { + assert_eq!( + view.id.0, + 10_000 + idx as u64, + "chunked merge preserved insertion order at index {idx}", + ); + } + } + + #[tokio::test(flavor = "multi_thread", worker_threads = 2)] + async fn test_find_package_downloads_eta_is_none_for_completed_with_stale_speed() { + // A `Completed` row reports 100% progress, so any ETA would be + // self-contradictory. Stale persisted data can leave + // `speed_bytes_per_sec > 0` and `downloaded_bytes < total_bytes` + // (engine crashed mid-flush, manual state edit). The view must + // suppress ETA based on state, not on whether the bytes look + // clean. + let db = setup_test_db().await.expect("test db"); + let write = SqlitePackageRepo::new(db.clone()); + let read = SqlitePackageReadRepo::new(db.clone()); + + write + .save(&make_package("eta", "Eta", PackageSourceType::Manual, 1)) + .unwrap(); + // Completed with bytes that would otherwise produce a positive ETA: + // total=1000, downloaded=600, speed=100 → naive (1000-600)/100 = 4s. + insert_download(&db, 950, Some("eta"), "Completed", Some(1000), 600, 0).await; + set_download_speed(&db, 950, 100).await; + // Sanity: a Downloading row with the same numbers still gets ETA. + insert_download(&db, 951, Some("eta"), "Downloading", Some(1000), 600, 1).await; + set_download_speed(&db, 951, 100).await; + + let views = read.find_package_downloads(&PackageId::new("eta")).unwrap(); + assert_eq!(views.len(), 2); + let completed = views.iter().find(|v| v.id.0 == 950).expect("completed row"); + assert_eq!(completed.progress_percent, 100.0); + assert_eq!(completed.eta_seconds, None, "completed must suppress ETA"); + let downloading = views + .iter() + .find(|v| v.id.0 == 951) + .expect("downloading row"); + assert_eq!(downloading.eta_seconds, Some(4)); + } + + #[tokio::test(flavor = "multi_thread", worker_threads = 2)] + async fn test_find_package_downloads_resolves_legacy_zero_created_at() { + // Legacy rows persisted before the timestamp backfill landed + // can have `created_at = 0`. The Downloads view recovers a real + // timestamp via id-inferred → updated_at → MIN_PLAUSIBLE + // fallback (`download_read_repo::read_created_at`). The + // package member view must apply the same chain or it will + // surface 1970 dates and break clients that key off + // `created_at` as a secondary sort. 
+ use crate::adapters::driven::sqlite::util::MIN_PLAUSIBLE_UNIX_MS; + + let db = setup_test_db().await.expect("test db"); + let write = SqlitePackageRepo::new(db.clone()); + let read = SqlitePackageReadRepo::new(db.clone()); + + write + .save(&make_package( + "legacy", + "Legacy", + PackageSourceType::Manual, + 1, + )) + .unwrap(); + + // Snowflake-style id: high bits encode the creation ms. + let inferred_ms: u64 = 1_700_000_000_000; + let snowflake_id: i64 = ((inferred_ms << 12) | 1) as i64; + insert_download( + &db, + snowflake_id, + Some("legacy"), + "Downloading", + Some(100), + 10, + 0, + ) + .await; + set_download_timestamps(&db, snowflake_id, 0, 0).await; + + // No id timestamp + no updated_at → MIN_PLAUSIBLE_UNIX_MS. + let bare_id: i64 = 7; + insert_download( + &db, + bare_id, + Some("legacy"), + "Downloading", + Some(100), + 10, + 1, + ) + .await; + set_download_timestamps(&db, bare_id, 0, 0).await; + + // No id timestamp but a positive updated_at → that updated_at. + let upd_only_id: i64 = 8; + let upd_only_ts: i64 = 1_650_000_000_000; + insert_download( + &db, + upd_only_id, + Some("legacy"), + "Downloading", + Some(100), + 10, + 2, + ) + .await; + set_download_timestamps(&db, upd_only_id, 0, upd_only_ts).await; + + let views = read + .find_package_downloads(&PackageId::new("legacy")) + .unwrap(); + assert_eq!(views.len(), 3); + + let snow = views + .iter() + .find(|v| v.id.0 == snowflake_id as u64) + .expect("snowflake row"); + assert_eq!( + snow.created_at, inferred_ms, + "snowflake id must back-derive its creation ms", + ); + + let bare = views + .iter() + .find(|v| v.id.0 == bare_id as u64) + .expect("bare row"); + assert_eq!( + bare.created_at, MIN_PLAUSIBLE_UNIX_MS, + "no source of truth → sentinel anchor", + ); + + let upd = views + .iter() + .find(|v| v.id.0 == upd_only_id as u64) + .expect("updated_at row"); + assert_eq!( + upd.created_at, upd_only_ts as u64, + "fallback to updated_at when id has no embedded ms", + ); + } +} diff --git a/src-tauri/src/adapters/driven/sqlite/util.rs b/src-tauri/src/adapters/driven/sqlite/util.rs index bb93c7b..594f445 100644 --- a/src-tauri/src/adapters/driven/sqlite/util.rs +++ b/src-tauri/src/adapters/driven/sqlite/util.rs @@ -38,6 +38,33 @@ pub fn infer_timestamp_ms_from_download_id(raw_id: i64) -> Option<u64> { (ts >= MIN_PLAUSIBLE_UNIX_MS).then_some(ts) } +/// Resolve a download row's `created_at` timestamp with the same +/// fallback chain SQL ordering uses (`inferred_download_created_at_order_expr`): +/// 1. `created_at` when persisted as a positive value +/// 2. timestamp inferred from the snowflake-style id high bits +/// 3. `updated_at` when positive +/// 4. `MIN_PLAUSIBLE_UNIX_MS` (the sentinel anchor) +/// +/// Legacy rows persisted before the timestamp backfill landed often have +/// `created_at = 0`; reading the raw column there would surface "1970" +/// dates and break the secondary sort key in any view that consumes +/// download rows.
+pub fn resolve_download_created_at(raw_created_at: i64, raw_id: i64, raw_updated_at: i64) -> u64 { + let created_at = safe_u64(raw_created_at); + if created_at > 0 { + return created_at; + } + if let Some(inferred) = infer_timestamp_ms_from_download_id(raw_id) { + return inferred; + } + let updated_at = safe_u64(raw_updated_at); + if updated_at > 0 { + updated_at + } else { + MIN_PLAUSIBLE_UNIX_MS + } +} + pub fn current_timestamp_ms() -> u64 { std::time::SystemTime::now() .duration_since(std::time::UNIX_EPOCH) diff --git a/src-tauri/src/adapters/driving/tauri_ipc.rs b/src-tauri/src/adapters/driving/tauri_ipc.rs index ebd9df5..d26c642 100644 --- a/src-tauri/src/adapters/driving/tauri_ipc.rs +++ b/src-tauri/src/adapters/driving/tauri_ipc.rs @@ -33,8 +33,9 @@ use crate::application::commands::{ use crate::application::error::AppError; use crate::application::queries::{ AccountFilter, CountDownloadsByStateQuery, GetAccountQuery, GetAccountTrafficQuery, - GetDownloadDetailQuery, GetDownloadsQuery, GetHistoryEntryQuery, GetPluginConfigQuery, - GetStatsQuery, ListAccountsQuery, ListHistoryQuery, ListPluginsQuery, SearchHistoryQuery, + GetDownloadDetailQuery, GetDownloadsQuery, GetHistoryEntryQuery, GetPackageQuery, + GetPluginConfigQuery, GetStatsQuery, ListAccountsQuery, ListHistoryQuery, + ListPackageDownloadsQuery, ListPackagesQuery, ListPluginsQuery, SearchHistoryQuery, TopModulesQuery, }; use crate::application::query_bus::QueryBus; @@ -42,6 +43,7 @@ use crate::application::read_models::account_view::{AccountTrafficDto, AccountVi use crate::application::read_models::download_detail_view::DownloadDetailViewDto; use crate::application::read_models::download_view::DownloadViewDto; use crate::application::read_models::history_view::HistoryViewDto; +use crate::application::read_models::package_view::PackageViewDto; use crate::application::read_models::plugin_config_view::PluginConfigView; use crate::application::read_models::plugin_store_view::PluginStoreEntryDto; use crate::application::read_models::plugin_view::PluginViewDto; @@ -52,8 +54,8 @@ use crate::domain::model::config::{AppConfig, ConfigPatch}; use crate::domain::model::download::{DownloadId, DownloadState}; use crate::domain::model::package::{PackageId, PackageSourceType}; use crate::domain::model::views::{ - DownloadFilter, HistoryFilter, HistorySort, HistorySortField, SortDirection, SortField, - SortOrder, StatsPeriod, + DownloadFilter, HistoryFilter, HistorySort, HistorySortField, PackageFilter, SortDirection, + SortField, SortOrder, StatsPeriod, }; use crate::domain::ports::driven::PluginLoader; @@ -3069,6 +3071,64 @@ pub async fn package_remove_download( .map_err(|e| e.to_string()) } +#[tauri::command] +pub async fn package_list( + state: State<'_, AppState>, + source_type: Option<String>, + name_q: Option<String>, +) -> Result<Vec<PackageViewDto>, String> { + let source_type = source_type + .map(|s| s.trim().to_string()) + .filter(|s| !s.is_empty()); + if let Some(ref raw) = source_type { + // Validate eagerly so callers see "invalid source type" instead + // of an empty result set.
+ parse_package_source_type(raw)?; + } + let name_q = name_q + .map(|s| s.trim().to_string()) + .filter(|s| !s.is_empty()); + let filter = if source_type.is_none() && name_q.is_none() { + None + } else { + Some(PackageFilter { + source_type, + name_q, + }) + }; + state + .query_bus + .handle_list_packages(ListPackagesQuery { filter }) + .await + .map_err(|e| e.to_string()) +} + +#[tauri::command] +pub async fn package_get(state: State<'_, AppState>, id: String) -> Result<PackageViewDto, String> { + state + .query_bus + .handle_get_package(GetPackageQuery { + id: PackageId::new(id), + }) + .await + .map_err(|e| e.to_string()) +} + +#[tauri::command] +pub async fn package_list_downloads( + state: State<'_, AppState>, + id: String, +) -> Result<Vec<DownloadViewDto>, String> { + state + .query_bus + .handle_list_package_downloads(ListPackageDownloadsQuery { + id: PackageId::new(id), + }) + .await + .map(|views| views.into_iter().map(DownloadViewDto::from).collect()) + .map_err(|e| e.to_string()) +} + #[cfg(test)] mod tests { use super::{ diff --git a/src-tauri/src/application/queries/get_package.rs b/src-tauri/src/application/queries/get_package.rs new file mode 100644 index 0000000..bf16fae --- /dev/null +++ b/src-tauri/src/application/queries/get_package.rs @@ -0,0 +1,79 @@ +//! Handler for [`GetPackageQuery`]. +//! +//! Returns a single package as a [`PackageViewDto`] or +//! [`AppError::NotFound`] when no row matches the requested id. + +use crate::application::error::AppError; +use crate::application::query_bus::QueryBus; +use crate::application::read_models::package_view::PackageViewDto; + +impl QueryBus { + pub async fn handle_get_package( + &self, + query: super::GetPackageQuery, + ) -> Result<PackageViewDto, AppError> { + let repo = self + .package_read_repo() + .ok_or_else(|| AppError::Validation("package read repository not configured".into()))?; + let view = repo + .find_package_by_id(&query.id)?
+ .ok_or_else(|| AppError::NotFound(format!("package {}", query.id.as_str())))?; + Ok(view.into()) + } +} + +#[cfg(test)] +mod tests { + use std::sync::Arc; + + use crate::application::error::AppError; + use crate::application::queries::GetPackageQuery; + use crate::application::test_support::{ + InMemoryPackageReadRepo, query_bus_with_packages, sample_package_view, + }; + use crate::domain::model::package::PackageId; + + #[tokio::test] + async fn test_get_package_returns_dto_when_found() { + let repo = Arc::new(InMemoryPackageReadRepo::new()); + repo.insert(sample_package_view("p-1", "One", "manual", 5)); + let bus = query_bus_with_packages(repo); + + let dto = bus + .handle_get_package(GetPackageQuery { + id: PackageId::new("p-1"), + }) + .await + .unwrap(); + assert_eq!(dto.id, "p-1"); + assert_eq!(dto.name, "One"); + assert_eq!(dto.source_type, "manual"); + } + + #[tokio::test] + async fn test_get_package_returns_not_found_when_missing() { + let repo = Arc::new(InMemoryPackageReadRepo::new()); + let bus = query_bus_with_packages(repo); + let err = bus + .handle_get_package(GetPackageQuery { + id: PackageId::new("ghost"), + }) + .await + .expect_err("ghost id"); + assert!(matches!(err, AppError::NotFound(msg) if msg.contains("ghost"))); + } + + #[tokio::test] + async fn test_get_package_returns_validation_error_when_repo_missing() { + let bus = crate::application::test_support::make_history_query_bus(Arc::new( + crate::application::test_support::NoopHistoryRepo, + )); + let err = bus + .handle_get_package(GetPackageQuery { + id: PackageId::new("p-1"), + }) + .await + .expect_err("missing repo"); + assert!(matches!(err, AppError::Validation(_))); + } +} diff --git a/src-tauri/src/application/queries/list_package_downloads.rs b/src-tauri/src/application/queries/list_package_downloads.rs new file mode 100644 index 0000000..6f252c6 --- /dev/null +++ b/src-tauri/src/application/queries/list_package_downloads.rs @@ -0,0 +1,101 @@ +//! Handler for [`ListPackageDownloadsQuery`]. +//! +//! Returns the [`DownloadView`] rows currently attached to the package, +//! ordered by `queue_position` ascending then `id` ascending. Reuses the +//! existing `DownloadView` shape so the React layer can render member rows +//! with the same component as the main downloads list. + +use crate::application::error::AppError; +use crate::application::query_bus::QueryBus; +use crate::domain::model::views::DownloadView; + +impl QueryBus { + pub async fn handle_list_package_downloads( + &self, + query: super::ListPackageDownloadsQuery, + ) -> Result<Vec<DownloadView>, AppError> { + let repo = self + .package_read_repo() + .ok_or_else(|| AppError::Validation("package read repository not configured".into()))?; + Ok(repo.find_package_downloads(&query.id)?)
+ } +} + +#[cfg(test)] +mod tests { + use std::sync::Arc; + + use crate::application::error::AppError; + use crate::application::queries::ListPackageDownloadsQuery; + use crate::application::test_support::{ + InMemoryPackageReadRepo, query_bus_with_packages, sample_download_view, + }; + use crate::domain::model::download::DownloadId; + use crate::domain::model::package::PackageId; + + #[tokio::test] + async fn test_list_package_downloads_returns_views_in_order_repo_provides() { + let repo = Arc::new(InMemoryPackageReadRepo::new()); + repo.attach_downloads( + "pkg", + vec![ + sample_download_view(101, "first.zip", 1), + sample_download_view(102, "second.zip", 2), + ], + ); + let bus = query_bus_with_packages(repo); + + let result = bus + .handle_list_package_downloads(ListPackageDownloadsQuery { + id: PackageId::new("pkg"), + }) + .await + .unwrap(); + let ids: Vec<u64> = result.iter().map(|v| v.id.0).collect(); + assert_eq!(ids, vec![101, 102]); + } + + #[tokio::test] + async fn test_list_package_downloads_returns_empty_when_no_members() { + let repo = Arc::new(InMemoryPackageReadRepo::new()); + let bus = query_bus_with_packages(repo); + let result = bus + .handle_list_package_downloads(ListPackageDownloadsQuery { + id: PackageId::new("none"), + }) + .await + .unwrap(); + assert!(result.is_empty()); + } + + #[tokio::test] + async fn test_list_package_downloads_does_not_leak_other_packages_members() { + let repo = Arc::new(InMemoryPackageReadRepo::new()); + repo.attach_downloads("a", vec![sample_download_view(1, "a.zip", 0)]); + repo.attach_downloads("b", vec![sample_download_view(2, "b.zip", 0)]); + let bus = query_bus_with_packages(repo); + + let result = bus + .handle_list_package_downloads(ListPackageDownloadsQuery { + id: PackageId::new("a"), + }) + .await + .unwrap(); + assert_eq!(result.len(), 1); + assert_eq!(result[0].id, DownloadId(1)); + } + + #[tokio::test] + async fn test_list_package_downloads_returns_validation_error_when_repo_missing() { + let bus = crate::application::test_support::make_history_query_bus(Arc::new( + crate::application::test_support::NoopHistoryRepo, + )); + let err = bus + .handle_list_package_downloads(ListPackageDownloadsQuery { + id: PackageId::new("pkg"), + }) + .await + .expect_err("missing repo"); + assert!(matches!(err, AppError::Validation(_))); + } +} diff --git a/src-tauri/src/application/queries/list_packages.rs b/src-tauri/src/application/queries/list_packages.rs new file mode 100644 index 0000000..2c40259 --- /dev/null +++ b/src-tauri/src/application/queries/list_packages.rs @@ -0,0 +1,105 @@ +//! Handler for [`ListPackagesQuery`]. +//! +//! Delegates to the [`PackageReadRepository`](crate::domain::ports::driven::PackageReadRepository) +//! which performs the `LEFT JOIN` aggregation in a single SQL round-trip. +//! Returned rows are sorted by `created_at` ascending then `id` +//! ascending so successive calls are deterministic.
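+//!
+//! A minimal usage sketch (marked `ignore`; it assumes a `bus: QueryBus`
+//! already wired via `with_package_read_repo`, as the tests below do with
+//! the in-memory fixture):
+//!
+//! ```ignore
+//! // No filter → every package, ordered by (created_at, id).
+//! let all = bus.handle_list_packages(ListPackagesQuery { filter: None }).await?;
+//! // AND-combined filter: exact source type plus fuzzy name substring.
+//! let playlists = bus
+//!     .handle_list_packages(ListPackagesQuery {
+//!         filter: Some(PackageFilter {
+//!             source_type: Some("playlist".to_string()),
+//!             name_q: Some("holiday".to_string()),
+//!         }),
+//!     })
+//!     .await?;
+//! ```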
+ +use crate::application::error::AppError; +use crate::application::query_bus::QueryBus; +use crate::application::read_models::package_view::PackageViewDto; + +impl QueryBus { + pub async fn handle_list_packages( + &self, + query: super::ListPackagesQuery, + ) -> Result<Vec<PackageViewDto>, AppError> { + let repo = self + .package_read_repo() + .ok_or_else(|| AppError::Validation("package read repository not configured".into()))?; + let views = repo.find_packages(query.filter)?; + Ok(views.into_iter().map(PackageViewDto::from).collect()) + } +} + +#[cfg(test)] +mod tests { + use std::sync::Arc; + + use crate::application::error::AppError; + use crate::application::queries::ListPackagesQuery; + use crate::application::test_support::{ + InMemoryPackageReadRepo, query_bus_with_packages, sample_package_view, + }; + use crate::domain::model::views::PackageFilter; + + #[tokio::test] + async fn test_list_packages_returns_dtos_for_every_view() { + let repo = Arc::new(InMemoryPackageReadRepo::new()); + repo.insert(sample_package_view("a", "Apple", "manual", 1)); + repo.insert(sample_package_view("b", "Banana", "playlist", 2)); + let bus = query_bus_with_packages(repo); + + let result = bus + .handle_list_packages(ListPackagesQuery { filter: None }) + .await + .unwrap(); + let ids: Vec<&str> = result.iter().map(|p| p.id.as_str()).collect(); + assert_eq!(ids, vec!["a", "b"]); + assert_eq!(result[0].source_type, "manual"); + assert_eq!(result[1].source_type, "playlist"); + } + + #[tokio::test] + async fn test_list_packages_forwards_filter_to_repo() { + let repo = Arc::new(InMemoryPackageReadRepo::new()); + repo.insert(sample_package_view("a", "Holiday Mix", "playlist", 1)); + repo.insert(sample_package_view("b", "Misc", "manual", 2)); + let bus = query_bus_with_packages(repo); + + let result = bus + .handle_list_packages(ListPackagesQuery { + filter: Some(PackageFilter { + source_type: Some("playlist".to_string()), + name_q: None, + }), + }) + .await + .unwrap(); + assert_eq!(result.len(), 1); + assert_eq!(result[0].id, "a"); + } + + #[tokio::test] + async fn test_list_packages_filter_name_q_is_case_insensitive_substring() { + let repo = Arc::new(InMemoryPackageReadRepo::new()); + repo.insert(sample_package_view("1", "Holiday Photos", "manual", 1)); + repo.insert(sample_package_view("2", "Music — Holidays", "manual", 2)); + repo.insert(sample_package_view("3", "Misc", "manual", 3)); + let bus = query_bus_with_packages(repo); + + let result = bus + .handle_list_packages(ListPackagesQuery { + filter: Some(PackageFilter { + source_type: None, + name_q: Some("HOLIDAY".to_string()), + }), + }) + .await + .unwrap(); + let ids: Vec<&str> = result.iter().map(|p| p.id.as_str()).collect(); + assert_eq!(ids, vec!["1", "2"]); + } + + #[tokio::test] + async fn test_list_packages_returns_validation_error_when_repo_missing() { + let bus = crate::application::test_support::make_history_query_bus(Arc::new( + crate::application::test_support::NoopHistoryRepo, + )); + let err = bus + .handle_list_packages(ListPackagesQuery { filter: None }) + .await + .expect_err("missing repo"); + assert!(matches!(err, AppError::Validation(msg) if msg.contains("package"))); + } +} diff --git a/src-tauri/src/application/queries/mod.rs b/src-tauri/src/application/queries/mod.rs index 29d54a0..9fb0854 100644 --- a/src-tauri/src/application/queries/mod.rs +++ b/src-tauri/src/application/queries/mod.rs @@ -9,20 +9,24 @@ mod get_account_traffic; mod get_download_detail; mod get_downloads; mod get_history_entry; +mod get_package; mod get_plugin_config; mod get_plugin_store;
mod get_stats; mod list_accounts; mod list_archive_contents; mod list_history; +mod list_package_downloads; +mod list_packages; mod list_plugins; mod search_history; mod top_modules; use crate::domain::model::account::{AccountId, AccountType}; use crate::domain::model::download::DownloadId; +use crate::domain::model::package::PackageId; use crate::domain::model::views::{ - DownloadFilter, HistoryFilter, HistorySort, SortOrder, StatsPeriod, + DownloadFilter, HistoryFilter, HistorySort, PackageFilter, SortOrder, StatsPeriod, }; use crate::domain::ports::driving::Query; @@ -136,3 +140,27 @@ pub struct GetAccountTrafficQuery { pub id: AccountId, } impl Query for GetAccountTrafficQuery {} + +/// List packages with optional filtering. Results are ordered by +/// `created_at` ascending then by `id` ascending so successive calls +/// yield a deterministic order. +#[derive(Debug, Default)] +pub struct ListPackagesQuery { + pub filter: Option<PackageFilter>, +} +impl Query for ListPackagesQuery {} + +/// Fetch a single package's aggregated read view. +#[derive(Debug)] +pub struct GetPackageQuery { + pub id: PackageId, +} +impl Query for GetPackageQuery {} + +/// Fetch the downloads attached to a package, ordered by +/// `queue_position` ascending then `id` ascending. +#[derive(Debug)] +pub struct ListPackageDownloadsQuery { + pub id: PackageId, +} +impl Query for ListPackageDownloadsQuery {} diff --git a/src-tauri/src/application/query_bus.rs b/src-tauri/src/application/query_bus.rs index fde1cad..ac56a8e 100644 --- a/src-tauri/src/application/query_bus.rs +++ b/src-tauri/src/application/query_bus.rs @@ -7,7 +7,7 @@ use std::sync::Arc; use crate::domain::ports::driven::{ AccountRepository, ArchiveExtractor, DownloadReadRepository, HistoryRepository, - PluginConfigStore, PluginLoader, PluginReadRepository, StatsRepository, + PackageReadRepository, PluginConfigStore, PluginLoader, PluginReadRepository, StatsRepository, }; /// Central dispatcher for CQRS queries. @@ -23,6 +23,7 @@ pub struct QueryBus { plugin_loader: Option<Arc<dyn PluginLoader>>, plugin_config_store: Option<Arc<dyn PluginConfigStore>>, account_repo: Option<Arc<dyn AccountRepository>>, + package_read_repo: Option<Arc<dyn PackageReadRepository>>, } impl QueryBus { @@ -42,6 +43,7 @@ impl QueryBus { plugin_loader: None, plugin_config_store: None, account_repo: None, + package_read_repo: None, } } @@ -71,6 +73,17 @@ impl QueryBus { self.account_repo.as_deref() } + /// Builder-style setter for the package read repository. Optional so + /// fixtures that never query packages don't have to provide one. + pub fn with_package_read_repo(mut self, repo: Arc<dyn PackageReadRepository>) -> Self { + self.package_read_repo = Some(repo); + self + } + + pub fn package_read_repo(&self) -> Option<&dyn PackageReadRepository> { + self.package_read_repo.as_deref() + } + pub fn download_read_repo(&self) -> &dyn DownloadReadRepository { self.download_read_repo.as_ref() } diff --git a/src-tauri/src/application/read_models/mod.rs b/src-tauri/src/application/read_models/mod.rs index 8f72811..43e5a5c 100644 --- a/src-tauri/src/application/read_models/mod.rs +++ b/src-tauri/src/application/read_models/mod.rs @@ -4,6 +4,7 @@ pub mod account_view; pub mod download_detail_view; pub mod download_view; pub mod history_view; +pub mod package_view; pub mod plugin_config_view; pub mod plugin_store_view; pub mod plugin_view; diff --git a/src-tauri/src/application/read_models/package_view.rs b/src-tauri/src/application/read_models/package_view.rs new file mode 100644 index 0000000..fa2747e --- /dev/null +++ b/src-tauri/src/application/read_models/package_view.rs @@ -0,0 +1,123 @@ +//! Serializable package view DTOs for the frontend.
+//! +//! Mirrors [`PackageView`] field-by-field with `camelCase` JSON keys for +//! the React layer. The DTO never carries the keyring password — the +//! write-side `Package` aggregate keeps that in `password`, but the read +//! view intentionally omits it so query results never leak credential +//! references. + +use serde::Serialize; + +use crate::domain::model::views::PackageView; + +#[derive(Debug, Clone, PartialEq, Serialize)] +#[serde(rename_all = "camelCase")] +pub struct PackageViewDto { + pub id: String, + pub name: String, + pub source_type: String, + pub folder_path: Option<String>, + pub auto_extract: bool, + pub priority: u8, + pub created_at: u64, + pub downloads_count: u64, + pub total_bytes: u64, + pub downloaded_bytes: u64, + pub progress_percent: f64, + pub all_completed: bool, +} + +impl From<PackageView> for PackageViewDto { + fn from(view: PackageView) -> Self { + Self { + id: view.id, + name: view.name, + source_type: view.source_type, + folder_path: view.folder_path, + auto_extract: view.auto_extract, + priority: view.priority, + created_at: view.created_at, + downloads_count: view.downloads_count, + total_bytes: view.total_bytes, + downloaded_bytes: view.downloaded_bytes, + progress_percent: view.progress_percent, + all_completed: view.all_completed, + } + } +} + +#[cfg(test)] +mod tests { + use super::*; + + fn make_view() -> PackageView { + PackageView { + id: "pkg-7".to_string(), + name: "Holiday".to_string(), + source_type: "playlist".to_string(), + folder_path: Some("/srv/dl".to_string()), + auto_extract: false, + priority: 8, + created_at: 1_700_000_000_000, + downloads_count: 4, + total_bytes: 12_000, + downloaded_bytes: 6_000, + progress_percent: 50.0, + all_completed: false, + } + } + + #[test] + fn test_package_view_dto_from_view_copies_every_field() { + let dto: PackageViewDto = make_view().into(); + assert_eq!(dto.id, "pkg-7"); + assert_eq!(dto.name, "Holiday"); + assert_eq!(dto.source_type, "playlist"); + assert_eq!(dto.folder_path.as_deref(), Some("/srv/dl")); + assert!(!dto.auto_extract); + assert_eq!(dto.priority, 8); + assert_eq!(dto.created_at, 1_700_000_000_000); + assert_eq!(dto.downloads_count, 4); + assert_eq!(dto.total_bytes, 12_000); + assert_eq!(dto.downloaded_bytes, 6_000); + assert!((dto.progress_percent - 50.0).abs() < 1e-9); + assert!(!dto.all_completed); + } + + #[test] + fn test_package_view_dto_serializes_to_camel_case() { + let dto: PackageViewDto = make_view().into(); + let value = serde_json::to_value(&dto).unwrap(); + let object = value.as_object().expect("object"); + for camel_field in [ + "id", + "name", + "sourceType", + "folderPath", + "autoExtract", + "priority", + "createdAt", + "downloadsCount", + "totalBytes", + "downloadedBytes", + "progressPercent", + "allCompleted", + ] { + assert!( + object.contains_key(camel_field), + "camelCase field `{camel_field}` missing" + ); + } + } + + #[test] + fn test_package_view_dto_omits_password_field() { + let dto: PackageViewDto = make_view().into(); + let value = serde_json::to_value(&dto).unwrap(); + let object = value.as_object().expect("object"); + assert!( + !object.contains_key("password"), + "PackageViewDto must never expose a password field" + ); + } +} diff --git a/src-tauri/src/application/test_support.rs b/src-tauri/src/application/test_support.rs index e99ef81..af69e7e 100644 --- a/src-tauri/src/application/test_support.rs +++ b/src-tauri/src/application/test_support.rs @@ -20,10 +20,12 @@ use crate::domain::model::credential::Credential; use crate::domain::model::download::{Download, DownloadId, DownloadState};
use crate::domain::model::http::HttpResponse; use crate::domain::model::meta::DownloadMeta; +use crate::domain::model::package::PackageId; use crate::domain::model::plugin::{PluginInfo, PluginManifest}; use crate::domain::model::views::{ DownloadDetailView, DownloadFilter, DownloadView, HistoryEntry, HistoryFilter, HistorySort, - HistorySortField, SortDirection, SortOrder, StateCountMap, StatsView, + HistorySortField, PackageFilter, PackageView, SortDirection, SortOrder, StateCountMap, + StatsView, }; use crate::domain::ports::driven::history_repository::{ MAX_HISTORY_PAGE_SIZE, MAX_HISTORY_SEARCH_RESULTS, @@ -31,7 +33,8 @@ use crate::domain::ports::driven::{ AccountRepository, ArchiveExtractor, ClipboardObserver, ConfigStore, CredentialStore, DownloadEngine, DownloadReadRepository, DownloadRepository, EventBus, FileStorage, - HistoryRepository, HttpClient, PluginLoader, PluginReadRepository, StatsRepository, + HistoryRepository, HttpClient, PackageReadRepository, PluginLoader, PluginReadRepository, + StatsRepository, }; fn host_component(url: &str) -> Option<&str> { @@ -644,3 +647,142 @@ pub(crate) fn query_bus_with_accounts(repo: Arc<dyn AccountRepository>) -> Query ) .with_account_repo(repo) } + +/// In-memory `PackageReadRepository` used by query-handler tests. Filter +/// semantics mirror the SQLite adapter so behavioural assertions written +/// against this fake also catch regressions in the real implementation. +pub(crate) struct InMemoryPackageReadRepo { + packages: Mutex<Vec<PackageView>>, + downloads_by_package: Mutex<HashMap<String, Vec<DownloadView>>>, +} + +impl InMemoryPackageReadRepo { + pub(crate) fn new() -> Self { + Self { + packages: Mutex::new(Vec::new()), + downloads_by_package: Mutex::new(HashMap::new()), + } + } + + pub(crate) fn insert(&self, view: PackageView) { + self.packages.lock().unwrap().push(view); + } + + pub(crate) fn attach_downloads(&self, package_id: &str, downloads: Vec<DownloadView>) { + self.downloads_by_package + .lock() + .unwrap() + .insert(package_id.to_string(), downloads); + } +} + +impl PackageReadRepository for InMemoryPackageReadRepo { + fn find_packages( + &self, + filter: Option<PackageFilter>, + ) -> Result<Vec<PackageView>, DomainError> { + let mut snapshot: Vec<PackageView> = self.packages.lock().unwrap().clone(); + snapshot.sort_by(|a, b| { + a.created_at + .cmp(&b.created_at) + .then_with(|| a.id.cmp(&b.id)) + }); + if let Some(f) = filter { + if let Some(source) = f.source_type { + snapshot.retain(|p| p.source_type == source); + } + if let Some(needle) = f.name_q { + let trimmed = needle.trim().to_lowercase(); + if !trimmed.is_empty() { + snapshot.retain(|p| p.name.to_lowercase().contains(&trimmed)); + } + } + } + Ok(snapshot) + } + + fn find_package_by_id(&self, id: &PackageId) -> Result<Option<PackageView>, DomainError> { + Ok(self + .packages + .lock() + .unwrap() + .iter() + .find(|p| p.id == id.as_str()) + .cloned()) + } + + fn find_package_downloads(&self, id: &PackageId) -> Result<Vec<DownloadView>, DomainError> { + Ok(self + .downloads_by_package + .lock() + .unwrap() + .get(id.as_str()) + .cloned() + .unwrap_or_default()) + } +} + +/// Build a [`QueryBus`] wired with the given package read repository. +/// +/// Other read ports return empty/default data — suitable for tests that +/// only exercise package queries.
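+///
+/// A minimal sketch of the intended wiring (marked `ignore`; it mirrors
+/// the package query-handler tests):
+///
+/// ```ignore
+/// let repo = Arc::new(InMemoryPackageReadRepo::new());
+/// repo.insert(sample_package_view("p-1", "One", "manual", 1));
+/// let bus = query_bus_with_packages(repo);
+/// // `bus` now answers package queries from the in-memory fixture.
+/// ```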
+pub(crate) fn query_bus_with_packages(repo: Arc<dyn PackageReadRepository>) -> QueryBus { + QueryBus::new( + Arc::new(StubDownloadReadRepo), + Arc::new(NoopHistoryRepo), + Arc::new(StubStatsRepo), + Arc::new(StubPluginReadRepo), + Arc::new(StubQueryArchiveExtractor), + ) + .with_package_read_repo(repo) +} + +/// Convenience builder for tests that only care about identity, name, +/// source type, and ordering. All numeric stats default to zero. +pub(crate) fn sample_package_view( + id: &str, + name: &str, + source_type: &str, + created_at: u64, +) -> PackageView { + PackageView { + id: id.to_string(), + name: name.to_string(), + source_type: source_type.to_string(), + folder_path: None, + auto_extract: true, + priority: 5, + created_at, + downloads_count: 0, + total_bytes: 0, + downloaded_bytes: 0, + progress_percent: 0.0, + all_completed: false, + } +} + +/// Convenience builder for download view fixtures used by the package +/// queries tests. Sets only the discriminating fields (id, file_name, +/// queue_position) and defaults everything else. +pub(crate) fn sample_download_view(id: u64, file_name: &str, queue_position: i64) -> DownloadView { + DownloadView { + id: DownloadId(id), + file_name: file_name.to_string(), + url: format!("https://example.com/{file_name}"), + source_hostname: "example.com".to_string(), + state: DownloadState::Queued, + progress_percent: 0.0, + speed_bytes_per_sec: 0, + downloaded_bytes: 0, + total_bytes: None, + eta_seconds: None, + segments_active: 0, + segments_total: 0, + module_name: None, + account_name: None, + error_message: None, + priority: 5, + queue_position, + created_at: 0, + } +} diff --git a/src-tauri/src/domain/model/mod.rs b/src-tauri/src/domain/model/mod.rs index 2641fda..c639db3 100644 --- a/src-tauri/src/domain/model/mod.rs +++ b/src-tauri/src/domain/model/mod.rs @@ -31,5 +31,6 @@ pub use queue::{Priority, QueuePosition}; pub use segment::{Segment, SegmentState}; pub use views::{ DailyVolume, DownloadDetailView, DownloadFilter, DownloadView, HistoryEntry, HostStats, - SegmentView, SortDirection, SortField, SortOrder, StateCountMap, StatsView, + PackageFilter, PackageView, SegmentView, SortDirection, SortField, SortOrder, StateCountMap, + StatsView, }; diff --git a/src-tauri/src/domain/model/views.rs b/src-tauri/src/domain/model/views.rs index b4490f5..920880d 100644 --- a/src-tauri/src/domain/model/views.rs +++ b/src-tauri/src/domain/model/views.rs @@ -220,3 +220,64 @@ pub struct SortOrder { /// Count of downloads grouped by state. pub type StateCountMap = HashMap; + +/// Aggregated read view of a `Package` aggregate. +/// +/// Produced by [`PackageReadRepository`](crate::domain::ports::driven::PackageReadRepository) +/// from a single `LEFT JOIN` between `packages` and `downloads` so the +/// child statistics (`downloads_count`, `total_bytes`, `progress_percent`) +/// are computed SQL-side. Avoids the N+1 round-trip the UI would otherwise +/// pay when listing dozens of packages. +#[derive(Debug, Clone, PartialEq)] +pub struct PackageView { + pub id: String, + pub name: String, + /// Lowercase wire form (`container`, `playlist`, `manual`, `split_archive`). + pub source_type: String, + pub folder_path: Option<String>, + pub auto_extract: bool, + pub priority: u8, + pub created_at: u64, + /// Number of downloads currently attached via `downloads.package_id`. + pub downloads_count: u64, + /// Aggregate of member `downloads.total_bytes`.
Members in state + /// `Completed` count for `COALESCE(total_bytes, downloaded_bytes)` so + /// the value matches what each row's per-download view reports + /// (Completed = 100% regardless of the persisted bytes); other + /// members with `total_bytes = NULL` contribute `0`. `0` overall + /// when the package has no members. Mirrored on the numerator side + /// in `downloaded_bytes` so `progress_percent` cannot exceed 100%. + pub total_bytes: u64, + /// Aggregate of member `downloads.downloaded_bytes`. Members in + /// state `Completed` count for `COALESCE(total_bytes, + /// downloaded_bytes)` (their full size when known, otherwise their + /// persisted bytes), other members count for the persisted + /// `downloaded_bytes`. `0` when the package has no members. + pub downloaded_bytes: u64, + /// Aggregate progress in `[0.0, 100.0]`, rounded to one decimal. `0.0` + /// when no member contributes a known total. Mirrors the formula + /// applied to individual downloads in `download_read_repo` so the UI + /// stays consistent across rows. + pub progress_percent: f64, + /// `true` when the package has at least one member **and** every + /// member is in the `Completed` state. `false` when the package is + /// empty or any member is still pending/failed/active. + pub all_completed: bool, +} + +/// Filter applied to the `find_packages` read repository call. +/// +/// Each field is optional. When `name_q` is set the implementation +/// performs a Unicode-aware case-insensitive substring (fuzzy) match +/// against `packages.name` — the comparison happens after the SQL fetch +/// so `LIKE` wildcards (`%`, `_`) are treated literally and non-ASCII +/// characters case-fold correctly (e.g. `café` matches `CAFÉ`). Blank +/// or whitespace-only values are treated as "no filter". When +/// `source_type` is set it constrains by the lowercase wire form +/// (`container`, `playlist`, `manual`, `split_archive`). Both fields +/// combine with AND when present. +#[derive(Debug, Clone, Default, PartialEq, Eq)] +pub struct PackageFilter { + pub source_type: Option<String>, + pub name_q: Option<String>, +} diff --git a/src-tauri/src/domain/ports/driven/mod.rs b/src-tauri/src/domain/ports/driven/mod.rs index ae8a0fd..8a2e0a1 100644 --- a/src-tauri/src/domain/ports/driven/mod.rs +++ b/src-tauri/src/domain/ports/driven/mod.rs @@ -18,6 +18,7 @@ pub mod file_opener; pub mod file_storage; pub mod history_repository; pub mod http_client; +pub mod package_read_repository; pub mod package_repository; pub mod passphrase_codec; pub mod plugin_config_store; @@ -44,6 +45,7 @@ pub use file_opener::FileOpener; pub use file_storage::FileStorage; pub use history_repository::HistoryRepository; pub use http_client::HttpClient; +pub use package_read_repository::PackageReadRepository; pub use package_repository::PackageRepository; pub use passphrase_codec::PassphraseCodec; pub use plugin_config_store::PluginConfigStore; diff --git a/src-tauri/src/domain/ports/driven/package_read_repository.rs b/src-tauri/src/domain/ports/driven/package_read_repository.rs new file mode 100644 index 0000000..2ddba73 --- /dev/null +++ b/src-tauri/src/domain/ports/driven/package_read_repository.rs @@ -0,0 +1,38 @@ +//! Read repository for the `Package` aggregate (CQRS read side). +//! +//! Returns flattened, display-ready DTOs produced by SQL aggregations +//! (`COUNT`, `SUM`, `AVG` on the `downloads.package_id` foreign key) so +//! the UI does not have to fetch every member download to render package +//! statistics.
Never exposes mutation methods — the write port lives in +//! [`crate::domain::ports::driven::PackageRepository`]. + +use crate::domain::error::DomainError; +use crate::domain::model::package::PackageId; +use crate::domain::model::views::{DownloadView, PackageFilter, PackageView}; + +/// Reads package data as pre-computed views for the UI. +/// +/// **CQRS invariant:** this trait intentionally has no `save()` method. +pub trait PackageReadRepository: Send + Sync { + /// List packages matching the optional filter, ordered by + /// `created_at` ascending then `id` ascending so the result is + /// deterministic across calls. + /// + /// `filter.name_q` is a case-insensitive substring match against the + /// stored `packages.name`. `filter.source_type` is an exact match + /// against the lowercase wire form. Multiple fields AND together. + fn find_packages(&self, filter: Option<PackageFilter>) + -> Result<Vec<PackageView>, DomainError>; + + /// Fetch the aggregated view for a single package. + /// + /// Returns `Ok(None)` when no row matches — error variants are + /// reserved for storage / data-shape problems. + fn find_package_by_id(&self, id: &PackageId) -> Result<Option<PackageView>, DomainError>; + + /// Fetch every download currently attached to the given package as + /// `DownloadView` rows, ordered by `queue_position` ascending then + /// `id` ascending. Returns an empty vector when the package has no + /// members or does not exist. + fn find_package_downloads(&self, id: &PackageId) -> Result<Vec<DownloadView>, DomainError>; +} diff --git a/src-tauri/src/lib.rs b/src-tauri/src/lib.rs index 9c36ab8..e35443d 100644 --- a/src-tauri/src/lib.rs +++ b/src-tauri/src/lib.rs @@ -80,12 +80,13 @@ pub use adapters::driving::tauri_ipc::{ download_set_priority, download_start, download_verify_checksum, history_clear, history_delete_entry, history_export, history_get_by_id, history_list, history_purge_older_than, history_search, link_resolve, package_add_download, package_create, - package_delete, package_move_to_folder, package_remove_download, package_set_password, - package_set_priority, package_toggle_auto_extract, package_update, plugin_config_get, - plugin_config_update, plugin_disable, plugin_enable, plugin_install, plugin_list, - plugin_report_broken, plugin_store_install, plugin_store_list, plugin_store_refresh, - plugin_store_update, plugin_uninstall, reveal_in_folder, settings_get, settings_update, - stats_get, stats_top_modules, status_bar_get, + package_delete, package_get, package_list, package_list_downloads, package_move_to_folder, + package_remove_download, package_set_password, package_set_priority, + package_toggle_auto_extract, package_update, plugin_config_get, plugin_config_update, + plugin_disable, plugin_enable, plugin_install, plugin_list, plugin_report_broken, + plugin_store_install, plugin_store_list, plugin_store_refresh, plugin_store_update, + plugin_uninstall, reveal_in_folder, settings_get, settings_update, stats_get, + stats_top_modules, status_bar_get, }; #[cfg_attr(mobile, tauri::mobile_entry_point)] @@ -166,6 +167,13 @@ pub fn run() { Arc::new(SqliteAccountRepo::new(db.clone())); let package_repo: Arc<dyn PackageRepository> = Arc::new(SqlitePackageRepo::new(db.clone())); + let package_read_repo: Arc< + dyn crate::domain::ports::driven::PackageReadRepository, + > = Arc::new( + crate::adapters::driven::sqlite::package_read_repo::SqlitePackageReadRepo::new( + db.clone(), + ), + ); // ── Plugin system ─────────────────────────────────────── let shared_resources = Arc::new(SharedHostResources::new()); @@ -391,7 +399,8 @@ pub fn run() { )
.with_plugin_loader(plugin_loader.clone()) .with_plugin_config_store(plugin_config_store) - .with_account_repo(account_repo), + .with_account_repo(account_repo) + .with_package_read_repo(package_read_repo), ); // ── Register AppState ─────────────────────────────────── @@ -581,6 +590,9 @@ pub fn run() { package_toggle_auto_extract, package_add_download, package_remove_download, + package_list, + package_get, + package_list_downloads, ]) .run(tauri::generate_context!()) // Tauri's run() has no meaningful recovery path — panic is intentional here