
Refactor cli #110

Open

isaacaka wants to merge 3 commits into main from refactor_cli

Conversation


@isaacaka isaacaka commented Mar 24, 2026

Change Description

Refactors cli.py to make use of pipeline_utils.py

Task List

  • Add TermDef base class
  • Create run_pipeline function which makes use of functions from pipeline_utils.py
  • Make save/load paths consistent between notebook and cli
  • Add an integration test which uses run_pipeline

Closes #104, #105

@review-notebook-app

Check out this pull request on ReviewNB

See visual diffs & provide feedback on Jupyter Notebooks.

@isaacaka isaacaka requested review from SimonSadler and ma595 March 24, 2026 18:35
Introduce TermDef (key, term, filename) as a base dataclass and make
TermSpec inherit from it, adding the notebook-specific mean_axes and
err_axes fields. Update all pipeline_utils function signatures to accept
Sequence[TermDef] since none of them use the axes fields directly.
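The base-class split described above could look like the following sketch. The field names (key, term, filename, mean_axes, err_axes) come from this description; the dataclass layout and the helper function are illustrative assumptions, not the project's actual code.

```python
from dataclasses import dataclass
from typing import Sequence, Tuple


@dataclass
class TermDef:
    """Base term definition: the fields every pipeline_utils function needs."""
    key: str       # short identifier used in dicts and paths
    term: str      # term name as used in filenames
    filename: str  # file stem used when saving/loading arrays


@dataclass
class TermSpec(TermDef):
    """Notebook-specific spec: adds the axes used for mean/error reduction."""
    mean_axes: Tuple[int, ...] = ()
    err_axes: Tuple[int, ...] = ()


def term_keys(terms: Sequence[TermDef]) -> list:
    # pipeline_utils functions only touch the base fields, so they can be
    # typed as Sequence[TermDef] while still accepting TermSpec instances.
    return [t.key for t in terms]
```

Because TermSpec is a subclass, existing notebook code that constructs TermSpec objects can pass them unchanged to functions annotated with Sequence[TermDef].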

Add pipeline.py with run_pipeline(), which wires together the existing
pipeline_utils building blocks into the full prepare→save→forecast→
reconstruct→save sequence. This gives the CLI and tests a single
importable entry point for the end-to-end pipeline.

Replace emulate() and jump() in cli.py with a call to run_pipeline(),
passing data_path=args.path and out_dir separately so the raw data
root and the run output directory are no longer conflated.
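A minimal sketch of how run_pipeline() might wire the stages together, with data_path and out_dir kept separate as described. The stage functions (prepare, run_forecast) here are stand-in stubs, not the real pipeline_utils API; only the overall shape follows the commit message.

```python
from pathlib import Path

import numpy as np


# Stub stages standing in for the real pipeline_utils building blocks
# (names and signatures here are illustrative assumptions).
def prepare(data_path, terms):
    # In the real pipeline this would load and preprocess raw data per term.
    return {t: np.zeros((4, 3)) for t in terms}


def run_forecast(prepared, steps):
    # In the real pipeline this would emulate forward for `steps` time steps.
    return {t: arr[:steps] for t, arr in prepared.items()}


def run_pipeline(data_path, out_dir, terms, steps):
    """Single importable entry point for the end-to-end pipeline.

    data_path: root of the raw input data (args.path in the CLI)
    out_dir:   directory for this run's outputs, passed separately so the
               raw data root and run outputs are not conflated
    """
    out_dir = Path(out_dir)
    out_dir.mkdir(parents=True, exist_ok=True)
    prepared = prepare(Path(data_path), terms)
    forecast = run_forecast(prepared, steps)
    for term, arr in forecast.items():
        np.save(out_dir / f"{term}.npy", arr)  # one .npy output per term
    return forecast
```

With this shape, the CLI reduces to argument parsing plus one run_pipeline() call, and tests can import the same entry point directly.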

Fix a path mismatch between Simulation.save() and load_ts_all():
save() writes to {prepared_path}/{term}/{term}.npz (nested), but
load_ts_all was passing prepared_path directly to load_ts, which
looked for {prepared_path}/{term}.npz (flat). Update load_ts_all to
pass the term subdirectory, and update the Jumper notebook save cell
to match the same nested convention.
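The nested-versus-flat mismatch and its fix can be sketched as below. The function names mirror those mentioned above (save, load_ts, load_ts_all), but the bodies are simplified illustrations of the path convention, not the project's actual implementations.

```python
from pathlib import Path

import numpy as np


def save_term(prepared_path, term, arr):
    # Simulation.save() convention: nested {prepared_path}/{term}/{term}.npz
    term_dir = Path(prepared_path) / term
    term_dir.mkdir(parents=True, exist_ok=True)
    np.savez(term_dir / f"{term}.npz", data=arr)


def load_ts(path, term):
    # load_ts looks for {path}/{term}.npz in whatever directory it is given.
    return np.load(Path(path) / f"{term}.npz")["data"]


def load_ts_all(prepared_path, terms):
    # The fix: pass the term subdirectory so load_ts resolves the nested
    # layout, instead of passing prepared_path directly (which looked for
    # a flat {prepared_path}/{term}.npz and missed the saved files).
    return {t: load_ts(Path(prepared_path) / t, t) for t in terms}
```

Keeping the notebook save cell on the same nested convention means both entry points round-trip through identical paths.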

Add a test which runs the full pipeline end-to-end using the existing test
data. Asserts that output .npy files are written for each term, each
array has exactly `steps` time steps, and no output is entirely NaN.
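The three assertions described above could be expressed roughly as follows. This is an illustrative check helper, assuming one .npy file per term in the run output directory; the real test's fixtures and naming may differ.

```python
from pathlib import Path

import numpy as np


def check_pipeline_outputs(out_dir, terms, steps):
    """Assert the integration-test invariants: an output file exists for
    each term, each array has exactly `steps` time steps on axis 0, and
    no output array is entirely NaN."""
    for term in terms:
        path = Path(out_dir) / f"{term}.npy"
        assert path.exists(), f"missing output for term {term!r}"
        arr = np.load(path)
        assert arr.shape[0] == steps, (
            f"{term!r}: expected {steps} time steps, got {arr.shape[0]}"
        )
        assert not np.isnan(arr).all(), f"{term!r}: output is entirely NaN"
```

Running this after run_pipeline() on the existing test data gives a cheap end-to-end smoke test without asserting exact numerical values.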

Development

Successfully merging this pull request may close these issues.

Full pipeline integration test
