
system-reinstall-bootc: add progress indication during install steps #2091

Open
judavi wants to merge 1 commit into bootc-dev:main from judavi:1270

Conversation

@judavi commented Mar 24, 2026

Add progress indication to the long-running steps of system-reinstall-bootc to address #1270.

This also fixes a latent bug raised in review: bootc_has_clean was calling podman run <image> which could implicitly pull the image if it wasn't already cached locally, with no clear indication to the user.

The flow is now three clearly distinct phases:

  1. Pull phase: pull_if_not_present explicitly pulls the image first, with a message so the user knows what is happening. podman pull streams its own progress output.
  2. Capability check phase: bootc_has_clean is called after the image is guaranteed to be local, with a spinner labeled "Checking image capabilities...".
  3. Install phase: reinstall_command is now a pure command builder that takes has_clean: bool as a parameter and has no I/O side effects.
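The "pure command builder" idea from phase 3 can be sketched in isolation. This is an illustrative reconstruction, not the actual PR diff: the exact podman flags, the bootc subcommand, and the `--cleanup` flag name are assumptions.

```rust
// Illustrative sketch of phase 3: a builder that performs no I/O and
// only assembles podman arguments from inputs computed in the earlier
// phases (the pull and the capability check). Flag names are assumed,
// not copied from the PR diff.
fn reinstall_command_args(image: &str, has_clean: bool) -> Vec<String> {
    let mut args: Vec<String> = ["run", "--rm", "--privileged", image, "bootc", "install", "to-existing-root"]
        .iter()
        .map(|s| s.to_string())
        .collect();
    if has_clean {
        // Appended only when the earlier capability check succeeded.
        args.push("--cleanup".to_string());
    }
    args
}

fn main() {
    let args = reinstall_command_args("quay.io/example/os:latest", true);
    println!("would run: podman {}", args.join(" "));
}
```

Because the builder is pure, the capability-dependent behavior can be unit-tested without touching podman at all.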

@github-actions bot added the area/system-reinstall-bootc Issues related to system-reinstall-bootc label Mar 24, 2026
@bootc-bot bot requested a review from jmarrero March 24, 2026 10:59
Contributor

@gemini-code-assist bot left a comment

Code Review

This pull request improves the user experience by adding progress indicators for long-running operations. A spinner is now shown during the bootc_has_clean check, and a message is printed before starting the installation. The changes are well-implemented. I have one suggestion to improve robustness by replacing unwrap() with expect() to avoid potential panics.
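The unwrap-vs-expect suggestion is easy to show in isolation. The `Option` below stands in for the fallible spinner-template setup in the PR and is purely illustrative:

```rust
// Sketch of the review suggestion: `expect` behaves like `unwrap`, but
// a panic carries a message naming the violated invariant, which is
// far easier to debug than `unwrap`'s generic panic message.
fn spinner_template(config: Option<&str>) -> String {
    config
        .expect("spinner template must be configured before use")
        .to_string()
}

fn main() {
    println!("{}", spinner_template(Some("{spinner} {msg}")));
}
```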

@judavi force-pushed the 1270 branch 2 times, most recently from beb4b07 to 227d2d0 on March 24, 2026 at 11:26
Collaborator

@cgwalters left a comment

Sorry for the delay in review and thank you so much for your contribution!

oci-spec = "0.9.0"
rand = "0.10"
rexpect = "0.7"
rexpect = "0.6"
Collaborator

Hmm, this looks like a likely unintentional change?

Author

That's correct, thank you! I'm not that used to using Codespaces for development.


prompt::temporary_developer_protection_prompt()?;

println!("Starting bootc installation. This may take several minutes...");
Collaborator

This part makes sense

);
spinner.set_message("Checking image capabilities...");
spinner.enable_steady_tick(Duration::from_millis(150));
let has_clean = bootc_has_clean(&opts.image)?;
Collaborator

Hmm wait wait...did we end up pulling the image here? I think we did...if so that seems like the real bug. I think we should break out a clearly distinct pull phase.

Author

You're right: bootc_has_clean was calling podman run <image>, which would have implicitly pulled the image if it wasn't already cached locally. Fixed in the latest push: bootc_has_clean is now pub(crate) and called explicitly from run() in main.rs, after pull_if_not_present completes. reinstall_command is now a pure command builder that takes has_clean: bool as a parameter with no I/O side effects. The flow is now three clearly distinct phases: pull → capability check (spinner) → install.
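The explicit pull phase described above hinges on one decision: is the image already local? A pure helper for that decision can be sketched and tested without invoking podman; the function name and return shape are hypothetical, not taken from the PR.

```rust
// Hypothetical decision helper for the pull phase: given whether the
// image is already local (in the real flow, the result of running
// `podman image exists <image>`), return the podman argv to run next.
// None means the pull is skipped, avoiding the old implicit pull.
fn pull_args_if_needed(image: &str, already_local: bool) -> Option<Vec<String>> {
    if already_local {
        None
    } else {
        Some(vec!["pull".to_string(), image.to_string()])
    }
}

fn main() {
    // `podman pull` streams its own progress output, so the caller
    // only needs to print a message before spawning it.
    match pull_args_if_needed("quay.io/example/os:latest", false) {
        Some(args) => println!("would run: podman {}", args.join(" ")),
        None => println!("image already present; skipping pull"),
    }
}
```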

Contributor

@jmarrero left a comment

This is passing all the tests; the main issue here for me is just squashing the commits into one. Thank you so much for fixing this issue.

… bug

Add a spinner during the capability check step and a status message
before the long-running installation, addressing issue bootc-dev#1270.

Also fix a latent bug: `bootc_has_clean` was called inside
`reinstall_command`, which runs `podman run <image>` and would
implicitly pull the image if not already cached locally. The flow is
now three clearly distinct phases:

1. Pull phase: `pull_if_not_present` explicitly pulls the image first,
   with a message so the user knows what is happening.
2. Capability check phase: `bootc_has_clean` is called after the image
   is guaranteed to be local, with a spinner labeled
   "Checking image capabilities...".
3. Install phase: `reinstall_command` is now a pure command builder
   that takes `has_clean: bool` as a parameter with no I/O side effects.

Fixes: bootc-dev#1270
Signed-off-by: Juan Gomez <1766933+judavi@users.noreply.github.com>
Author

@judavi commented Mar 26, 2026

Thanks @jmarrero! Commits are squashed into one now.
By the way, it was a nice experience meeting the team during KubeCon :)

@judavi requested a review from jmarrero March 26, 2026 08:11
Collaborator

@cgwalters left a comment

Thanks!

@cgwalters enabled auto-merge (rebase) March 26, 2026 08:21

Labels

area/system-reinstall-bootc Issues related to system-reinstall-bootc


3 participants