Expose fully_parallel_save in save_megatron_model #3207
Open
nic-nvidia wants to merge 2 commits into NVIDIA-NeMo:main from
Conversation
Add fully_parallel_save parameter to AutoBridge.save_megatron_model() and model_load_save.save_megatron_model(), forwarded to CheckpointConfig. Defaults to True (no behavior change). Callers can pass False to disable FullyParallelSaveStrategyWrapper, which deadlocks when the distributed world includes ranks that do not participate in the save (e.g., vLLM inference workers in NeMo RL non-colocated setups). Needed by NVIDIA-NeMo/RL#2226.
…_megatron_model

The all_gather_object in determine_global_metadata (validation.py:518) uses the default PG. When some ranks take longer to build their state dict (e.g., due to expert parallelism), the collective times out. For one-time conversion saves, validation is unnecessary and can be safely skipped. Also adds distributed_timeout_minutes for callers that need longer timeouts during large model saves.
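The commit above describes two knobs: skipping the validation collective and raising the process-group timeout. A minimal sketch of how such options could gate the save path is below; the names `validate_sharding` and the stubbed `save_checkpoint` body are illustrative assumptions based on the commit message, not the real Megatron/NeMo API.

```python
# Hypothetical sketch: options controlling a one-off conversion save.
# validate_sharding stands in for the determine_global_metadata check
# that runs an all_gather_object on the default process group.
from dataclasses import dataclass


@dataclass
class SaveOptions:
    validate_sharding: bool = True          # run the metadata-validation collective
    distributed_timeout_minutes: int = 10   # PG timeout for slow state-dict builds


def save_checkpoint(state, opts: SaveOptions):
    steps = []
    if opts.validate_sharding:
        # In the real code this is a collective; for one-time conversion
        # saves with skewed ranks it can be skipped entirely.
        steps.append("validate")
    steps.append(f"save(timeout={opts.distributed_timeout_minutes}m)")
    return steps


print(save_checkpoint({}, SaveOptions(validate_sharding=False,
                                      distributed_timeout_minutes=60)))
# → ['save(timeout=60m)']
```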
Contributor

/ok to test 664a8a8
yaoyu-33 approved these changes Apr 8, 2026
Contributor

@nic-nvidia plz fix test fail
Summary

Add a fully_parallel_save parameter to AutoBridge.save_megatron_model() and the underlying model_load_save.save_megatron_model(), forwarded to CheckpointConfig. Defaults to True, so there is no behavior change for existing callers.

Problem

save_megatron_model() always creates a CheckpointConfig with fully_parallel_save=True (the dataclass default). This activates FullyParallelSaveStrategyWrapper, which calls all_gather_object on DP sub-groups. When the distributed world includes ranks that never enter the save path (e.g., vLLM inference workers in NeMo RL non-colocated setups), these collectives deadlock permanently. Callers have no way to disable this behavior through the public API.
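The deadlock mechanism can be simulated without torch.distributed: a collective over N participants blocks forever if any participant never joins. The sketch below uses threading.Barrier as a stand-in for all_gather_object, with a timeout added only so the simulation terminates; in the real scenario there is no timeout and the saving ranks hang.

```python
# Simulation of the deadlock: a "collective" over 2 participants where one
# rank (e.g. a vLLM inference worker) never joins. threading.Barrier stands
# in for torch.distributed.all_gather_object on a DP sub-group.
import threading

collective = threading.Barrier(parties=2)
results = []


def saving_rank():
    try:
        # Without the timeout this wait would block forever, because the
        # second participant never calls wait().
        collective.wait(timeout=0.5)
        results.append("saved")
    except threading.BrokenBarrierError:
        results.append("deadlock")


t = threading.Thread(target=saving_rank)
t.start()
t.join()
print(results)  # → ['deadlock']
```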
Fix

Thread fully_parallel_save: bool = True through both layers:

AutoBridge.save_megatron_model() → model_load_save.save_megatron_model() → CheckpointConfig(..., fully_parallel_save=...)

Motivation
Needed by NVIDIA-NeMo/RL#2226, which fixes a deadlock during HF-to-Megatron checkpoint conversion in non-colocated training/inference setups.
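The two-layer forwarding described in the Fix section can be sketched as follows. The class and function bodies here are illustrative stubs (the real NeMo functions also take a model and do the actual save); only the parameter threading into the CheckpointConfig dataclass reflects the change.

```python
# Sketch of threading fully_parallel_save through both public layers into
# the checkpoint config. Stub bodies, not the real NeMo implementation.
from dataclasses import dataclass


@dataclass
class CheckpointConfig:
    fully_parallel_save: bool = True  # dataclass default, preserved by the change


def model_load_save_save_megatron_model(path: str,
                                        fully_parallel_save: bool = True) -> CheckpointConfig:
    # Lower layer: the flag is forwarded into the config instead of
    # always taking the dataclass default.
    return CheckpointConfig(fully_parallel_save=fully_parallel_save)


class AutoBridge:
    def save_megatron_model(self, path: str,
                            fully_parallel_save: bool = True) -> CheckpointConfig:
        # Upper layer: forwarded unchanged to the lower layer.
        return model_load_save_save_megatron_model(path, fully_parallel_save)


cfg = AutoBridge().save_megatron_model("/tmp/ckpt", fully_parallel_save=False)
print(cfg.fully_parallel_save)  # → False
```

Passing fully_parallel_save=False disables FullyParallelSaveStrategyWrapper, so no DP-group collectives are issued during the save and non-participating ranks cannot deadlock it.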
Test plan
fully_parallel_save=True)