Fix save_megatron_model deadlock: pass fully_parallel_save=False during conversion #2226

Open

nic-nvidia wants to merge 2 commits into NVIDIA-NeMo:main from
Conversation
Pass fully_parallel_save=False to bridge.save_megatron_model() during HF-to-Megatron conversion. FullyParallelSaveStrategyWrapper calls all_gather_object on DP sub-groups that include vLLM ranks which never enter save, causing a permanent deadlock. Uses inspect.signature for backward compat with Megatron-Bridge versions that do not yet expose the parameter (see NVIDIA-NeMo/Megatron-Bridge PR).
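The backward-compat guard described above can be sketched as follows. This is a minimal illustration of the inspect.signature pattern, not the actual code in community_import.py; the helper name save_with_optional_flag is hypothetical.

```python
import inspect

def save_with_optional_flag(bridge, model, path):
    """Call save_megatron_model, disabling fully-parallel save when supported.

    Hypothetical helper: only passes fully_parallel_save=False if this
    Megatron-Bridge version exposes the parameter, so older versions
    keep working unchanged.
    """
    kwargs = {}
    params = inspect.signature(bridge.save_megatron_model).parameters
    if "fully_parallel_save" in params:
        # Disable FullyParallelSaveStrategyWrapper for the one-time conversion.
        kwargs["fully_parallel_save"] = False
    bridge.save_megatron_model(model, path, **kwargs)
```

Because the flag is added to kwargs only when the signature advertises it, the same call site works against both old and new Megatron-Bridge releases.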
Force-pushed from ec2ba9b to 8c61e94
nic-nvidia added a commit to nic-nvidia/Megatron-Bridge that referenced this pull request on Apr 8, 2026
Add fully_parallel_save parameter to AutoBridge.save_megatron_model() and model_load_save.save_megatron_model(), forwarded to CheckpointConfig. Defaults to True (no behavior change). Callers can pass False to disable FullyParallelSaveStrategyWrapper, which deadlocks when the distributed world includes ranks that do not participate in the save (e.g., vLLM inference workers in NeMo RL non-colocated setups). Needed by NVIDIA-NeMo/RL#2226.
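The forwarding described in this commit can be sketched roughly as below. The field name comes from the commit message, but the surrounding signatures are simplified assumptions for illustration, not the real Megatron-Bridge API.

```python
from dataclasses import dataclass

@dataclass
class CheckpointConfig:
    # Defaults to True so existing callers see no behavior change.
    fully_parallel_save: bool = True

def save_megatron_model(model, path, fully_parallel_save: bool = True):
    """Sketch of the new parameter being forwarded to CheckpointConfig.

    In the real implementation the config is handed to the distributed
    checkpointing save path, where fully_parallel_save=False skips
    FullyParallelSaveStrategyWrapper entirely.
    """
    cfg = CheckpointConfig(fully_parallel_save=fully_parallel_save)
    return cfg
```

Defaulting the new parameter to True keeps the change purely additive: only callers that opt in (such as NeMo RL's conversion path) get the non-parallel save.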
Author
Validation update (2026-04-08): Built a clean Docker image from this PR + #2230 + Megatron-Bridge PR NVIDIA-NeMo/Megatron-Bridge#3207, deployed on a 3×8 DGX B200 cluster (Nemotron Super 120B, non-colocated vLLM). Key evidence the fix works:
The run subsequently hit an unrelated NCCL timeout during the distributed checkpoint write (a network flake on our cluster), but the deadlock itself is resolved: the save path is no longer blocked.
The all_gather_object in determine_global_metadata can also deadlock when ranks take asymmetric time to build their state dicts (e.g., expert-parallel ranks with different shard counts). That validation is unnecessary for the one-time HF-to-Megatron conversion.
nic-nvidia added a commit to nic-nvidia/Megatron-Bridge that referenced this pull request on Apr 8, 2026
…tron_model Based on container image commit dd9729f (v0.5.0.nemotron_3_super). Add fully_parallel_save and validate_access_integrity parameters to AutoBridge.save_megatron_model() and model_load_save.save_megatron_model(). Needed by NVIDIA-NeMo/RL#2226.
Summary

Pass fully_parallel_save=False to bridge.save_megatron_model() during HF-to-Megatron checkpoint conversion.

Fixes #2225

Depends on: NVIDIA-NeMo/Megatron-Bridge#3207 (exposes the fully_parallel_save parameter in AutoBridge.save_megatron_model())

Problem

save_megatron_model deadlocks during the one-time HF-to-Megatron weight conversion when using the Megatron backend with non-colocated vLLM generation. CheckpointConfig.fully_parallel_save defaults to True, activating FullyParallelSaveStrategyWrapper, which calls all_gather_object on DP sub-groups that include vLLM ranks. Since vLLM workers never enter save_megatron_model, these collectives hang permanently.

Fix

Pass fully_parallel_save=False when calling bridge.save_megatron_model() in community_import.py, using inspect.signature for backward compatibility with Megatron-Bridge versions that don't yet expose the parameter. fully_parallel_save is a performance optimization for repeated training checkpoint saves; it is unnecessary for the one-time conversion and introduces collective overhead that deadlocks in mixed training/inference worlds.

Test plan