
Fix save_megatron_model deadlock: pass fully_parallel_save=False during conversion#2226

Open
nic-nvidia wants to merge 2 commits into NVIDIA-NeMo:main from nic-nvidia:fix/save-megatron-model-deadlock

Conversation


nic-nvidia commented Apr 7, 2026

Summary

Pass fully_parallel_save=False to bridge.save_megatron_model() during HF-to-Megatron checkpoint conversion.

Fixes #2225

Depends on: NVIDIA-NeMo/Megatron-Bridge#3207 (exposes fully_parallel_save parameter in AutoBridge.save_megatron_model())

Problem

save_megatron_model deadlocks during the one-time HF-to-Megatron weight conversion when using the Megatron backend with non-colocated vLLM generation. CheckpointConfig.fully_parallel_save defaults to True, activating FullyParallelSaveStrategyWrapper, which calls all_gather_object on DP sub-groups that include vLLM ranks. Since the vLLM workers never enter save_megatron_model, these collectives hang permanently.
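
For context, a minimal sketch of the failure mode (the group construction and surrounding topology here are assumed for illustration, not taken from the actual NeMo RL code): all_gather_object is a collective, so it returns only after every rank in the group has called it.

```python
import torch.distributed as dist

def exchange_shard_metadata(dp_group):
    # FullyParallelSaveStrategyWrapper-style exchange: each rank shares
    # its shard metadata with its data-parallel group before saving.
    local_metadata = {"rank": dist.get_rank(), "shards": []}
    gathered = [None] * dist.get_world_size(group=dp_group)
    # Collective call: blocks until every rank in dp_group participates.
    # If dp_group includes vLLM inference ranks that never enter
    # save_megatron_model, the training ranks wait here forever.
    dist.all_gather_object(gathered, local_metadata, group=dp_group)
    return gathered
```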

Fix

Pass fully_parallel_save=False when calling bridge.save_megatron_model() in community_import.py. The call uses inspect.signature for backward compatibility with Megatron-Bridge versions that don't yet expose the parameter, as sketched below.
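
A minimal sketch of that call pattern (bridge, model, and megatron_path are placeholder names, not the actual variables in community_import.py):

```python
import inspect

save_kwargs = {}
# Probe the installed Megatron-Bridge: older versions would raise
# TypeError if passed an unknown keyword argument.
if "fully_parallel_save" in inspect.signature(bridge.save_megatron_model).parameters:
    save_kwargs["fully_parallel_save"] = False
bridge.save_megatron_model(model, megatron_path, **save_kwargs)
```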

fully_parallel_save is a performance optimization for repeated training checkpoint saves. It is unnecessary for the one-time conversion and introduces collective overhead that deadlocks in mixed training/inference worlds.

Test plan

  • Validated on a 3×8 DGX B200 cluster with Nemotron 120B, non-colocated: 242 GB checkpoint saved successfully (previously deadlocked indefinitely)
  • Verify no regression on colocated Megatron recipes
  • Verify DTensor backend unaffected

nic-nvidia requested a review from a team as a code owner on April 7, 2026 17:22

copy-pr-bot bot commented Apr 7, 2026

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.

Pass fully_parallel_save=False to bridge.save_megatron_model() during
HF-to-Megatron conversion. FullyParallelSaveStrategyWrapper calls
all_gather_object on DP sub-groups that include vLLM ranks which never
enter save, causing a permanent deadlock.

Uses inspect.signature for backward compat with Megatron-Bridge versions
that do not yet expose the parameter (see NVIDIA-NeMo/Megatron-Bridge PR).
nic-nvidia force-pushed the fix/save-megatron-model-deadlock branch from ec2ba9b to 8c61e94 on April 8, 2026 06:08
nic-nvidia added a commit to nic-nvidia/Megatron-Bridge that referenced this pull request Apr 8, 2026
Add fully_parallel_save parameter to AutoBridge.save_megatron_model()
and model_load_save.save_megatron_model(), forwarded to CheckpointConfig.

Defaults to True (no behavior change). Callers can pass False to disable
FullyParallelSaveStrategyWrapper, which deadlocks when the distributed
world includes ranks that do not participate in the save (e.g., vLLM
inference workers in NeMo RL non-colocated setups).

Needed by NVIDIA-NeMo/RL#2226.
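
A sketch of the plumbing this commit describes (the real CheckpointConfig and save_megatron_model in Megatron-Bridge have many more fields and arguments; only the relevant forwarding is shown):

```python
from dataclasses import dataclass

@dataclass
class CheckpointConfig:
    fully_parallel_save: bool = True  # existing default, unchanged

def save_megatron_model(model, path, fully_parallel_save: bool = True):
    # Forward the new keyword into the checkpoint config. The default of
    # True preserves current behavior for existing callers; NeMo RL's
    # one-time conversion passes False to skip the
    # FullyParallelSaveStrategyWrapper.
    ckpt_config = CheckpointConfig(fully_parallel_save=fully_parallel_save)
    # ... build the state dict and run the distributed save with ckpt_config
```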
nic-nvidia (Author) commented Apr 8, 2026

Validation update (2026-04-08)

Built a clean Docker image from this PR, #2230, and NVIDIA-NeMo/Megatron-Bridge#3207, and deployed it on a 3×8 DGX B200 cluster (Nemotron Super 120B, non-colocated vLLM).

Key evidence the fix works:

  • Cleared the cached Megatron conversion to force the save_megatron_model code path in community_import.py
  • Logs show: saving checkpoint at iteration 0 to .../model__shared_models_nemotron-super-120b-bf16 in torch_dist format
  • Pre-fix: save_megatron_model deadlocked indefinitely at all_gather_object — checkpoint directory was never created
  • Post-fix: save_megatron_model was called, checkpoint directory created on NFS, distributed save began writing

The fully_parallel_save=False parameter (via inspect.signature backward compat) successfully bypasses the FullyParallelSaveStrategyWrapper that caused the deadlock.

The run subsequently hit an unrelated NCCL timeout during the distributed checkpoint write (network flake on our cluster), but the deadlock itself is resolved — the save path is no longer blocked.

The all_gather_object in determine_global_metadata deadlocks when
ranks take asymmetric time to build state dicts (e.g., expert parallel
ranks with different shard counts). Validation is unnecessary for the
one-time HF-to-Megatron conversion.
nic-nvidia added a commit to nic-nvidia/Megatron-Bridge that referenced this pull request Apr 8, 2026
…tron_model

Based on container image commit dd9729f (v0.5.0.nemotron_3_super).

Add fully_parallel_save and validate_access_integrity parameters to
AutoBridge.save_megatron_model() and model_load_save.save_megatron_model().

Needed by NVIDIA-NeMo/RL#2226.
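
With both keywords exposed, the guarded-kwargs pattern from the fix extends naturally to the second flag as well (again a sketch with placeholder names):

```python
import inspect

# Disable both optional behaviors when the installed Megatron-Bridge
# supports them; older versions simply get neither keyword.
params = inspect.signature(bridge.save_megatron_model).parameters
save_kwargs = {
    name: False
    for name in ("fully_parallel_save", "validate_access_integrity")
    if name in params
}
bridge.save_megatron_model(model, megatron_path, **save_kwargs)
```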
chtruong814 added the needs-follow-up label on Apr 10, 2026


Development

Successfully merging this pull request may close these issues.

save_megatron_model deadlocks during HF-to-Megatron checkpoint conversion (fully_parallel_save)
