NNX migration prep (3/N): TrainState, model creation, and end-to-end training loop#3500
Draft: ecnal-cienet wants to merge 3 commits into main
Conversation
- pure_nnx: a flag to choose pure NNX logic when NNX and Linen models co-exist.
- init_state_fn: a function to initialize the model state for training; it is set to a different function for NNX vs. Linen.
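The dispatch described above can be sketched as follows. This is a minimal illustration; `select_init_state_fn` and the two placeholder init functions are hypothetical names, not the actual MaxText code.

```python
# Minimal sketch of the pure_nnx dispatch; all names here are hypothetical.
def init_state_linen(config):
  """Placeholder for the Linen state-initialization path."""
  return {"framework": "linen"}

def init_state_nnx(config):
  """Placeholder for the NNX state-initialization path."""
  return {"framework": "nnx"}

def select_init_state_fn(config):
  """Pick init_state_fn based on the pure_nnx flag; defaults to the Linen path."""
  return init_state_nnx if config.get("pure_nnx", False) else init_state_linen
```

With `pure_nnx` unset, the existing Linen path is selected unchanged, which is what keeps the default workflow backward compatible.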
- Add utils to manipulate the NNX shardings using the abstract state of a model
  - also add unit tests for the utils
- Extract the mesh creation function to maxtext_utils.get_mesh_from_config()
  - also add unit tests for this function
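A `get_mesh_from_config`-style helper might look like the sketch below. The signature and config-derived arguments are assumptions for illustration, not the actual MaxText implementation.

```python
import numpy as np
import jax
from jax.sharding import Mesh

def get_mesh_from_config(mesh_shape, axis_names):
  """Hypothetical: build a device Mesh from a config-style shape and axis names."""
  devices = np.asarray(jax.devices()).reshape(mesh_shape)
  return Mesh(devices, axis_names)

# On a single-process host this builds a mesh over all local devices,
# with the full device count on the "data" axis.
mesh = get_mesh_from_config((len(jax.devices()), 1), ("data", "model"))
```

Centralizing mesh construction in one helper makes it straightforward to unit-test the axis naming independently of the training loop.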
Note:
flax v0.12 emits DeprecationWarning in multiple places:
- DeprecationWarning: '.value' access is now deprecated. Use variable.get_value() or variable[...] (for [Array]).
- DeprecationWarning: 'VariableState' was removed; this is just an alias to 'Variable'. Please use 'Variable' directly instead.

Since this code must also work with post-training, which currently requires flax v0.11, we did not change the code for these warnings.
- Add TrainStateNNX (layers/train_state_nnx.py) with checkpoint and unit tests
- Refactor model_creation_utils with create_nnx_abstract_model(); add NNX support to muon_utils
- Add get_abstract_state_nnx() and get_nnx_named_sharding_with_scan_axis() to maxtext_utils.py
- Wire NNX train state into train.py and train_utils.py with pure_nnx dispatch
NNX Migration Route Map
1. PR #3427: introduce the pure_nnx flag, init_state_fn, and TrainStateNNX, plus NNX utils. The Linen workflow is unchanged.
2. PR #3470: add get_abstract_state_nnx, get_named_sharding_nnx, set_named_sharding_nnx, get_partition_spec_nnx, and get_mesh_from_config.
3. This PR: pure_nnx=True enables full NNX training; the default remains False.
4. Follow-up: make pure_nnx=True the default.

Description
TrainStateNNX and unit tests

src/maxtext/layers/train_state_nnx.py implements the TrainStateNNX container, which holds an NNX model and its Optax optimizer as a single composable unit. Unit tests cover state creation, optimizer step, and Orbax checkpoint round-trip:
- tests/unit/train_state_nnx_test.py
- tests/unit/train_state_nnx_checkpoint_test.py

Muon optimizer and model creation utilities
- muon_utils.py: updated to support NNX models alongside Linen.
- model_creation_utils.py: refactored to expose create_nnx_abstract_model and from_config, which create and initialize an NNX model from a config without running a full forward pass.

End-to-end training loop (train.py + supporting modules)

The core training loop in train.py now dispatches on pure_nnx at every major decision point:
- Sharding (sharding.py): maybe_update_params_sharding_with_opt dispatches to a new maybe_update_params_sharding_with_opt_nnx, which extracts nnx.Param-only shardings from the flat nnx.State without accessing .params.
- Gradient accumulation (gradient_accumulation.py): the NNX path uses nnx.value_and_grad with nnx.split/nnx.merge per microbatch inside jax.lax.scan, carrying the non-Param rest state (RNGs) through the loop.
- Train/eval signatures (maxtext_utils.py): get_functional_train_with_signature and get_functional_eval_with_signature use a 2-element in_shardings tuple (state, batch) for NNX (no rng argument), vs. a 3-element tuple for Linen.
- Checkpointing (checkpointing.py): maybe_save_checkpoint converts nnx.State to a plain dict via state.to_pure_dict() before the Orbax save; load_state_if_possible restores via nnx.replace_by_pure_dict(abstract_state, dict).
Unit tests:
Checklist
Before submitting this PR, please make sure (put X in square brackets):
gemini-review label.