feat(pt/dpmodel): add use_default_pf #5356
base: master
```diff
@@ -3189,6 +3189,11 @@ def loss_ener() -> list[Argument]:
         "atomic prefactor force", label="atom_pref", abbr="pf"
     )
     doc_limit_pref_pf = limit_pref("atomic prefactor force")
+    doc_use_default_pf = (
+        "If true, use default atom_pref of 1.0 for all atoms when atom_pref data is not provided. "
+        "This allows using the prefactor force loss (pf) without requiring atom_pref.npy files in training data. "
+        "When atom_pref.npy is provided, it will be used as-is regardless of this setting."
+    )
```
Comment on lines 3190 to 3195
Suggested change (appending a clarification about when `atom_pref.npy` is required):

```suggestion
    doc_limit_pref_pf = limit_pref("atomic prefactor force")
    doc_use_default_pf = (
        "If true, use default atom_pref of 1.0 for all atoms when atom_pref data is not provided. "
        "This allows using the prefactor force loss (pf) without requiring atom_pref.npy files in training data. "
        "When atom_pref.npy is provided, it will be used as-is regardless of this setting."
    ) + (
        " When `use_default_pf` is true, `atom_pref.npy` is not required even when the "
        "prefactor is non-zero; a default per-atom prefactor of 1.0 will be used when "
        "`atom_pref.npy` is missing. When `use_default_pf` is false, `atom_pref.npy` "
        "must still be provided whenever the prefactor is non-zero."
    )
```

Alternative suggested change (noting backend coverage):

```suggestion
    doc_limit_pref_pf = limit_pref("atomic prefactor force")
    doc_use_default_pf = (
        "If true, use default atom_pref of 1.0 for all atoms when atom_pref data is not provided. "
        "This allows using the prefactor force loss (pf) without requiring atom_pref.npy files in training data. "
        "When atom_pref.npy is provided, it will be used as-is regardless of this setting. "
        "Note: at present this option is only effective for the PyTorch/DPModel backend; "
        "other backends (e.g. TensorFlow and Paddle) accept this flag but ignore it."
    )
```
`use_default_pf` is accepted here but can be a silent no-op on the Paddle backend.

This option is now valid in the schema, but the Paddle training path (`deepmd/pd/train/training.py`, lines 1316-1329) forwards `loss_params` into a loss constructor that does not consume `use_default_pf` as an active field, so users can set it without effect. Please either implement it in the Paddle loss path or explicitly reject it there to avoid backend-inconsistent behavior.
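The "explicitly reject" option could be as small as the following sketch. The helper name `validate_pd_loss_params` is invented for illustration; the real change would live next to the code in `deepmd/pd/train/training.py` that forwards `loss_params`:

```python
# Hypothetical sketch of the "reject" option: fail fast when the Paddle
# loss path would otherwise silently drop use_default_pf. The dict mirrors
# the loss_params that the review says is forwarded into the loss
# constructor; the function name is illustrative, not deepmd's API.
def validate_pd_loss_params(loss_params: dict) -> dict:
    if loss_params.get("use_default_pf"):
        raise NotImplementedError(
            "use_default_pf is not implemented for the Paddle backend; "
            "remove it from the loss section or use the PyTorch/DPModel backend."
        )
    return loss_params
```

Raising up front turns a silent behavioral difference between backends into an immediate, explainable error.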
🤖 Prompt for AI Agents

```text
Verify each finding against the current code and only fix it if needed.
In `@deepmd/utils/argcheck.py` around lines 3307 - 3313, the schema now accepts
use_default_pf but the Paddle training path silently ignores it; update the
Paddle loss handling in deepmd/pd/train/training.py so that when loss_params
(the dict forwarded into the loss constructor) contains "use_default_pf" it is
either honored by passing it into the Paddle loss implementation or rejected
up-front. Specifically, either (A) extend the Paddle loss constructor(s)
referenced where loss_params is passed (look for the loss creation block around
the code forwarding loss_params at lines ~1316-1329) to accept and act on
use_default_pf, or (B) validate and remove/raise on presence of "use_default_pf"
before forwarding (e.g., check loss_params.get("use_default_pf") and raise a
clear error), and ensure the change references the same loss_params variable and
the Paddle training module functions so behavior is consistent with the other
backends.
```
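For contrast, option (A) would have the Paddle loss actively consume the flag. A minimal self-contained sketch of that behavior follows; the class and method names are illustrative stand-ins, not deepmd's actual Paddle loss API:

```python
class EnerLossSketch:
    """Illustrative loss stub that consumes use_default_pf instead of
    silently dropping it (stand-in for deepmd's Paddle energy loss)."""

    def __init__(self, use_default_pf: bool = False, **kwargs):
        self.use_default_pf = use_default_pf

    def resolve_atom_pref(self, atom_pref, natoms: int):
        # A data-provided atom_pref is always used as-is.
        if atom_pref is not None:
            return atom_pref
        # Otherwise fall back to 1.0 per atom only when requested;
        # a zero prefactor keeps the pf term suppressed.
        return [1.0] * natoms if self.use_default_pf else [0.0] * natoms
```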
````diff
@@ -85,6 +85,22 @@ And the `loss` section in the training input script should be set as follows.
 }
 ```

+If `atom_pref.npy` is not provided in the training data, one can set `use_default_pf` to `true` to use a default atom preference of 1.0 for all atoms. This allows using the prefactor force loss (`pf` loss) without requiring `atom_pref.npy` files. When `atom_pref.npy` is provided, it will be used as-is regardless of this setting.
````
Suggested change:

```suggestion
If `atom_pref.npy` is not provided in the training data and you are using the PyTorch/DP backend, you can set `use_default_pf` to `true` to use a default atom preference of 1.0 for all atoms. This allows using the prefactor force loss (`pf` loss) without requiring `atom_pref.npy` files. When `atom_pref.npy` is provided, it will be used as-is regardless of this setting. For TensorFlow/Paddle backends (for example, when using `EnerStdLoss`), the `use_default_pf` option is accepted but ignored and therefore has no effect.
```
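Concretely, a loss section using the new key might look like the following sketch; the prefactor values are placeholders, and only `use_default_pf` is the option introduced by this PR:

```json
{
  "loss": {
    "type": "ener",
    "start_pref_e": 0.02,
    "limit_pref_e": 1,
    "start_pref_pf": 1,
    "limit_pref_pf": 1,
    "use_default_pf": true
  }
}
```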
The `use_default_pf` behavior change in `EnergyLoss.call()` (overriding `find_atom_pref`) isn't covered by the existing dpmodel loss unit tests (e.g., `source/tests/common/dpmodel/test_loss_ener.py`). Please add a small test case asserting that with `find_atom_pref=0.0` and `use_default_pf=True`, the pf contribution/metrics are computed, and that with `use_default_pf=False` they remain suppressed.
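The requested assertions could be sketched as follows. A real test would instantiate deepmd's `EnergyLoss` and inspect the pf metrics; `pf_prefactor` here is a hypothetical stand-in for the behavior under test:

```python
def pf_prefactor(find_atom_pref: float, natoms: int, use_default_pf: bool):
    # Mirrors the described behavior: with find_atom_pref == 0.0 the data
    # carries no atom_pref, so use_default_pf=True substitutes 1.0 per
    # atom, while use_default_pf=False leaves the pf term suppressed.
    if find_atom_pref != 0.0:
        return None  # data-provided atom_pref takes precedence
    return [1.0] * natoms if use_default_pf else [0.0] * natoms


def test_use_default_pf():
    assert pf_prefactor(0.0, 4, use_default_pf=True) == [1.0] * 4
    assert pf_prefactor(0.0, 4, use_default_pf=False) == [0.0] * 4


test_use_default_pf()
```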