feat(dp/pt): add default_fparam #4888
Conversation
📝 Walkthrough

Adds optional `default_fparam` across fitting stacks, auto-fills missing `fparam` in `GeneralFitting`, exposes `has_default_fparam()` through atomic models and inference wrappers, conditions data/train/stat behavior on that flag, and threads the new parameter through CLIs, backends (JAX/PD/TF/PT), and tests.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    participant Data as Data loader
    participant Train as Training loop
    participant Model as Model wrapper
    participant Atomic as AtomicModel
    participant Fit as GeneralFitting
    Data->>Train: batch (descrpt, maybe fparam, find_fparam)
    Train->>Model: call(inputs)
    Model->>Atomic: forward(descrpt, fparam?)
    Atomic->>Fit: _forward_common(descrpt, fparam)
    alt fparam is None and Fit.numb_fparam > 0 and Fit.has_default_fparam()
        Note right of Fit: fill missing fparam
        Fit->>Fit: tile default_fparam_tensor -> fparam (nf x numb_fparam)
    end
    Fit-->>Atomic: outputs
    Atomic-->>Model: outputs
    Model-->>Train: outputs
```
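The fill step in the diagram above can be sketched framework-free. This is a minimal NumPy illustration, not the actual `GeneralFitting` code; the helper name `resolve_fparam` is hypothetical, while `numb_fparam` and `default_fparam` follow the PR's vocabulary:

```python
import numpy as np

def resolve_fparam(fparam, nf, numb_fparam, default_fparam):
    """Return a (nf, numb_fparam) frame-parameter array, tiling the
    stored default when the caller did not supply one."""
    if numb_fparam == 0:
        return None  # model takes no frame parameters
    if fparam is None:
        if default_fparam is None:
            raise ValueError("fparam is required but was not provided")
        # tile the stored default to one row per frame
        fparam = np.tile(np.asarray(default_fparam), (nf, 1))
    return np.asarray(fparam).reshape(nf, numb_fparam)

# an explicitly supplied fparam wins; the default is only a fallback
print(resolve_fparam(None, 3, 2, [0.5, 1.0]).shape)  # (3, 2)
```

The key property is that the default only ever fills the `fparam is None` branch, so data systems that do ship `fparam.npy` are unaffected.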
```mermaid
sequenceDiagram
    autonumber
    participant Train as Training setup
    participant Model as Model wrapper
    participant Req as Data requirement builder
    Train->>Model: has_default_fparam()
    Model-->>Train: bool
    Train->>Req: add("fparam", must = not has_default)
```
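The requirement flip in the second diagram can be illustrated with a small stand-in for the data-requirement builder. A hedged sketch: the dict stands in for deepmd's real requirement object, and `build_fparam_requirement` is a hypothetical helper, not an API from the PR:

```python
def build_fparam_requirement(dim_fparam, has_default_fparam):
    """Describe whether each data system must ship fparam.npy."""
    if dim_fparam <= 0:
        return None  # model consumes no frame parameters
    return {
        "key": "fparam",
        "ndof": dim_fparam,
        # with a stored default, fparam.npy becomes optional
        "must": not has_default_fparam,
    }

print(build_fparam_requirement(2, True))
```

This is the semantics the review verifies below: `must` is only asserted when the model cannot fall back to a default.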
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
Actionable comments posted: 10
🔭 Outside diff range comments (1)
deepmd/pt/model/task/fitting.py (1)
233-246: Store-and-validate default_fparam is mostly correct, but allow it only when numb_fparam > 0

Currently, `default_fparam` can be provided even when `numb_fparam == 0`, which makes `has_default_fparam()` return True but has no effect at runtime. This can mislead callers and wrappers.
Apply this diff to enforce consistency:

```diff
-        if self.default_fparam is not None:
-            if self.numb_fparam > 0:
-                assert len(self.default_fparam) == self.numb_fparam, (
-                    "default_fparam length mismatch!"
-                )
-            self.register_buffer(
+        if self.default_fparam is not None:
+            if self.numb_fparam <= 0:
+                raise ValueError(
+                    "default_fparam was provided but numb_fparam == 0; "
+                    "remove default_fparam or set a positive numb_fparam."
+                )
+            if len(self.default_fparam) != self.numb_fparam:
+                raise ValueError("default_fparam length mismatch!")
+            self.register_buffer(
                 "default_fparam_tensor",
                 torch.tensor(
                     np.array(self.default_fparam), dtype=self.prec, device=device
                 ),
             )
         else:
             self.default_fparam_tensor = None
```
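The stricter check proposed above can be exercised in isolation. A framework-free sketch, reusing the diff's error messages; `FittingSketch` is a hypothetical class, and `register_buffer`/`torch.tensor` are replaced by a plain NumPy attribute:

```python
import numpy as np

class FittingSketch:
    """Mimics only the default_fparam validation from the diff above."""

    def __init__(self, numb_fparam, default_fparam=None):
        self.numb_fparam = numb_fparam
        self.default_fparam = default_fparam
        if default_fparam is not None:
            if numb_fparam <= 0:
                raise ValueError(
                    "default_fparam was provided but numb_fparam == 0; "
                    "remove default_fparam or set a positive numb_fparam."
                )
            if len(default_fparam) != numb_fparam:
                raise ValueError("default_fparam length mismatch!")
            # stand-in for register_buffer("default_fparam_tensor", ...)
            self.default_fparam_tensor = np.asarray(default_fparam)
        else:
            self.default_fparam_tensor = None

    def has_default_fparam(self):
        return self.default_fparam is not None
```

With this ordering, an inconsistent configuration fails at construction time instead of silently advertising a default that is never used.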
🧹 Nitpick comments (21)
deepmd/dpmodel/fitting/general_fitting.py (1)
97-100: Clarify docstring: expected length and behavior when numb_fparam == 0

Minor doc touch-up to set user expectations and prevent misuse.
Apply this diff:
Apply this diff:

```diff
-    default_fparam: list[float], optional
-        The default frame parameter. If set, when `fparam.npy` files are not included in the data system,
-        this value will be used as the default value for the frame parameter in the fitting net.
+    default_fparam: list[float], optional
+        The default frame parameter. If set and `fparam.npy` files are not included in the data system,
+        this value will be used as the default value for the frame parameter in the fitting net.
+        Length must equal `numb_fparam`. Ignored when `numb_fparam == 0`.
```

source/tests/universal/dpmodel/fitting/test_fitting.py (1)
55-56: Good addition; consider exercising parity across all fitting param builders.

The default_fparam payload here is correct and matches numb_param. For better coverage and parity with the broader API change, consider adding the same key to FittingParamDos, FittingParamDipole, FittingParamPolar, and FittingParamProperty so tests also exercise default_fparam across these fittings.
Also, add a quick assertion to ensure provided default_fparam length equals numb_fparam when numb_param > 0.
Example snippet for other builders (outside this hunk):
"default_fparam": [1.0] * numb_param if numb_param > 0 else None,deepmd/pt/model/task/polarizability.py (2)
78-81: Docstring: clarify shape/consistency constraints for default_fparam

Add that the length of default_fparam must equal numb_fparam and its dtype is float-like. This prevents silent shape mismatches.

```diff
-    default_fparam: list[float], optional
-        The default frame parameter. If set, when `fparam.npy` files are not included in the data system,
-        this value will be used as the default value for the frame parameter in the fitting net.
+    default_fparam: list[float], optional
+        The default frame parameter. If set, when `fparam.npy` files are not included in the data system,
+        this value will be used as the default value for the frame parameter in the fitting net.
+        When provided, len(default_fparam) must equal `numb_fparam`.
```
103-103: Tighten type annotation for default_fparam

Use Optional[list[float]] to align with the docstring and improve static typing.

```diff
-        default_fparam: Optional[list] = None,
+        default_fparam: Optional[list[float]] = None,
```

deepmd/pt/model/task/dipole.py (2)
75-78: Docstring: clarify shape/consistency constraints for default_fparam

As with other tasks, document that the length of default_fparam must equal numb_fparam (float-like elements).

```diff
-    default_fparam: list[float], optional
-        The default frame parameter. If set, when `fparam.npy` files are not included in the data system,
-        this value will be used as the default value for the frame parameter in the fitting net.
+    default_fparam: list[float], optional
+        The default frame parameter. If set, when `fparam.npy` files are not included in the data system,
+        this value will be used as the default value for the frame parameter in the fitting net.
+        When provided, len(default_fparam) must equal `numb_fparam`.
```
99-99: Tighten type annotation for default_fparam

Align the signature with the docstring and other modules by using Optional[list[float]].

```diff
-        default_fparam: Optional[list] = None,
+        default_fparam: Optional[list[float]] = None,
```

deepmd/entrypoints/test.py (1)
341-341: Normalize find_fparam check and use dict.get for safety

Use the same convention as find_energy/find_force and avoid comparing to a float. This also avoids a KeyError if the key is missing.

```diff
-    if dp.get_dim_fparam() > 0 and test_data["find_fparam"] != 0.0:
+    if dp.get_dim_fparam() > 0 and test_data.get("find_fparam", 0) == 1:
```

deepmd/pt/model/task/property.py (1)
94-96: Expose accurate type and document the new parameter

- Type: annotate as Optional[list[float]] for clarity and consistency with DP.
- Docs: please add default_fparam to the class docstring in this file as you did on the DP side, so users of the PT API see it.

Apply this diff for typing:

```diff
-        default_fparam: Optional[list] = None,
+        default_fparam: Optional[list[float]] = None,
```

If helpful, I can draft the docstring stanza for default_fparam.
deepmd/dpmodel/fitting/ener_fitting.py (1)
49-50: Tighten default_fparam typing

Consider annotating the element type for clarity and consistency with other modules.

Apply this diff:

```diff
-        default_fparam: Optional[list] = None,
+        default_fparam: Optional[list[float]] = None,
```

deepmd/pt/model/task/ener.py (2)
59-61: Use a precise type hint for default_fparam

Align the type hint with the rest of the codebase/docstrings (list[float] instead of a raw list). This improves static checking and consistency.

Apply:

```diff
-        default_fparam: Optional[list] = None,
+        default_fparam: Optional[list[float]] = None,
```
44-61: Document the new parameter in class/init docstring

EnergyFittingNet exposes default_fparam but does not document it. Please add a short parameter note to make the API discoverable (mirroring the InvarFitting docs). Example (add to the init docstring or class docstring):

```
default_fparam: list[float], optional
    The default frame parameter. If set, when fparam.npy files are not included in the data system,
    this value will be used as the default value for the frame parameter in the fitting net.
```

deepmd/pt/model/task/invar_fitting.py (1)
179-181: Fix minor typo in docstring ("alculate" -> "calculate")

Minor polish.

```diff
-    """Based on embedding net output, alculate total energy.
+    """Based on embedding net output, calculate total energy.
```

deepmd/dpmodel/fitting/dos_fitting.py (1)
49-50: Tighten type hint to list[float] for consistency

Use list[float] to match other modules and improve type safety.

```diff
-        default_fparam: Optional[list] = None,
+        default_fparam: Optional[list[float]] = None,
```

deepmd/dpmodel/fitting/dipole_fitting.py (1)
37-37: Typo in class docstring ("diploe")

Fix the spelling to "dipole".

```diff
-    r"""Fitting rotationally equivariant diploe of the system.
+    r"""Fitting rotationally equivariant dipole of the system.
```

deepmd/utils/argcheck.py (6)
1778-1783: Doc incorrectly marks default_fparam as PT-only

default_fparam is supported beyond PT (e.g., the dpmodel general_fitting and dipole_fitting accept it). Remove the PT-only qualifier from the doc here to avoid misleading users.

```diff
-        Argument(
-            "default_fparam",
-            list[float],
-            optional=True,
-            default=None,
-            doc=doc_only_pt_supported + doc_default_fparam,
-        ),
+        Argument(
+            "default_fparam",
+            list[float],
+            optional=True,
+            default=None,
+            doc=doc_default_fparam,
+        ),
```

1860-1865: Doc incorrectly marks default_fparam as PT-only (DOS fitting)

Same issue; please drop the PT-only qualifier with the same diff as above.

1921-1926: Doc incorrectly marks default_fparam as PT-only (Property fitting)

Same fix as above.

2005-2010: Doc incorrectly marks default_fparam as PT-only (Polar fitting)

Same fix as above.

2084-2089: Doc incorrectly marks default_fparam as PT-only (Dipole fitting)

Same fix as above.
1747-1751: Consider clarifying numb_fparam doc to reflect default_fparam

The current text ("If set to >0, file fparam.npy should be included...") is now conditionally true. Recommend updating doc_numb_fparam to mention that fparam.npy is not required when default_fparam is provided.
If you want, I can open a follow-up PR to adjust the wording consistently across ener/dos/property/polar/dipole.
source/tests/consistent/fitting/test_ener.py (1)
73-76: LGTM: tests correctly exercise explicit vs default fparam paths
- Parameterization with (numb_fparam, default_fparam) is clear.
- Execution passes fparam only when required (numb_fparam > 0 and default_fparam is None), validating default path.
- PD backend skip condition updated appropriately.
Minor nit: repeated tuple unpacking increases verbosity; consider a small helper to unpack self.param once. For example:

```python
def _unpack(self):
    (
        resnet_dt,
        precision,
        mixed_types,
        (numb_fparam, default_fparam),
        (numb_aparam, use_aparam_as_mask),
        atom_ener,
    ) = self.param
    return (
        resnet_dt,
        precision,
        mixed_types,
        numb_fparam,
        default_fparam,
        numb_aparam,
        use_aparam_as_mask,
        atom_ener,
    )
```

Also applies to: 85-99, 198-215, 227-241, 243-261, 263-281, 283-310
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these settings in your CodeRabbit configuration.
📒 Files selected for processing (29)
- deepmd/dpmodel/atomic_model/base_atomic_model.py (1 hunk)
- deepmd/dpmodel/atomic_model/dp_atomic_model.py (1 hunk)
- deepmd/dpmodel/fitting/dipole_fitting.py (3 hunks)
- deepmd/dpmodel/fitting/dos_fitting.py (2 hunks)
- deepmd/dpmodel/fitting/ener_fitting.py (2 hunks)
- deepmd/dpmodel/fitting/general_fitting.py (7 hunks)
- deepmd/dpmodel/fitting/invar_fitting.py (3 hunks)
- deepmd/dpmodel/fitting/polarizability_fitting.py (3 hunks)
- deepmd/dpmodel/fitting/property_fitting.py (3 hunks)
- deepmd/dpmodel/infer/deep_eval.py (1 hunk)
- deepmd/dpmodel/model/make_model.py (1 hunk)
- deepmd/entrypoints/test.py (2 hunks)
- deepmd/infer/deep_eval.py (2 hunks)
- deepmd/pt/infer/deep_eval.py (1 hunk)
- deepmd/pt/model/atomic_model/base_atomic_model.py (1 hunk)
- deepmd/pt/model/atomic_model/dp_atomic_model.py (1 hunk)
- deepmd/pt/model/model/make_model.py (1 hunk)
- deepmd/pt/model/task/dipole.py (3 hunks)
- deepmd/pt/model/task/dos.py (2 hunks)
- deepmd/pt/model/task/ener.py (2 hunks)
- deepmd/pt/model/task/fitting.py (7 hunks)
- deepmd/pt/model/task/invar_fitting.py (3 hunks)
- deepmd/pt/model/task/polarizability.py (3 hunks)
- deepmd/pt/model/task/property.py (2 hunks)
- deepmd/pt/train/training.py (2 hunks)
- deepmd/pt/utils/stat.py (1 hunk)
- deepmd/utils/argcheck.py (10 hunks)
- source/tests/consistent/fitting/test_ener.py (16 hunks)
- source/tests/universal/dpmodel/fitting/test_fitting.py (1 hunk)
🧰 Additional context used

🧬 Code Graph Analysis (13)

Analyzed files: deepmd/pt/model/atomic_model/dp_atomic_model.py, deepmd/dpmodel/infer/deep_eval.py, deepmd/pt/infer/deep_eval.py, deepmd/pt/model/atomic_model/base_atomic_model.py, deepmd/pt/model/model/make_model.py, deepmd/dpmodel/model/make_model.py, deepmd/dpmodel/atomic_model/dp_atomic_model.py, deepmd/dpmodel/atomic_model/base_atomic_model.py, deepmd/pt/train/training.py, deepmd/entrypoints/test.py, deepmd/infer/deep_eval.py, deepmd/dpmodel/fitting/general_fitting.py, deepmd/pt/model/task/fitting.py.

Each of these files cross-references the same set of `has_default_fparam` (and, where relevant, `get_dim_fparam`) definitions:

- deepmd/dpmodel/atomic_model/base_atomic_model.py: has_default_fparam (91-93)
- deepmd/dpmodel/atomic_model/dp_atomic_model.py: get_dim_fparam (228-230), has_default_fparam (236-238)
- deepmd/dpmodel/fitting/general_fitting.py: get_dim_fparam (226-228), has_default_fparam (234-236)
- deepmd/dpmodel/infer/deep_eval.py: get_dim_fparam (115-117), has_default_fparam (123-125)
- deepmd/dpmodel/model/make_model.py: get_dim_fparam (559-561), has_default_fparam (567-569)
- deepmd/infer/deep_eval.py: has_default_fparam (163-165, 423-425)
- deepmd/pt/model/atomic_model/base_atomic_model.py: has_default_fparam (138-140)
- deepmd/pt/model/atomic_model/dp_atomic_model.py: get_dim_fparam (333-335), has_default_fparam (337-339)
- deepmd/pt/infer/deep_eval.py: get_dim_fparam (182-184), has_default_fparam (190-192)
- deepmd/pt/model/model/make_model.py: get_dim_fparam (522-524), has_default_fparam (527-529)
- deepmd/pt/model/task/fitting.py: get_dim_fparam (437-439), has_default_fparam (441-443)
- deepmd/utils/compat.py: _model (67-88)
- source/tests/tf/common.py: numb_fparam (906-907, 1107-1108)
🔇 Additional comments (31)
deepmd/dpmodel/atomic_model/base_atomic_model.py (1)
91-94: LGTM: base API addition is consistent and safe default

Providing a default False keeps backward compatibility and aligns with the DP/PT counterparts that delegate to fitting layers.
deepmd/infer/deep_eval.py (2)
163-166: LGTM: backend default is explicit and backward compatible

Providing a non-abstract default that returns False avoids breaking existing backends and lets upgraded backends override as needed.

423-426: LGTM: high-level forwarding preserves single entry point

Forwarding to the backend keeps the API coherent at the DeepEval facade level.

deepmd/dpmodel/fitting/general_fitting.py (3)

126-137: Constructor wiring for default_fparam looks good

Parameter addition and storage (`self.default_fparam`) are consistent with the feature and remain backward compatible.

234-237: LGTM: public query API is appropriate

A simple boolean predicate keeps the check cheap and composable by wrappers.

336-336: LGTM: serialization includes default_fparam

Including `default_fparam` in the serialized payload ensures reproducibility. Constructor defaults keep deserialization backward compatible when the key is missing.

deepmd/dpmodel/infer/deep_eval.py (1)

123-126: Expose has_default_fparam in NumPy DeepEval — LGTM.

Delegating to self.dp.has_default_fparam() is consistent with the new API chain (CM → AtomicModel → Fitting). This enables external tooling/tests to conditionally supply fparam.
deepmd/pt/infer/deep_eval.py (1)
190-193: Expose has_default_fparam in PT DeepEval — LGTM.

Consistent with the NumPy backend and PT wrappers; unblocks conditional fparam handling in evaluation and tests.

deepmd/dpmodel/model/make_model.py (1)

567-570: Forwarding has_default_fparam through CM — LGTM.

This keeps the model wrapper aligned with the API surface and enables higher-level wrappers to query default fparam availability.

deepmd/pt/model/task/polarizability.py (1)

145-146: LGTM: correct propagation to base

Forwarding default_fparam to the base class aligns with the new fitting API and serialization paths.

deepmd/pt/model/model/make_model.py (1)

526-530: LGTM: exposes has_default_fparam at the model wrapper level

This keeps the PT wrapper API consistent with atomic models and enables frontends/tests to query default fparam availability.

deepmd/pt/model/task/dipole.py (1)

121-121: LGTM: correct propagation to base

Forwarding default_fparam to the base fitting preserves default injection and serialization.

deepmd/entrypoints/test.py (1)

301-306: has_default_fparam is routed to backend implementations — no change needed

Verified: dp.has_default_fparam() delegates to the concrete backend, and the PT/DP backends further delegate to model/atomic-model implementations, so the entrypoint's conditional will only require fparam when the concrete backend/model reports no default.
Relevant locations:
- deepmd/entrypoints/test.py:304 — must=not dp.has_default_fparam()
- deepmd/infer/deep_eval.py:163 — DeepEvalBackend.has_default_fparam() defaults to False
- deepmd/infer/deep_eval.py:425 — DeepEval.has_default_fparam() -> self.deep_eval.has_default_fparam()
- deepmd/pt/infer/deep_eval.py:190 — PT backend delegates to dp.model["Default"].has_default_fparam()
- deepmd/dpmodel/infer/deep_eval.py:123 — DP backend delegates to dp.has_default_fparam()
- deepmd/pt/model/atomic_model/dp_atomic_model.py:337 and deepmd/dpmodel/atomic_model/dp_atomic_model.py:236 — model-level delegation to fitting layers
No fixes required.
deepmd/dpmodel/fitting/property_fitting.py (2)
68-71: Docstring addition for default_fparam is clear and helpful

The parameter description is concise and aligns with the new behavior.

117-118: Forwarding default_fparam to the base class looks correct

No further action needed here.

deepmd/pt/train/training.py (1)

1313-1317: Conditional fparam requirement integrates correctly with has_default_fparam()

Tying must to not _model.has_default_fparam() matches the intended semantics. Looks good.

deepmd/pt/model/task/property.py (1)

115-116: Forwarding default_fparam to super() is correct

The PT wrapper cleanly propagates the new option.

deepmd/dpmodel/fitting/ener_fitting.py (1)

74-75: LGTM on forwarding default_fparam

The parameter is correctly plumbed to the base class.

deepmd/dpmodel/atomic_model/dp_atomic_model.py (1)

236-239: has_default_fparam() delegation is correct and completes the public API

Clean delegation to the fitting layer; matches the rest of the stack.

deepmd/pt/model/task/ener.py (1)

78-80: LGTM: forwarding default_fparam to base

Forwarding the new argument to the superclass is correct and consistent with the broader default_fparam rollout.

deepmd/dpmodel/fitting/invar_fitting.py (2)

113-116: Docstring addition is clear and helpful

The documentation for default_fparam is concise and matches intended behavior. Good addition.

144-145: GeneralFitting already validates length and serializes default_fparam — no change required

Short check: I scanned the implementations; both the PT and DP/NumPy GeneralFitting implementations assert the length of default_fparam against numb_fparam (when provided) and include default_fparam in their serialize() output.
Files/locations found:
- deepmd/pt/model/task/fitting.py
- Validation + buffer registration: lines ~307–316 (assert len(self.default_fparam) == self.numb_fparam; register default_fparam_tensor)
- serialize includes "default_fparam": lines ~397–400
- has_default_fparam(): lines ~441–443
- deepmd/dpmodel/fitting/general_fitting.py
- Validation + tensor creation: lines ~186–191 (assert len(self.default_fparam) == self.numb_fparam; self.default_fparam_tensor = np.array(...))
- serialize includes "default_fparam": lines ~336–337
- default_fparam usage when building fparam tiles: lines ~428–431
- deepmd/dpmodel/fitting/invar_fitting.py (file under review)
- default_fparam parameter declared: lines ~144–145
- threaded to base ctor: lines ~180–181
Conclusion: invar_fitting passing default_fparam to the base is sufficient — the base classes perform the requested validation and serialization.
deepmd/pt/model/task/invar_fitting.py (2)
83-86: Docstring addition is consistent with DPModel variant

Clear explanation of default_fparam and its effect. Matches the DPModel InvarFitting docs.

109-110: LGTM: parameter threading is correct; consider keeping type hints consistent codebase-wide

The parameter is added and forwarded correctly. It's already typed as Optional[list[float]], which is consistent with other modules.
Also applies to: 135-136
deepmd/dpmodel/fitting/dos_fitting.py (1)
74-75: LGTM: default_fparam correctly forwarded to base

Forwarding to `super().__init__` is correct and aligns with the new API.
deepmd/dpmodel/fitting/polarizability_fitting.py (2)
93-96: Docstring addition is clear and mirrors other fittings

The new parameter is well-documented and consistent with Invar/Ener docs.

123-124: LGTM: parameter added and forwarded properly

The type hint and forwarding to the base class look correct.
Also applies to: 171-172
deepmd/dpmodel/fitting/dipole_fitting.py (2)
87-90: Good addition: documents default_fparam usage

Docstring clearly states the behavior when fparam is missing. This aligns with the base GeneralFitting behavior.
116-117: Constructor and forwarding look correct
- Added default_fparam: Optional[list[float]] in the signature.
- Properly forwarded to the base via super(..., default_fparam=default_fparam).
Also applies to: 151-152
deepmd/pt/model/task/fitting.py (2)
207-210: Docstring addition aligns with behavior

The new default_fparam doc reads well and matches the intended behavior.

542-548: Default-fparam fill path looks good

Fills fparam from the registered buffer when not provided by the caller. Shapes and tiling are correct given nf x nloc x nd descriptors.
Code scanning / CodeQL posted six "Unused local variable" notices (Note, test) on the tuple unpacking inside the `skip_tf` property:

```python
@property
def skip_tf(self) -> bool:
    (
        resnet_dt,
        precision,
        mixed_types,
        (numb_fparam, default_fparam),
        (numb_aparam, use_aparam_as_mask),
        atom_ener,
    ) = self.param
```
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
deepmd/pd/model/task/fitting.py (1)
142-147: Bug: using Tensor.device in Paddle (should be place) causes runtime error
aparam_std.device is invalid in Paddle; use env.DEVICE or aparam_std.place. This will throw at stats time. Apply:
- aparam_std = paddle.where(
-     aparam_std < protection,
-     paddle.to_tensor(
-         protection, dtype=aparam_std.dtype, place=aparam_std.device
-     ),
-     aparam_std,
- )
+ aparam_std = paddle.where(
+     aparam_std < protection,
+     paddle.to_tensor(protection, dtype=aparam_std.dtype, place=env.DEVICE),
+     aparam_std,
+ )
🧹 Nitpick comments (8)
deepmd/tf/fit/ener.py (1)
122-123: Clarify TF limitation in docstring. Consider stating "Passing a non-None value will raise ValueError in the TensorFlow backend; the field is only serialized for cross-backend compatibility." This avoids ambiguity.
source/tests/consistent/fitting/test_ener.py (1)
196-208: Guard TF path explicitly. Optionally assert default_fparam is None in build_tf to fail fast if skip_tf is bypassed.
Apply:
  def build_tf(self, obj: Any, suffix: str) -> tuple[list, dict]:
@@
-     return self.build_tf_fitting(
+     # TF backend does not support default_fparam; ensure it isn't used here.
+     assert default_fparam is None, "TF backend does not support default_fparam"
+     return self.build_tf_fitting(

deepmd/jax/fitting/fitting.py (1)
31-47: Verify default_fparam vs. default_fparam_tensor mapping and JAX conversion
- In the DP-model's general_fitting.py, both default_fparam (a Python list) and then default_fparam_tensor (an np.ndarray built via np.array(self.default_fparam, dtype=self.prec)) are assigned. The JAX setter in deepmd/jax/fitting/fitting.py only special-cases "default_fparam_tensor", so a raw Python list for default_fparam will still be attached to the module (unused) before the properly converted JAX array is set for default_fparam_tensor.
  • If you want to avoid any exposure of the unconverted list on the JAX side (and ensure naming is uniform across backends), consider extending setattr_for_general_fitting to also catch "default_fparam", e.g. map it directly into to_jax_array + ArrayAPIVariable (or normalize it upstream before module construction).
- The to_jax_array helper is defined as:

      def to_jax_array(array: Optional[np.ndarray]) -> Optional[jnp.ndarray]:
          if array is None:
              return None
          return jnp.array(array)

  so to_jax_array(None) is indeed a no-op (returns None, no exception). For non-None, it preserves the NumPy dtype (which was set upstream via dtype=self.prec) and the shape (numb_fparam,). No further dtype or shape coercion is required here; enforce any desired precision at the point where the NumPy array is first constructed.

Files to review:
- deepmd/jax/fitting/fitting.py, around setattr_for_general_fitting
- deepmd/dpmodel/fitting/general_fitting.py, around the default_fparam → default_fparam_tensor logic

deepmd/pd/model/task/fitting.py (5)
186-190: Clarify unsupported default_fparam and prevent silent misconfiguration. Docstring says PD does not support default_fparam, but the class accepts/serializes it. Add a runtime warning (and light validation) so users aren't misled when passing it.
Apply:
@@
      use_aparam_as_mask: bool = False,
-     default_fparam: Optional[list[float]] = None,
+     default_fparam: Optional[list[float]] = None,
      **kwargs,
@@
-     self.default_fparam = default_fparam
+     self.default_fparam = default_fparam
+     if default_fparam is not None:
+         if self.numb_fparam == 0:
+             log.warning(
+                 "default_fparam is provided but numb_fparam=0. It will be ignored in Paddle backend."
+             )
+         elif len(default_fparam) != self.numb_fparam:
+             raise ValueError(
+                 f"default_fparam length {len(default_fparam)} != numb_fparam {self.numb_fparam}"
+             )
+         else:
+             log.warning(
+                 "default_fparam is currently ignored at runtime in Paddle backend; it is serialized only."
+             )
240-241: Validate and warn on default_fparam usage in PD backend. Without checks, a mismatched-length default can slip through and confuse users when PD still requires fparam at runtime.
(Use the diff from the previous comment; it covers this location.)
Also applies to: 253-254
557-560: Nit: error message uses braces without f-string. The message prints literals instead of values and passes multiple args to ValueError. Make it a single f-string.
Apply:
- raise ValueError(
-     "get an input fparam of dim {fparam.shape[-1]}, ",
-     "which is not consistent with {self.numb_fparam}.",
- )
+ raise ValueError(
+     f"get an input fparam of dim {fparam.shape[-1]}, which is not consistent with {self.numb_fparam}."
+ )
224-236: Avoid mutable default arguments
neuron and exclude_types here and in reinit_exclude use mutable defaults. Prefer None + assignment to prevent shared state across instances.
Apply:
-     neuron: list[int] = [128, 128, 128],
+     neuron: Optional[list[int]] = None,
@@
-     exclude_types: list[int] = [],
+     exclude_types: Optional[list[int]] = None,
@@
-     self.neuron = neuron
+     self.neuron = [128, 128, 128] if neuron is None else neuron
@@
-     self.reinit_exclude(exclude_types)
+     self.reinit_exclude([] if exclude_types is None else exclude_types)
And:
-     def reinit_exclude(
-         self,
-         exclude_types: list[int] = [],
-     ) -> None:
+     def reinit_exclude(self, exclude_types: Optional[list[int]] = None) -> None:
-         self.exclude_types = exclude_types
+         self.exclude_types = [] if exclude_types is None else exclude_types

Also applies to: 345-349
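The mutable-default pitfall behind this suggestion is easy to demonstrate in isolation. The class names below are hypothetical illustrations, not the project's classes:

```python
class BadFitting:
    def __init__(self, exclude_types=[]):      # one shared default list for ALL instances
        self.exclude_types = exclude_types

class GoodFitting:
    def __init__(self, exclude_types=None):    # None sentinel, fresh list per instance
        self.exclude_types = [] if exclude_types is None else exclude_types

a = BadFitting()
a.exclude_types.append(7)   # mutates the shared default object
b = BadFitting()            # b silently observes [7]

c = GoodFitting()
c.exclude_types.append(7)
d = GoodFitting()           # d keeps its own empty list
```

Default argument objects are created once at function definition time, which is why every BadFitting instance shares the same list.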
377-418: Gate serialization of default_fparam to preserve backward compatibility. Only emit the new default_fparam field when it's not None, so older loaders (still expecting @version: 3 without that kwarg) won't break. You can keep the version at 3, or bump to 4 if you decide to require the newer format.
• In deepmd/pd/model/task/fitting.py, update
serialize():def serialize(self) -> dict: - return { + data = { "@class": "Fitting", "@version": 3, "var_name": self.var_name, "ntypes": self.ntypes, "dim_descrpt": self.dim_descrpt, "neuron": self.neuron, "resnet_dt": self.resnet_dt, "numb_fparam": self.numb_fparam, "numb_aparam": self.numb_aparam, "dim_case_embd": self.dim_case_embd, - "default_fparam": self.default_fparam, "activation_function": self.activation_function, "precision": self.precision, "mixed_types": self.mixed_types, "nets": self.filter_layers.serialize(), "rcond": self.rcond, "exclude_types": self.exclude_types, "@variables": { @@ "use_aparam_as_mask": self.use_aparam_as_mask, - "spin": None, - } + "spin": None, + } + if self.default_fparam is not None: + data["default_fparam"] = self.default_fparam + return data• Review other backends (PT: deepmd/pt/model/task/fitting.py; TF: deepmd/tf/fit/… Fitting serializers) and apply the same pattern if they serialize
default_fparam.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (6)
deepmd/dpmodel/fitting/general_fitting.py(9 hunks)deepmd/jax/fitting/fitting.py(1 hunks)deepmd/pd/model/task/fitting.py(4 hunks)deepmd/pt/model/task/fitting.py(9 hunks)deepmd/tf/fit/ener.py(4 hunks)source/tests/consistent/fitting/test_ener.py(16 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
- deepmd/dpmodel/fitting/general_fitting.py
- deepmd/pt/model/task/fitting.py
🧰 Additional context used
🧬 Code graph analysis (1)
source/tests/consistent/fitting/test_ener.py (4)
source/tests/consistent/descriptor/test_se_e2_a.py (2)
skip_tf(122-130)skip_tf(366-367)source/tests/consistent/fitting/test_property.py (1)
skip_tf(106-107)source/tests/consistent/model/test_ener.py (2)
skip_tf(140-144)skip_tf(362-364)deepmd/pt/utils/utils.py (3)
to_numpy_array(207-207)to_numpy_array(211-211)to_numpy_array(214-230)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (29)
- GitHub Check: Build wheels for cp310-manylinux_aarch64
- GitHub Check: Build wheels for cp311-macosx_x86_64
- GitHub Check: Build wheels for cp311-manylinux_x86_64
- GitHub Check: Build wheels for cp311-manylinux_x86_64
- GitHub Check: Build wheels for cp311-win_amd64
- GitHub Check: Build wheels for cp311-macosx_arm64
- GitHub Check: Test Python (3, 3.9)
- GitHub Check: Test Python (6, 3.12)
- GitHub Check: Test Python (5, 3.12)
- GitHub Check: Test Python (6, 3.9)
- GitHub Check: Test Python (3, 3.12)
- GitHub Check: Test Python (4, 3.9)
- GitHub Check: Test Python (2, 3.9)
- GitHub Check: Test Python (1, 3.9)
- GitHub Check: Test Python (4, 3.12)
- GitHub Check: Test Python (1, 3.12)
- GitHub Check: Test Python (2, 3.12)
- GitHub Check: Test Python (5, 3.9)
- GitHub Check: Build C library (2.14, >=2.5.0,<2.15, libdeepmd_c_cu11.tar.gz)
- GitHub Check: Build C library (2.18, libdeepmd_c.tar.gz)
- GitHub Check: Build C++ (rocm, rocm)
- GitHub Check: Build C++ (cuda120, cuda)
- GitHub Check: Build C++ (cpu, cpu)
- GitHub Check: Build C++ (clang, clang)
- GitHub Check: Build C++ (cuda, cuda)
- GitHub Check: Test C++ (false)
- GitHub Check: Test C++ (true)
- GitHub Check: Analyze (c-cpp)
- GitHub Check: Analyze (python)
🔇 Additional comments (4)
deepmd/tf/fit/ener.py (2)
177-179: Signature change looks good; TF accepts but will reject non-None. No functional concerns with adding the kwarg at this layer for compatibility with arg schemas.
930-930: Serialize default_fparam for schema parity. Including the field (even as None) is fine for schema consistency with other backends.
source/tests/consistent/fitting/test_ener.py (1)
73-76: Threading and skip logic LGTM.
- Param threading of (numb_fparam, default_fparam) across data/evaluators is consistent.
- Backend skips align with current support: PT/DP/JAX/Strict exercise default path; TF/PD skipped when default_fparam is set.
Also applies to: 84-95, 106-110, 120-126, 133-140, 141-152, 181-189, 196-208, 215-227, 244-252, 260-271, 280-291, 300-312, 337-340, 357-360
deepmd/jax/fitting/fitting.py (1)
38-43: LGTM: default_fparam_tensor now gets JAX-converted and wrapped consistently. Matches existing handling for fparam/aparam stats and keeps setattr logic uniform.
Codecov Report
❌ Patch coverage is
Additional details and impacted files
@@           Coverage Diff           @@
## devel #4888 +/- ##
==========================================
- Coverage 84.28% 84.26% -0.03%
==========================================
Files 705 705
Lines 69097 69176 +79
Branches 3572 3573 +1
==========================================
+ Hits 58239 58290 +51
- Misses 9717 9745 +28
Partials 1141 1141
☔ View full report in Codecov by Sentry.
Signed-off-by: Duo <[email protected]>
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
deepmd/pt/utils/stat.py (2)
56-56: torch.device is not a context manager; this will raise at runtime. Replace the with-block with a plain block (remove the context) and keep using dict_to_device(sys_stat) for placement. DataLoader yields CPU tensors by default.
85-92: Fix out-of-scope variable in isinstance check. dd is from the previous loop; using it here is incorrect and can mis-detect types. Check the aggregated list element instead.
- elif isinstance(stat_data[dd], torch.Tensor):
+ elif isinstance(sys_stat[key][0], torch.Tensor):
      sys_stat[key] = torch.cat(sys_stat[key], dim=0)

deepmd/dpmodel/fitting/general_fitting.py (1)
365-375: Deserialization pops a non-existent key 'type' and will raise KeyError. Your serialize() doesn't include "type". Make popping safe.
Apply:
- data.pop("@class")
- data.pop("type")
+ data.pop("@class", None)
+ data.pop("type", None)
♻️ Duplicate comments (5)
deepmd/pt/utils/stat.py (1)
65-72: Avoid Tensor/ndarray truthiness in the find_fparam check; cast to float and pop safely. Comparing a torch scalar (or 0-d ndarray) to 0.0 inside an if causes an "ambiguous truth value" error. Cast first; also use pop(..., None).
- if (
-     "find_fparam" in stat_data
-     and "fparam" in stat_data
-     and stat_data["find_fparam"] == 0.0
- ):
-     # for model using default fparam
-     stat_data.pop("fparam")
-     stat_data.pop("find_fparam")
+ # For models using default fparam, drop fparam/find_fparam from stat payload
+ if "fparam" in stat_data and "find_fparam" in stat_data:
+     try:
+         find_fp = float(to_numpy_array(stat_data["find_fparam"]))
+     except Exception:
+         find_fp = None
+     if find_fp == 0.0:
+         stat_data.pop("fparam", None)
+         stat_data.pop("find_fparam", None)

deepmd/tf/fit/ener.py (1)
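The cast-before-compare idea in the stat.py comment above can be shown with plain NumPy. The helper below is a hypothetical sketch, not the repository's stat code:

```python
import numpy as np

def should_drop_fparam(stat_data):
    """True when both keys exist and the 0-d find_fparam flag equals 0.0."""
    if "fparam" not in stat_data or "find_fparam" not in stat_data:
        return False
    try:
        # Casting via float() avoids the "ambiguous truth value" trap that
        # truth-testing a tensor/array inside an `if` can trigger.
        return float(np.asarray(stat_data["find_fparam"])) == 0.0
    except (TypeError, ValueError):
        return False
```

Only after this predicate returns True would the caller pop the keys with pop(..., None).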
892-895: Ignore default_fparam in TF deserialize to avoid cross-backend load failures. Without this, models serialized by other backends with default_fparam will fail to load in TF. Same ask as earlier review.
Apply:
  def deserialize(cls, data: dict, suffix: str = ""):
@@
-     data = data.copy()
-     check_version_compatibility(data.pop("@version", 1), 4, 1)
+     data = data.copy()
+     check_version_compatibility(data.pop("@version", 1), 4, 1)
+     # TF does not support default_fparam; ignore it to allow cross-backend loading.
+     if data.get("default_fparam") is not None:
+         log.warning("Ignoring default_fparam in TensorFlow; explicit fparam will be required.")
+         data["default_fparam"] = None
      fitting = cls(**data)

deepmd/dpmodel/fitting/general_fitting.py (2)
186-193: Replace assert with explicit validation and normalize shape. Avoid S101 in production, give clear errors, and ensure a 1D dtype-consistent tensor. (Same as prior suggestion.)
Apply:
- if self.default_fparam is not None: - if self.numb_fparam > 0: - assert len(self.default_fparam) == self.numb_fparam, ( - "default_fparam length mismatch!" - ) - self.default_fparam_tensor = np.array(self.default_fparam, dtype=self.prec) - else: - self.default_fparam_tensor = None + if self.default_fparam is not None: + if self.numb_fparam > 0 and len(self.default_fparam) != self.numb_fparam: + raise ValueError( + f"default_fparam length mismatch: expected {self.numb_fparam}, got {len(self.default_fparam)}" + ) + self.default_fparam_tensor = np.asarray(self.default_fparam, dtype=self.prec).reshape(-1) + else: + self.default_fparam_tensor = None
430-435: Use xp.asarray and an explicit error when filling defaults (avoid assert). Prevents obscure failures when the default is missing and casts to the active array namespace. (Same as prior suggestion.)
Apply:
- if self.numb_fparam > 0 and fparam is None: - # use default fparam - assert self.default_fparam_tensor is not None - fparam = xp.tile( - xp.reshape(self.default_fparam_tensor, (1, self.numb_fparam)), (nf, 1) - ) + if self.numb_fparam > 0 and fparam is None: + # use default fparam + if self.default_fparam_tensor is None: + raise ValueError( + "fparam is required but not provided and no default_fparam is configured." + ) + default_fp = xp.asarray(self.default_fparam_tensor) + fparam = xp.tile(xp.reshape(default_fp, (1, self.numb_fparam)), (nf, 1))deepmd/pt/model/task/fitting.py (1)
446-449: has_default_fparam should also check numb_fparam > 0. Prevents reporting defaults when the fparam dimension is zero. (Same as earlier feedback.)
Apply:
- def has_default_fparam(self) -> bool:
-     """Check if the fitting has default frame parameters."""
-     return self.default_fparam is not None
+ def has_default_fparam(self) -> bool:
+     """Check if the fitting has default frame parameters."""
+     return self.default_fparam is not None and self.numb_fparam > 0
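The combined check suggested above reads naturally as a tiny standalone class. This is an illustrative sketch with a hypothetical class name, not the project's fitting class:

```python
class FittingFlags:
    def __init__(self, numb_fparam, default_fparam=None):
        self.numb_fparam = numb_fparam
        self.default_fparam = default_fparam

    def has_default_fparam(self):
        # A default only counts when the fitting actually consumes frame parameters.
        return self.default_fparam is not None and self.numb_fparam > 0
```

With numb_fparam == 0 the default is meaningless, so the flag stays False even if a list was passed.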
🧹 Nitpick comments (4)
deepmd/tf/fit/ener.py (1)
202-205: Rejecting default_fparam in TF is fine, but prefer NotImplementedError. Minor: NotImplementedError communicates "unsupported in this backend" better than ValueError. No behavior change.
Apply:
- if self.default_fparam is not None:
-     raise ValueError("default_fparam is not supported in TensorFlow.")
+ if self.default_fparam is not None:
+     raise NotImplementedError("default_fparam is not supported in TensorFlow.")

deepmd/dpmodel/fitting/general_fitting.py (1)
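The guard style under discussion can be sketched standalone. The class below is hypothetical and only illustrates raising NotImplementedError for an option a backend does not support:

```python
class TFOnlyFitting:
    """Toy backend class that accepts the kwarg for schema parity but rejects real use."""

    def __init__(self, default_fparam=None):
        if default_fparam is not None:
            # NotImplementedError signals "unsupported in this backend" more
            # precisely than ValueError, which implies a malformed value.
            raise NotImplementedError("default_fparam is not supported in TensorFlow.")
        self.default_fparam = default_fparam
```

Callers that never pass the option are unaffected; callers that do get an immediate, descriptive failure.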
234-237: Align has_default_fparam with fparam dimensionality. Return True only when there's a non-empty default and fparam is enabled.
Apply:
- def has_default_fparam(self) -> bool:
-     """Check if the fitting has default frame parameters."""
-     return self.default_fparam is not None
+ def has_default_fparam(self) -> bool:
+     """Check if the fitting has default frame parameters."""
+     return self.default_fparam is not None and self.numb_fparam > 0
583-586: Fix ValueError message formatting (currently passed as a tuple, and missing f-strings).Combine into one f-string so the message is correct.
Apply:
- raise ValueError( - "get an input fparam of dim {fparam.shape[-1]}, ", - "which is not consistent with {self.numb_fparam}.", - ) + raise ValueError( + f"get an input fparam of dim {fparam.shape[-1]}, which is not consistent with {self.numb_fparam}." + )
609-611: Unify ValueError formatting for aparam mismatch.Avoid tuple args; use a single f-string.
Apply:
- raise ValueError( - f"get an input aparam of dim {aparam.shape[-1]}, ", - f"which is not consistent with {self.numb_aparam}.", - ) + raise ValueError( + f"get an input aparam of dim {aparam.shape[-1]}, which is not consistent with {self.numb_aparam}." + )
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (19)
deepmd/dpmodel/fitting/general_fitting.py(10 hunks)deepmd/dpmodel/infer/deep_eval.py(1 hunks)deepmd/entrypoints/test.py(2 hunks)deepmd/infer/deep_eval.py(2 hunks)deepmd/pd/model/task/fitting.py(5 hunks)deepmd/pt/infer/deep_eval.py(1 hunks)deepmd/pt/model/atomic_model/base_atomic_model.py(1 hunks)deepmd/pt/model/atomic_model/dp_atomic_model.py(1 hunks)deepmd/pt/model/model/make_model.py(1 hunks)deepmd/pt/model/task/dipole.py(3 hunks)deepmd/pt/model/task/ener.py(2 hunks)deepmd/pt/model/task/fitting.py(10 hunks)deepmd/pt/model/task/invar_fitting.py(3 hunks)deepmd/pt/model/task/polarizability.py(3 hunks)deepmd/pt/model/task/property.py(2 hunks)deepmd/pt/train/training.py(2 hunks)deepmd/pt/utils/stat.py(1 hunks)deepmd/tf/fit/ener.py(6 hunks)deepmd/utils/argcheck.py(10 hunks)
🚧 Files skipped from review as they are similar to previous changes (13)
- deepmd/pt/model/atomic_model/base_atomic_model.py
- deepmd/entrypoints/test.py
- deepmd/pt/model/task/ener.py
- deepmd/dpmodel/infer/deep_eval.py
- deepmd/pt/train/training.py
- deepmd/pt/model/task/invar_fitting.py
- deepmd/utils/argcheck.py
- deepmd/pt/model/task/property.py
- deepmd/pd/model/task/fitting.py
- deepmd/infer/deep_eval.py
- deepmd/pt/model/task/dipole.py
- deepmd/pt/model/atomic_model/dp_atomic_model.py
- deepmd/pt/model/task/polarizability.py
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
**/*.py: Run ruff check and ruff format on all Python code before committing
Ensure Python code is formatted with ruff’s formatter
Files:
deepmd/pt/infer/deep_eval.pydeepmd/pt/model/model/make_model.pydeepmd/pt/utils/stat.pydeepmd/tf/fit/ener.pydeepmd/pt/model/task/fitting.pydeepmd/dpmodel/fitting/general_fitting.py
🧬 Code graph analysis (4)
deepmd/pt/infer/deep_eval.py (7)
deepmd/pt/model/task/fitting.py (1)
has_default_fparam(446-448)deepmd/pt/model/atomic_model/base_atomic_model.py (1)
has_default_fparam(138-140)deepmd/pt/model/atomic_model/dp_atomic_model.py (1)
has_default_fparam(341-343)deepmd/pt/model/model/make_model.py (1)
has_default_fparam(529-531)deepmd/dpmodel/atomic_model/dp_atomic_model.py (1)
has_default_fparam(236-238)deepmd/dpmodel/atomic_model/base_atomic_model.py (1)
has_default_fparam(91-93)deepmd/dpmodel/model/make_model.py (1)
has_default_fparam(567-569)
deepmd/pt/model/model/make_model.py (7)
deepmd/pt/model/task/fitting.py (1)
has_default_fparam(446-448)deepmd/pt/infer/deep_eval.py (1)
has_default_fparam(221-223)deepmd/pt/model/atomic_model/base_atomic_model.py (1)
has_default_fparam(138-140)deepmd/pt/model/atomic_model/dp_atomic_model.py (1)
has_default_fparam(341-343)deepmd/dpmodel/atomic_model/dp_atomic_model.py (1)
has_default_fparam(236-238)deepmd/dpmodel/atomic_model/base_atomic_model.py (1)
has_default_fparam(91-93)deepmd/dpmodel/model/make_model.py (1)
has_default_fparam(567-569)
deepmd/tf/fit/ener.py (2)
deepmd/utils/version.py (1)
check_version_compatibility(2-27)source/tests/consistent/fitting/test_ener.py (1)
data(79-98)
deepmd/pt/model/task/fitting.py (8)
source/tests/tf/common.py (2)
numb_fparam(906-907)numb_fparam(1107-1108)deepmd/pt/infer/deep_eval.py (1)
has_default_fparam(221-223)deepmd/pt/model/atomic_model/base_atomic_model.py (1)
has_default_fparam(138-140)deepmd/pt/model/atomic_model/dp_atomic_model.py (1)
has_default_fparam(341-343)deepmd/pt/model/model/make_model.py (1)
has_default_fparam(529-531)deepmd/dpmodel/atomic_model/dp_atomic_model.py (1)
has_default_fparam(236-238)deepmd/dpmodel/atomic_model/base_atomic_model.py (1)
has_default_fparam(91-93)deepmd/dpmodel/model/make_model.py (1)
has_default_fparam(567-569)
🪛 Ruff (0.12.2)
deepmd/tf/fit/ener.py
204-204: Avoid specifying long messages outside the exception class
(TRY003)
deepmd/pt/model/task/fitting.py
312-312: Use of assert detected
(S101)
555-555: Use of assert detected
(S101)
deepmd/dpmodel/fitting/general_fitting.py
188-188: Use of assert detected
(S101)
432-432: Use of assert detected
(S101)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (27)
- GitHub Check: Build wheels for cp311-macosx_arm64
- GitHub Check: Build wheels for cp311-manylinux_x86_64
- GitHub Check: Build wheels for cp311-macosx_x86_64
- GitHub Check: Build wheels for cp311-manylinux_x86_64
- GitHub Check: Build wheels for cp310-manylinux_aarch64
- GitHub Check: Build wheels for cp311-win_amd64
- GitHub Check: Build C++ (cpu, cpu)
- GitHub Check: Build C++ (cuda, cuda)
- GitHub Check: Build C++ (cuda120, cuda)
- GitHub Check: Build C++ (clang, clang)
- GitHub Check: Build C++ (rocm, rocm)
- GitHub Check: Build C library (2.14, >=2.5.0,<2.15, libdeepmd_c_cu11.tar.gz)
- GitHub Check: Build C library (2.18, libdeepmd_c.tar.gz)
- GitHub Check: Test Python (6, 3.12)
- GitHub Check: Test Python (5, 3.12)
- GitHub Check: Test Python (6, 3.9)
- GitHub Check: Test Python (2, 3.9)
- GitHub Check: Test Python (1, 3.9)
- GitHub Check: Test Python (3, 3.9)
- GitHub Check: Test Python (1, 3.12)
- GitHub Check: Test Python (4, 3.12)
- GitHub Check: Test Python (5, 3.9)
- GitHub Check: Test Python (4, 3.9)
- GitHub Check: Test Python (3, 3.12)
- GitHub Check: Test Python (2, 3.12)
- GitHub Check: Analyze (python)
- GitHub Check: Analyze (c-cpp)
🔇 Additional comments (7)
deepmd/tf/fit/ener.py (2)
177-179: Constructor accepts default_fparam for cross-backend compat — LGTM.
919-931: Version bump and serialization fields look consistent — LGTM.
deepmd/dpmodel/fitting/general_fitting.py (1)
327-342: Version bump and default_fparam serialization — LGTM.
deepmd/pt/infer/deep_eval.py (1)
221-223: Forwarder for has_default_fparam — LGTM. Matches underlying model API and multitask head handling.
deepmd/pt/model/model/make_model.py (1)
528-531: Expose has_default_fparam on the scripted model — LGTM. JIT export placement is appropriate.
deepmd/pt/model/task/fitting.py (2)
210-213: Docstring addition for default_fparam — LGTM.
389-429: Version bump and default_fparam serialization — LGTM.
Actionable comments posted: 6
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
deepmd/tf/fit/polar.py (2)
696-704: Pop default_fparam during deserialize to ensure cross-backend compatibility. Prevents errors when loading models saved by PT/DPModel with a non-None default_fparam.
@classmethod def deserialize(cls, data: dict, suffix: str): @@ - fitting = cls(**data) + # TF backend does not support default_fparam; ignore for compatibility. + data.pop("default_fparam", None) + fitting = cls(**data)
875-891: AttributeError risk: GlobalPolarFittingSeA references undefined numb_fparam/numb_aparam. input_requirement uses self.numb_fparam/self.numb_aparam but GlobalPolarFittingSeA never sets them; it should reference the inner PolarFittingSeA.
@property def input_requirement(self) -> list[DataRequirementItem]: """Return data requirements needed for the model input.""" data_requirement = [] - if self.numb_fparam > 0: + if self.polar_fitting.numb_fparam > 0: data_requirement.append( DataRequirementItem( - "fparam", self.numb_fparam, atomic=False, must=True, high_prec=False + "fparam", self.polar_fitting.numb_fparam, atomic=False, must=True, high_prec=False ) ) - if self.numb_aparam > 0: + if self.polar_fitting.numb_aparam > 0: data_requirement.append( DataRequirementItem( - "aparam", self.numb_aparam, atomic=True, must=True, high_prec=False + "aparam", self.polar_fitting.numb_aparam, atomic=True, must=True, high_prec=False ) ) return data_requirement
♻️ Duplicate comments (2)
deepmd/tf/fit/ener.py (1)
895-901: Avoid deserialization breakage when loading models saved with default_fparam. As-is, TF will raise in __init__ if a serialized payload from another backend contains a non-None default_fparam. Null it out before constructing.
def deserialize(cls, data: dict, suffix: str = ""): @@ - data = data.copy() - check_version_compatibility(data.pop("@version", 1), 4, 1) + data = data.copy() + check_version_compatibility(data.pop("@version", 1), 4, 1) + # TF does not support default_fparam; ignore it to allow cross-backend loading. + if data.get("default_fparam") is not None: + log.warning( + "Ignoring default_fparam in TensorFlow; explicit fparam will be required." + ) + data["default_fparam"] = None fitting = cls(**data)deepmd/dpmodel/fitting/property_fitting.py (1)
93-96: Avoid positional BC break; also tighten the type. Placing default_fparam before seed breaks positional callers; also use list[float].
default_fparambeforeseedbreaks positional callers; also uselist[float].- type_map: Optional[list[str]] = None, - default_fparam: Optional[list] = None, - # not used - seed: Optional[int] = None, + type_map: Optional[list[str]] = None, + # not used + seed: Optional[int] = None, + default_fparam: Optional[list[float]] = None,#!/bin/bash # Inspect call sites that might pass seed positionally. rg -nP 'PropertyFittingNet\s*\(' -C3
🧹 Nitpick comments (17)
deepmd/pd/model/task/invar_fitting.py (3)
176-185: Fix typo in docstring ("alculate" → "calculate").
- """Based on embedding net output, alculate total energy.
+ """Based on embedding net output, calculate total energy.
188-188: Remove stray, unused module-level annotation. This bare annotation appears accidental and may trip linters; the actual exclude_types handling is via __init__/super.
- exclude_types: list[int]
+ # (removed stray annotation)
150-151: Centralize schema version bounds and add logging in deserialize. Add module-level MIN_SCHEMA_VERSION and MAX_SCHEMA_VERSION constants and update deserialize to pop the version into a ver variable, use those constants in check_version_compatibility, and emit a debug log for easier diagnostics:
deserializeto pop the version into avervariable, use those constants incheck_version_compatibility, and emit a debug log for easier diagnostics:--- a/deepmd/pd/model/task/invar_fitting.py +++ b/deepmd/pd/model/task/invar_fitting.py @@ log = logging.getLogger(__name__) +# Centralize schema version bounds. +MIN_SCHEMA_VERSION: int = 1 +MAX_SCHEMA_VERSION: int = 4 @@ def deserialize(cls, data: dict) -> "GeneralFitting": data = copy.deepcopy(data) - check_version_compatibility(data.pop("@version", 1), 4, 1) + ver = int(data.pop("@version", 1)) + check_version_compatibility(ver, MAX_SCHEMA_VERSION, MIN_SCHEMA_VERSION) + log.debug("InvarFitting.deserialize: @version=%d", ver) return super().deserialize(data)deepmd/tf/fit/dipole.py (3)
81-84: Docstring claims default_fparam is usable, but code forbids it in TF. Clarify. Add an explicit note that this argument is accepted for forward-compatibility only and is not supported in TensorFlow (raises).
- default_fparam: list[float], optional - The default frame parameter. If set, when `fparam.npy` files are not included in the data system, - this value will be used as the default value for the frame parameter in the fitting net. + default_fparam: list[float], optional + Forward-compatibility only; not supported in TensorFlow (a NotImplementedError/ValueError is raised if provided). + In non-TF backends, when `fparam.npy` files are absent, this value may be used as the default frame parameter.
138-139: Store-only is fine, but add a tiny helper for API parity. Consider adding has_default_fparam(self) -> bool returning False here for TF parity with other backends.
      self.default_fparam = default_fparam
+ def has_default_fparam(self) -> bool:
+     return False
145-146: Use NotImplementedError for the unsupported-frontend guard; address Ruff TRY003. Switching to NotImplementedError better signals feature unavailability; keep the message short to appease Ruff.
- if default_fparam is not None: - raise ValueError("default_fparam is not supported in TensorFlow.") + if default_fparam is not None: + raise NotImplementedError("default_fparam unsupported in TensorFlow")deepmd/tf/fit/dos.py (2)
104-107: Docstring vs behavior mismatch: TF forbids default_fparam. Clarify the docstring to state TF does not support this param (kept for forward-compat).
- default_fparam: list[float], optional - The default frame parameter. If set, when `fparam.npy` files are not included in the data system, - this value will be used as the default value for the frame parameter in the fitting net. + default_fparam: list[float], optional + Forward-compatibility only; not supported in TensorFlow (raises if provided).
143-147: Prefer NotImplementedError for the unsupported-frontend guard; keep the message short. Also aligns with Ruff TRY003 guidance.
- self.default_fparam = default_fparam + self.default_fparam = default_fparam if dim_case_embd > 0: raise ValueError("dim_case_embd is not supported in TensorFlow.") - if default_fparam is not None: - raise ValueError("default_fparam is not supported in TensorFlow.") + if default_fparam is not None: + raise NotImplementedError("default_fparam unsupported in TensorFlow")deepmd/tf/fit/ener.py (1)
205-207: Silence Ruff TRY003 or move the message to an exception class. Ruff flags the inline message here. If you prefer to keep ValueError, add a noqa; otherwise introduce a small exception type.
- if self.default_fparam is not None: - raise ValueError("default_fparam is not supported in TensorFlow.") + if self.default_fparam is not None: + # TensorFlow backend intentionally does not support default_fparam. + raise ValueError("default_fparam unsupported in TensorFlow.") # noqa: TRY003deepmd/tf/fit/polar.py (2)
183-191: Fix copy-paste error in exception messages and shorten to satisfy Ruff TRY003

The error texts mention "dipole fitting" in the polar fitting; the long strings also trigger TRY003.
```diff
-        if numb_fparam > 0:
-            raise ValueError("numb_fparam is not supported in the dipole fitting")
-        if numb_aparam > 0:
-            raise ValueError("numb_aparam is not supported in the dipole fitting")
-        if dim_case_embd > 0:
-            raise ValueError("dim_case_embd is not supported in TensorFlow.")
+        if numb_fparam > 0:
+            raise ValueError("numb_fparam unsupported in TF polar fitting")
+        if numb_aparam > 0:
+            raise ValueError("numb_aparam unsupported in TF polar fitting")
+        if dim_case_embd > 0:
+            raise ValueError("dim_case_embd unsupported in TensorFlow")
```
639-651: Optional: omit always-None default_fparam from TF serialization

Since TF never uses default_fparam, consider not emitting it to reduce schema noise. Harmless to keep, but trimming simplifies artifacts.
deepmd/pt/model/task/property.py (1)
95-96: Unify type hints for default_fparam

Use list[float] for consistency with other modules.
```diff
-        default_fparam: Optional[list] = None,
+        default_fparam: Optional[list[float]] = None,
```

deepmd/pt/model/task/invar_fitting.py (1)
106-106: Avoid mutable default for exclude_types

Using [] as a default is risky; prefer None and set it in __init__.
```diff
-        exclude_types: list[int] = [],
+        exclude_types: Optional[list[int]] = None,
@@
-            exclude_types=exclude_types,
+            exclude_types=[] if exclude_types is None else exclude_types,
```

deepmd/pt/model/task/polarizability.py (2)
104-105: Unify type hints and propagation for default_fparam

Type as list[float] for consistency; the propagation looks correct.
```diff
-        default_fparam: Optional[list] = None,
+        default_fparam: Optional[list[float]] = None,
```

Also applies to: 146-147
100-100: Avoid mutable default for exclude_types

Same pattern; use a None default.
```diff
-        exclude_types: list[int] = [],
+        exclude_types: Optional[list[int]] = None,
@@
-            exclude_types=exclude_types,
+            exclude_types=[] if exclude_types is None else exclude_types,
```

deepmd/dpmodel/fitting/polarizability_fitting.py (1)
93-96: Clarify constraints for default_fparam in the docstring

Specify the length and dtype behavior to avoid misuse.
```diff
-    default_fparam: list[float], optional
-        The default frame parameter. If set, when `fparam.npy` files are not included in the data system,
-        this value will be used as the default value for the frame parameter in the fitting net.
+    default_fparam: list[float], optional
+        Default frame parameters used when `fparam.npy` is absent.
+        Length must equal `numb_fparam`; ignored if `numb_fparam == 0`.
+        Values are cast to the configured model precision.
```

deepmd/dpmodel/fitting/property_fitting.py (1)
68-71: Tighten docstring for default_fparam

Document the required length and the behavior when numb_fparam == 0.

```diff
-    default_fparam: list[float], optional
-        The default frame parameter. If set, when `fparam.npy` files are not included in the data system,
-        this value will be used as the default value for the frame parameter in the fitting net.
+    default_fparam: list[float], optional
+        Default frame parameters used when `fparam.npy` is absent.
+        Length must equal `numb_fparam`; ignored if `numb_fparam == 0`.
+        Values are cast to the configured model precision.
```
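Stepping back from the docstring wording: the behavior these comments describe is the auto-fill step in GeneralFitting, which tiles the configured default when no per-frame parameters are supplied. A minimal NumPy sketch of that behavior (the helper name `fill_fparam` is hypothetical; the real logic lives in the framework-specific `_forward_common`):

```python
import numpy as np

def fill_fparam(fparam, default_fparam, numb_fparam, nframes):
    """Return frame parameters, tiling the default when none are provided.

    Simplified stand-in for GeneralFitting's auto-fill: if fparam is
    missing but a default is configured, broadcast the default to
    shape (nframes, numb_fparam).
    """
    if fparam is not None:
        return np.asarray(fparam).reshape(nframes, numb_fparam)
    if numb_fparam > 0 and default_fparam is not None:
        if len(default_fparam) != numb_fparam:
            raise ValueError("default_fparam length must equal numb_fparam")
        return np.tile(np.asarray(default_fparam), (nframes, 1))
    return None

# Two frames, two frame parameters, no fparam.npy supplied:
filled = fill_fparam(None, [0.5, 1.0], numb_fparam=2, nframes=2)
```

Each frame gets the same default row, which is why the data requirement can drop `must=True` for `fparam` once `has_default_fparam()` reports True.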
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (18)
- deepmd/dpmodel/fitting/dipole_fitting.py (4 hunks)
- deepmd/dpmodel/fitting/dos_fitting.py (2 hunks)
- deepmd/dpmodel/fitting/ener_fitting.py (2 hunks)
- deepmd/dpmodel/fitting/invar_fitting.py (4 hunks)
- deepmd/dpmodel/fitting/polarizability_fitting.py (5 hunks)
- deepmd/dpmodel/fitting/property_fitting.py (5 hunks)
- deepmd/pd/model/task/ener.py (1 hunks)
- deepmd/pd/model/task/invar_fitting.py (1 hunks)
- deepmd/pt/model/task/dipole.py (4 hunks)
- deepmd/pt/model/task/dos.py (3 hunks)
- deepmd/pt/model/task/ener.py (2 hunks)
- deepmd/pt/model/task/invar_fitting.py (4 hunks)
- deepmd/pt/model/task/polarizability.py (5 hunks)
- deepmd/pt/model/task/property.py (4 hunks)
- deepmd/tf/fit/dipole.py (6 hunks)
- deepmd/tf/fit/dos.py (6 hunks)
- deepmd/tf/fit/ener.py (7 hunks)
- deepmd/tf/fit/polar.py (6 hunks)
🚧 Files skipped from review as they are similar to previous changes (6)
- deepmd/dpmodel/fitting/ener_fitting.py
- deepmd/dpmodel/fitting/dos_fitting.py
- deepmd/pt/model/task/ener.py
- deepmd/pt/model/task/dipole.py
- deepmd/dpmodel/fitting/dipole_fitting.py
- deepmd/pt/model/task/dos.py
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
**/*.py: Run ruff check and ruff format on all Python code before committing
Ensure Python code is formatted with ruff’s formatter
Files:
- deepmd/pd/model/task/invar_fitting.py
- deepmd/pd/model/task/ener.py
- deepmd/pt/model/task/property.py
- deepmd/dpmodel/fitting/polarizability_fitting.py
- deepmd/pt/model/task/invar_fitting.py
- deepmd/pt/model/task/polarizability.py
- deepmd/dpmodel/fitting/invar_fitting.py
- deepmd/tf/fit/dipole.py
- deepmd/tf/fit/dos.py
- deepmd/tf/fit/polar.py
- deepmd/dpmodel/fitting/property_fitting.py
- deepmd/tf/fit/ener.py
🧬 Code graph analysis (12)
deepmd/pd/model/task/invar_fitting.py (1)
deepmd/utils/version.py (1)
check_version_compatibility(2-27)
deepmd/pd/model/task/ener.py (1)
deepmd/utils/version.py (1)
check_version_compatibility(2-27)
deepmd/pt/model/task/property.py (2)
deepmd/utils/version.py (1)
check_version_compatibility(2-27)
source/tests/consistent/fitting/test_ener.py (1)
data(79-98)
deepmd/dpmodel/fitting/polarizability_fitting.py (2)
source/tests/consistent/fitting/test_ener.py (1)
data(79-98)
deepmd/utils/version.py (1)
check_version_compatibility(2-27)
deepmd/pt/model/task/invar_fitting.py (2)
deepmd/utils/version.py (1)
check_version_compatibility(2-27)
source/tests/consistent/fitting/test_ener.py (1)
data(79-98)
deepmd/pt/model/task/polarizability.py (2)
source/tests/consistent/fitting/test_ener.py (1)
data(79-98)
deepmd/utils/version.py (1)
check_version_compatibility(2-27)
deepmd/dpmodel/fitting/invar_fitting.py (2)
deepmd/utils/version.py (1)
check_version_compatibility(2-27)
source/tests/consistent/fitting/test_ener.py (1)
data(79-98)
deepmd/tf/fit/dipole.py (2)
deepmd/utils/version.py (1)
check_version_compatibility(2-27)
source/tests/consistent/fitting/test_ener.py (1)
data(79-98)
deepmd/tf/fit/dos.py (3)
deepmd/utils/version.py (1)
check_version_compatibility(2-27)
source/tests/consistent/fitting/test_ener.py (1)
data(79-98)
source/tests/consistent/fitting/test_dos.py (1)
data(69-86)
deepmd/tf/fit/polar.py (2)
source/tests/tf/common.py (3)
numb_fparam(906-907)
numb_fparam(1107-1108)
numb_aparam(909-910)
source/tests/consistent/fitting/test_ener.py (1)
data(79-98)
deepmd/dpmodel/fitting/property_fitting.py (2)
deepmd/utils/version.py (1)
check_version_compatibility(2-27)
source/tests/consistent/fitting/test_ener.py (1)
data(79-98)
deepmd/tf/fit/ener.py (2)
deepmd/utils/version.py (1)
check_version_compatibility(2-27)
source/tests/consistent/fitting/test_ener.py (1)
data(79-98)
🪛 Ruff (0.12.2)
deepmd/tf/fit/dipole.py
140-140: Avoid specifying long messages outside the exception class
(TRY003)
142-142: Avoid specifying long messages outside the exception class
(TRY003)
144-144: Avoid specifying long messages outside the exception class
(TRY003)
146-146: Avoid specifying long messages outside the exception class
(TRY003)
deepmd/tf/fit/dos.py
145-145: Avoid specifying long messages outside the exception class
(TRY003)
147-147: Avoid specifying long messages outside the exception class
(TRY003)
deepmd/tf/fit/polar.py
184-184: Avoid specifying long messages outside the exception class
(TRY003)
186-186: Avoid specifying long messages outside the exception class
(TRY003)
188-188: Avoid specifying long messages outside the exception class
(TRY003)
190-190: Avoid specifying long messages outside the exception class
(TRY003)
deepmd/tf/fit/ener.py
207-207: Avoid specifying long messages outside the exception class
(TRY003)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (29)
- GitHub Check: Test Python (6, 3.12)
- GitHub Check: Test Python (5, 3.12)
- GitHub Check: Test Python (2, 3.12)
- GitHub Check: Test Python (2, 3.9)
- GitHub Check: Test Python (5, 3.9)
- GitHub Check: Test Python (6, 3.9)
- GitHub Check: Test Python (3, 3.12)
- GitHub Check: Test Python (1, 3.12)
- GitHub Check: Test Python (3, 3.9)
- GitHub Check: Test Python (1, 3.9)
- GitHub Check: Test Python (4, 3.9)
- GitHub Check: Test Python (4, 3.12)
- GitHub Check: Build C++ (clang, clang)
- GitHub Check: Build C++ (rocm, rocm)
- GitHub Check: Build C++ (cuda120, cuda)
- GitHub Check: Build C++ (cpu, cpu)
- GitHub Check: Build C++ (cuda, cuda)
- GitHub Check: Build wheels for cp310-manylinux_aarch64
- GitHub Check: Build wheels for cp311-macosx_x86_64
- GitHub Check: Build wheels for cp311-manylinux_x86_64
- GitHub Check: Build wheels for cp311-win_amd64
- GitHub Check: Build wheels for cp311-manylinux_x86_64
- GitHub Check: Build wheels for cp311-macosx_arm64
- GitHub Check: Analyze (c-cpp)
- GitHub Check: Analyze (python)
- GitHub Check: Test C++ (false)
- GitHub Check: Test C++ (true)
- GitHub Check: Build C library (2.18, libdeepmd_c.tar.gz)
- GitHub Check: Build C library (2.14, >=2.5.0,<2.15, libdeepmd_c_cu11.tar.gz)
🔇 Additional comments (24)
deepmd/pd/model/task/ener.py (1)
75-78: Bump version gates with constant and optional pops
- Define at module top:
```python
# Energy fitting serde compatibility window
SERDE_VERSION_MAX = 4
```

- Replace the literals in deserialize:

```diff
-        check_version_compatibility(data.pop("@version", 1), 4, 1)
-        data.pop("var_name")
-        data.pop("dim_out")
+        check_version_compatibility(data.pop("@version", 1), SERDE_VERSION_MAX, 1)
+        data.pop("var_name", None)
+        data.pop("dim_out", None)
```

- Confirm serializers emit `"@version": SERDE_VERSION_MAX`.
- Verify other fitting modules use matching version gates for consistency.
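For context, a self-contained sketch of how the suggested constant would interact with the compatibility gate. The `check_version_compatibility(version, max, min)` call shape mirrors `deepmd/utils/version.py` as referenced above, but `SERDE_VERSION_MAX` and this simplified `deserialize` are illustrative, not the actual module code:

```python
SERDE_VERSION_MAX = 4  # suggested module-level constant (illustrative)

def check_version_compatibility(version: int, maximum: int, minimum: int) -> None:
    """Reject serialized payloads outside the supported version window."""
    if not (minimum <= version <= maximum):
        raise ValueError(f"version {version} outside [{minimum}, {maximum}]")

def deserialize(data: dict) -> dict:
    # pop with a default keeps legacy, unversioned payloads loadable
    check_version_compatibility(data.pop("@version", 1), SERDE_VERSION_MAX, 1)
    data.pop("var_name", None)  # tolerate absent keys
    data.pop("dim_out", None)
    return data

state = deserialize({"@version": 4, "var_name": "energy", "weights": []})
```

With the `None` defaults, payloads written before these keys existed deserialize without a KeyError, which is the point of the suggested change.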
deepmd/tf/fit/dipole.py (3)
421-421: Version bump acknowledged.

Ensure downstream readers expect @version == 4 for the dipole TF fitting.
432-432: Serializing default_fparam is OK even if None.

Keeps the schema consistent across backends; no action needed.
479-479: Version-compat check looks correct.

Accepts legacy payloads and caps at 4.
deepmd/tf/fit/dos.py (4)
131-131: Signature addition is safe here.

Placed before **kwargs; unlikely to break positional calls.
688-689: Compat window updated to 4; good.

Legacy models still pass with a minimum version of 1.
715-716: Version bump to 4 is consistent.

Matches the deserialize cap.
726-726: Including default_fparam in serialization is fine.

Schema alignment across backends; a no-op for TF since it raises on use.
deepmd/tf/fit/ener.py (2)
180-181: Confirm no callers set default_fparam; it remains None, so the TF path stays safe.
922-934: TF EnerFitting serialization is correct

default_fparam is always None (enforced in __init__), and the version bump to 4 aligns with other TF fitting serializers.
deepmd/pt/model/task/property.py (1)
156-157: Version bump looks consistent

Serialize/deserialize updated to 5; aligns with the added schema fields upstream. No further action.
deepmd/pt/model/task/invar_fitting.py (3)
84-87: Doc addition for default_fparam is clear

The docs align with the broader PR introduction. Good.
110-111: Validate default_fparam length against numb_fparam

Ensure GeneralFitting enforces len(default_fparam) == numb_fparam when provided; otherwise raise early here.
Would you like me to add a guard here or confirm it’s handled in GeneralFitting?
Also applies to: 136-137
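The guard being asked about could look like the following standalone sketch (the function name and exact messages are hypothetical; whether the check belongs here or in GeneralFitting is the open question above):

```python
from typing import Optional

def validate_default_fparam(
    default_fparam: Optional[list], numb_fparam: int
) -> None:
    """Fail fast when the configured default does not match numb_fparam."""
    if default_fparam is None:
        return
    if numb_fparam == 0:
        raise ValueError("default_fparam given but numb_fparam is 0")
    if len(default_fparam) != numb_fparam:
        raise ValueError(
            f"len(default_fparam)={len(default_fparam)} "
            f"does not match numb_fparam={numb_fparam}"
        )

validate_default_fparam([0.1, 0.2], 2)  # consistent config: passes silently
```

Raising in __init__ rather than at first forward call surfaces the misconfiguration before training starts.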
154-155: Deserialize version check to 4 is OK

Assuming GeneralFitting carries the schema bump for default_fparam; no subclass-specific fields were added here.
deepmd/pt/model/task/polarizability.py (2)
79-82: Doc addition for default_fparam is clear

Matches the new API surface.
203-204: Version bump to 5 and the matching check look good

Aligns with the added fields in serialization.
Also applies to: 214-215
deepmd/dpmodel/fitting/invar_fitting.py (3)
113-116: Doc addition for default_fparam is accurate

Good alignment with DPModel's role.
144-145: Forwarding default_fparam to the base is correct

No issues spotted.
Also applies to: 180-181
193-194: Version compatibility confirmed
GeneralFitting.serialize sets "@version": 4 (deepmd/dpmodel/fitting/general_fitting.py:331), matching the max version in InvarFitting.deserialize (deepmd/dpmodel/fitting/invar_fitting.py:193). No changes needed.

deepmd/dpmodel/fitting/polarizability_fitting.py (3)
123-124: Good: parameter placement preserves positional BC

Adding default_fparam after seed avoids breaking callers; the type is precise.
171-172: Correctly threaded to the base class

Forwarding default_fparam to GeneralFitting looks right.
208-208: LGTM: backward-compatible deserialization

Using data.pop("@version", 1) keeps old unversioned artifacts loadable.

deepmd/dpmodel/fitting/property_fitting.py (2)
117-118: Good: parameter forwarded to the base

default_fparam is correctly passed to InvarFitting.
157-159: Confirm protocol-5 support and version consistency
The property_fitting serializer now emits @version = 5, yet all other fitting modules (general_fitting, pd/task/fitting, tf/fit, etc.) still use version 4. Verify that your loaders accept version 5 for property fitting, and decide whether to bump the other serializers or keep them at 4.
Summary by CodeRabbit
New Features
Behavior Changes
Documentation
Tests