
Precision Issue with Linear Reference #1024

Open
tkreiman opened this issue Feb 21, 2025 · 0 comments
Labels
bug Something isn't working

Python version

3.12.8

fairchem-core version

1.4.1.dev13+g2e49a4ef

pytorch version

2.4.0

cuda version

12.1

Operating system version

No response

Minimal example

from fairchem.core.datasets import LmdbDataset
import torch
from torch_geometric.data import Batch

# cfg and path_to_lr are defined elsewhere: a dataset config and the path to a
# saved linear-reference checkpoint.
d = LmdbDataset(cfg)
linear_reference = torch.load(path_to_lr)['energy']

obj = d[100]
b = Batch.from_data_list([obj])
y = b.y

# Round trip: dereference the energy, then pass it back through the linear
# reference and compare to the original value, in float32 and in float64.
dereferenced = linear_reference.dereference(y.unsqueeze(0), b)
dereferenced_double = linear_reference.dereference(y.unsqueeze(0).double(), b)

print("Float")
print(y, linear_reference(dereferenced, b))
print("Diff meV:", (y - linear_reference(dereferenced, b)).abs() * 1000)
print("Double")
print(y, linear_reference(dereferenced_double, b))
print("Diff meV:", (y - linear_reference(dereferenced_double, b)).abs() * 1000)

Current behavior

Differences of up to 20 meV between the true energy and the round-tripped (dereferenced, then re-referenced) energy when using float. The difference goes away with double precision.
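
This looks like ordinary float32 rounding: total energies and per-element reference sums are on the order of thousands of eV, while float32 resolves only about 7 significant decimal digits, so removing and re-adding the reference sum can lose precision on the meV scale. Below is a minimal standalone sketch (not fairchem code; the reference values, atom counts, and total energy are made-up numbers) that reproduces the effect:

import torch

# Hypothetical per-element reference energies (eV) and atom counts per element.
refs = torch.tensor([-1000.0, -2000.0, -500.0], dtype=torch.float64)
counts = torch.tensor([20.0, 10.0, 30.0], dtype=torch.float64)
# "True" total energy (eV), also a made-up value.
y = torch.tensor(-35012.3456789, dtype=torch.float64)

for dtype in (torch.float32, torch.float64):
    ref_sum = (refs.to(dtype) * counts.to(dtype)).sum()
    dereferenced = y.to(dtype) - ref_sum   # remove the linear references
    rereferenced = dereferenced + ref_sum  # add them back
    diff_mev = (y - rereferenced.double()).abs() * 1000
    print(dtype, "round-trip error (meV):", diff_mev.item())

In float64 the round-trip error is essentially zero, while in float32 it is on the meV scale; with larger systems and larger reference magnitudes it can plausibly reach the ~20 meV reported above.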

Expected Behavior

A much smaller difference (if not zero), even when using float.


Relevant files to reproduce this bug

No response

@tkreiman tkreiman added the bug Something isn't working label Feb 21, 2025