
Feature: Support ONNX conversion for models #97

Open
ogencoglu opened this issue Jan 8, 2025 · 2 comments · May be fixed by #179
Labels: enhancement (New feature or request), help wanted (Extra attention is needed)

Comments

@ogencoglu

Do you support ONNX conversion?

@LennartPurucker LennartPurucker added the help wanted Extra attention is needed label Jan 9, 2025
@LennartPurucker
Collaborator

Currently, the codebase does not support ONNX conversion.

We are looking into supporting this and have experimented with a few approaches; however, due to the new architecture, we still need to finalize it.

@AlexanderPfefferle do you have more information about this?

@LennartPurucker LennartPurucker changed the title ONNX models Feature: Support ONNX conversion for models Jan 9, 2025
@LennartPurucker LennartPurucker added the enhancement New feature or request label Jan 9, 2025
@AlexanderPfefferle
Contributor

#165 enabled ONNX export. The tests contain an example of how to export:

import io
import os

import numpy as np
import pytest
import torch
from torch import nn

from tabpfn import TabPFNClassifier


class ModelWrapper(nn.Module):
    def __init__(self, original_model):  # noqa: D107
        super().__init__()
        self.model = original_model

    def forward(
        self,
        X,
        y,
        single_eval_pos,
        only_return_standard_out,
        categorical_inds,
    ):
        return self.model(
            None,
            X,
            y,
            single_eval_pos=single_eval_pos,
            only_return_standard_out=only_return_standard_out,
            categorical_inds=categorical_inds,
        )


@pytest.mark.filterwarnings("ignore::torch.jit.TracerWarning")
def test_onnx_exportable_cpu(X_y: tuple[np.ndarray, np.ndarray]) -> None:
    if os.name == "nt":
        pytest.skip("onnx export is not tested on windows")
    X, y = X_y
    with torch.no_grad():
        classifier = TabPFNClassifier(n_estimators=1, device="cpu", random_state=42)
        # load the model so we can access it via classifier.model_
        classifier.fit(X, y)
        # this is necessary if cuda is available
        classifier.predict(X)
        # replicate the above call with random tensors of the same shape
        X = torch.randn(
            (X.shape[0] * 2, 1, X.shape[1] + 1),
            generator=torch.Generator().manual_seed(42),
        )
        y = (
            torch.rand(y.shape, generator=torch.Generator().manual_seed(42))
            .round()
            .to(torch.float32)
        )
        dynamic_axes = {
            "X": {0: "num_datapoints", 1: "batch_size", 2: "num_features"},
            "y": {0: "num_labels"},
        }
        torch.onnx.export(
            ModelWrapper(classifier.model_).eval(),
            (X, y, y.shape[0], True, []),
            io.BytesIO(),
            input_names=[
                "X",
                "y",
                "single_eval_pos",
                "only_return_standard_out",
                "categorical_inds",
            ],
            output_names=["output"],
            opset_version=17,  # using 17 since we use torch>=2.1
            dynamic_axes=dynamic_axes,
        )

Currently, you can only use the exported model with the exact same number of features as the example input used for exporting, but @LeoGrin is looking into that.

@LeoGrin LeoGrin linked a pull request Feb 19, 2025 that will close this issue