How to transform a model into ONNX format? #21
Comments
So I successfully transformed the model into ONNX format, but both its flash and SRAM usage exceed my board's limits. Is this because I lack TinyEngine, or did I not compress the model properly?
Hi, thanks for reaching out. Is the converted ONNX file quantized to int8? Quantization will significantly reduce memory usage. The tflite file should be quantized, so maybe you can try it and see if it works.
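For reference, here is a minimal sketch of full-integer post-training quantization with the TFLite converter. The `saved_model_dir` path, the input shape, and the calibration loop are assumptions; substitute your own model path and a few hundred representative input samples:

```python
import numpy as np
import tensorflow as tf

# Sketch: full int8 post-training quantization via TFLite.
# "saved_model_dir" is a placeholder for your exported model.
converter = tf.lite.TFLiteConverter.from_saved_model("saved_model_dir")
converter.optimizations = [tf.lite.Optimize.DEFAULT]

def representative_dataset():
    # Yield real (or realistic) inputs so the converter can calibrate
    # activation ranges; the shape here is a hypothetical example.
    for _ in range(100):
        yield [np.random.rand(1, 96, 96, 3).astype(np.float32)]

converter.representative_dataset = representative_dataset
# Force int8 for weights, activations, and model I/O.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_model = converter.convert()
with open("model_int8.tflite", "wb") as f:
    f.write(tflite_model)
```

Full-integer quantization typically cuts weight storage to roughly a quarter of the float32 size, which is why it matters so much for flash and SRAM budgets on MCU-class boards.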
Hi, the memory usage is dependent on the system stack. We used TinyEngine in our experiments, which will have a different memory usage compared to Cube AI, so it should be normal if the peak memory does not align. The 320KB model should fit the device with TinyEngine, but may not for Cube AI. |
Hi, that definitely makes sense. Thanks for your response. And then we …
Hi, I noticed that there are ckpt and json files here, and I'm trying to convert the model into ONNX format, but I can't find the corresponding neural network definition file. So I'm wondering how I could implement this.
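Assuming the `.ckpt` file holds a PyTorch `state_dict` and the `.json` file holds the architecture config, a minimal export sketch looks like the following. `build_model_from_config` is a hypothetical helper standing in for whatever model-building code the repo provides, and the file names and input shape are placeholders:

```python
import json
import torch

# Sketch: rebuild the network from its config, load the checkpoint
# weights, and export to ONNX. All names below are assumptions.
with open("model_config.json") as f:        # hypothetical file name
    config = json.load(f)

model = build_model_from_config(config)     # hypothetical builder
ckpt = torch.load("model.ckpt", map_location="cpu")
# Some checkpoints nest the weights under a "state_dict" key.
state_dict = ckpt.get("state_dict", ckpt) if isinstance(ckpt, dict) else ckpt
model.load_state_dict(state_dict)
model.eval()

# Export with a dummy input matching the model's expected resolution
# (the shape here is an assumption).
dummy_input = torch.randn(1, 3, 96, 96)
torch.onnx.export(model, dummy_input, "model.onnx", opset_version=11)
```

The key point is that the checkpoint alone is not enough: the network definition must be reconstructed in code before the weights can be loaded and exported.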