Long latency using Yolo tflite model #96
Comments
Guten Tag, Hans here! 🍻 Thanks for reporting your issue. It sounds like you have a few challenges with your YOLO model's latency and low confidence scores. However, I notice that you haven't included any logs that would help mrousavy diagnose the problem further. For troubleshooting, please provide logs from your React Native app. You can gather logs on Android using […]. Keep us updated, and we look forward to your logs!
You can try to export the model with quantization, changing the data type to uint8, but I don't think this library works with quantized models.
@francesco-clementi-92 Quantization didn't work for the YOLO model. The best I could do was to decrease the image size down to 320*256 or something similar, but inference is still around 57 ms per frame, which isn't fast enough for real-time analysis. My phone isn't top of the market either (Xiaomi Poco X5 Pro), so I guess this could go faster on a better phone. I did solve my low inference score problem: it was due to the rotation of the image. By default, the camera image is rotated 90°, and my model can only detect objects at a specific rotation angle.
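For anyone hitting the same rotation issue, here is a minimal sketch in plain TypeScript of rotating the input 90° clockwise before feeding it to the model. It assumes a packed RGB buffer (3 bytes per pixel) with known width and height; in a real app the camera orientation settings or the resize plugin may be a better place to handle this.

```ts
// Hedged sketch: rotate a packed RGB (3 bytes/pixel) buffer 90° clockwise.
// `width`/`height` describe the source frame; the result has swapped dimensions.
function rotateRgb90CW(src: Uint8Array, width: number, height: number): Uint8Array {
  const dst = new Uint8Array(src.length);
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      const srcIdx = (y * width + x) * 3;
      // After a 90° clockwise rotation the new image is height×width,
      // and source pixel (x, y) lands at (height - 1 - y, x).
      const dstX = height - 1 - y;
      const dstY = x;
      const dstIdx = (dstY * height + dstX) * 3;
      dst[dstIdx] = src[srcIdx];         // R
      dst[dstIdx + 1] = src[srcIdx + 1]; // G
      dst[dstIdx + 2] = src[srcIdx + 2]; // B
    }
  }
  return dst;
}
```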
@manu13008 What do you mean by "by default, the image of the camera is rotated 90 degrees"? I was going to implement this package with the resize plugin, but the resize is not going to work asynchronously, so I stopped. Be aware that with a quantized model you usually have to dequantize the input by multiplying it with the quantization scale.
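As a rough illustration of that dequantization step (plain TypeScript; the scale and zero-point come from the tensor's quantization parameters in the model, and the names here are placeholders rather than a specific API):

```ts
// Hedged sketch: map a quantized uint8 tensor back to real values with
// real = (quantized - zeroPoint) * scale.
function dequantize(q: Uint8Array, scale: number, zeroPoint: number): Float32Array {
  const out = new Float32Array(q.length);
  for (let i = 0; i < q.length; i++) {
    out[i] = (q[i] - zeroPoint) * scale;
  }
  return out;
}

// Example usage (illustrative values):
// const scores = dequantize(outputs[0] as Uint8Array, 0.0039, 0);
```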
Hi all,
I exported a generic (and trained) YOLOv8n model to TFLite format and loaded it in a React Native app (no Expo).
After understanding the output format, I have been trying to run real-time inference, but I am facing 2 issues:

1. I have a very long latency even though my model weighs only 6 MB. The inference time is about 200 ms (which is long, but I guess is explained by the 640-pixel input size), but the weirdest part is that the camera freezes for much longer than that. For comparison, I also used the EfficientDet model from the example and it worked fine in real time with very low latency. I actually have no idea what could cause that issue.

2. Sorry if this is not completely related to this repo, but it might be: my confidence scores from the outputs are very, very low (0.0000123) and consequently not exploitable. I suspect a wrong input frame is being passed during inference, which could explain the low scores, as I'm pretty confident about what I record with my camera. Any insights into what I could possibly be doing wrong in that case? (A sketch of decoding the output follows this list.)
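For reference, a hedged sketch (not the code from this post) of how a confidence could be read out of the raw YOLOv8 TFLite output. It assumes the common [1, 84, 8400] channel-first layout, i.e. 4 box values followed by 80 class scores per candidate; other exports may use a different shape, so check the output tensor first.

```ts
// Hedged sketch: find the best-scoring candidate in a YOLOv8 output tensor
// flattened from [84, 8400] (channel-first, row-major).
function bestDetection(output: Float32Array, numClasses = 80, numCandidates = 8400) {
  let best = { score: 0, classId: -1, index: -1 };
  for (let i = 0; i < numCandidates; i++) {
    for (let c = 0; c < numClasses; c++) {
      // Class score for channel (4 + c), candidate i.
      const score = output[(4 + c) * numCandidates + i];
      if (score > best.score) best = { score, classId: c, index: i };
    }
  }
  if (best.index < 0) return { ...best, box: [] as number[] };
  // Box (cx, cy, w, h) for the winning candidate sits in channels 0..3.
  const box = [0, 1, 2, 3].map((ch) => output[ch * numCandidates + best.index]);
  return { ...best, box };
}
```

If the scores only look sane after rotating or renormalizing the input, that points at a preprocessing problem rather than at the decoding.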
Here is the code:
My return JSX:
Thanks for the help!
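Since the code from the post did not survive above, here is a minimal sketch of the kind of frame-processor setup being discussed. It assumes react-native-fast-tflite's useTensorflowModel/runSync API together with react-native-vision-camera and vision-camera-resize-plugin; the runAtTargetFps throttle and the resize option names may differ between plugin versions, and the model path and 320x320 input size are illustrative assumptions, not a drop-in fix.

```tsx
// Hedged sketch: YOLOv8n TFLite inference inside a vision-camera frame processor.
import React from 'react';
import { StyleSheet } from 'react-native';
import {
  Camera,
  useCameraDevice,
  useFrameProcessor,
  runAtTargetFps,
} from 'react-native-vision-camera';
import { useTensorflowModel } from 'react-native-fast-tflite';
import { useResizePlugin } from 'vision-camera-resize-plugin';

export function DetectionScreen() {
  const device = useCameraDevice('back');
  const detection = useTensorflowModel(require('./assets/yolov8n.tflite'));
  const model = detection.state === 'loaded' ? detection.model : undefined;
  const { resize } = useResizePlugin();

  const frameProcessor = useFrameProcessor(
    (frame) => {
      'worklet';
      if (model == null) return;
      // Throttle inference so the preview keeps running even if a single
      // inference takes ~200 ms (at most a few inferences per second).
      runAtTargetFps(4, () => {
        'worklet';
        // Downscale the frame to the model's input size; pixelFormat/dataType
        // must match what the exported model expects (float32 vs. uint8).
        const input = resize(frame, {
          scale: { width: 320, height: 320 },
          pixelFormat: 'rgb',
          dataType: 'float32',
        });
        const outputs = model.runSync([input]);
        console.log('output tensor length:', outputs[0].length);
      });
    },
    [model]
  );

  if (device == null) return null;
  return (
    <Camera
      style={StyleSheet.absoluteFill}
      device={device}
      isActive={true}
      frameProcessor={frameProcessor}
    />
  );
}
```

Throttling the inference rather than running it on every frame is one way to keep the preview from freezing while a single YOLO inference takes around 200 ms; running the model on a GPU or other delegate, if the library and device support it, is another avenue worth checking.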