
Inference Speed #22

Open
FrostyFridge opened this issue Mar 20, 2025 · 2 comments

@FrostyFridge
From the paper:

NLF-S has a batched throughput of 410 fps and unbatched throughput of 79 fps on an Nvidia RTX 3090 GPU. For NLF-L these are 109 fps and 41 fps respectively.

What inference setup was used to achieve these metrics? On a 3090, using the demo code, my unbatched inference throughput is significantly lower.

@isarandi
Owner

Hi, thanks for the interest. These numbers measure the speed of the core model alone, which takes image crops as input. The demo uses the more convenient wrapper model, which additionally handles homography reprojection, rebatching the crops, back-transforming the results, and possibly running the person detector. I'll prepare this example code later.
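Until that example code lands, one way to approximate the paper-style measurement is to time only the forward pass on pre-cropped inputs, excluding detection and reprojection. Below is a minimal, hypothetical timing sketch (not the authors' benchmark code): `fake_forward` is a stand-in for the core crop model's forward pass, and the batch size and warm-up counts are assumptions. With a real GPU model you would also synchronize the device before reading the clock.

```python
import time

def throughput_fps(run_batch, batch_size, n_iters=50, warmup=10):
    """Return frames/second for a callable that processes one batch of crops."""
    for _ in range(warmup):  # warm-up iterations exclude one-time setup costs
        run_batch()
    start = time.perf_counter()
    for _ in range(n_iters):
        run_batch()
    elapsed = time.perf_counter() - start
    return n_iters * batch_size / elapsed

# Placeholder standing in for the core model's forward pass on one batch.
# Replace with the actual model call (and a device synchronization) to
# measure real throughput.
def fake_forward():
    time.sleep(0.001)

batched = throughput_fps(fake_forward, batch_size=64)
unbatched = throughput_fps(fake_forward, batch_size=1)
print(f"batched: {batched:.0f} fps, unbatched: {unbatched:.0f} fps")
```

The key point this illustrates: batched throughput divides the per-call overhead across many crops, which is why the batched and unbatched numbers in the paper differ so much.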

@FrostyFridge
Author

I see, thanks! I look forward to it.
