From the paper:

> NLF-S has a batched throughput of 410 fps and unbatched throughput of 79 fps on an Nvidia RTX 3090 GPU. For NLF-L these are 109 fps and 41 fps respectively.
What inference setup was used to achieve these metrics? On a 3090, using the demo code, my unbatched inference throughput is significantly lower.
Hi, thanks for the interest. These numbers measure the speed of the core model that takes image crops as input. The demo uses the more convenient wrapper model, which additionally handles homography reprojection, rebatching the crops, back-transforming the results, and possibly running the person detector, so it is expected to be slower. I'll prepare example code for benchmarking the core model later.
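In the meantime, a minimal timing sketch along these lines may help reproduce crop-level throughput. It is not the authors' benchmark script: it just times repeated forward passes of a crop-level model on random fixed-size crops, once at a large batch size ("batched") and once at batch size 1 ("unbatched"). The `torch.hub.load` line and the 256x256 crop size are assumptions; substitute however you actually obtain the core NLF crop model and its expected input resolution.

```python
# Generic PyTorch throughput sketch (assumptions noted inline), not the official benchmark.
import time
import torch


def measure_fps(model, batch_size, crop_size=256, n_warmup=10, n_iters=50):
    """Return images/second for repeated forward passes at a fixed batch size."""
    device = torch.device('cuda')
    dummy = torch.rand(batch_size, 3, crop_size, crop_size, device=device)

    with torch.inference_mode():
        # Warm-up iterations so cuDNN autotuning and memory allocation
        # don't count toward the measured time.
        for _ in range(n_warmup):
            model(dummy)
        torch.cuda.synchronize()

        start = time.perf_counter()
        for _ in range(n_iters):
            model(dummy)
        torch.cuda.synchronize()  # wait for all queued GPU work before stopping the clock
        elapsed = time.perf_counter() - start

    return batch_size * n_iters / elapsed


if __name__ == '__main__':
    # Assumption: replace this with the actual core crop model (not the demo
    # wrapper), loaded in whatever way the repo provides.
    model = torch.hub.load('isarandi/nlf', 'nlf_s').cuda().eval()

    print(f'batched   (bs=64): {measure_fps(model, 64):.1f} fps')
    print(f'unbatched (bs=1):  {measure_fps(model, 1):.1f} fps')
```

The key points when comparing against the paper numbers are to exclude detection, cropping and back-transformation from the timed region, to synchronize the GPU before reading the clock, and to keep the batch size fixed across iterations.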