Apply two methods of latency measurement in inference #13
Labels
enhancement
New feature or request
The first measurement method works like the benchmark_app: it just runs through the networks without any real images. The second method loads real images and runs them through the networks, i.e. load, infer, load, infer.
The measurement method should be selectable as an option, either measuring both ways or only the benchmark_app way, in: tf2oda_inference_from_saved_model.py
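A minimal sketch of the two modes, assuming a hypothetical `infer_fn(image)` callable obtained from the loaded saved model; the names `--mode`, `input_shape`, and `image_paths` are illustrative, not part of the existing script:

```python
import argparse
import time

import numpy as np
from PIL import Image


def measure_benchmark_style(infer_fn, input_shape, runs=100):
    """benchmark_app-like mode: repeated inference on a synthetic input, no image loading."""
    dummy = np.random.randint(0, 255, size=input_shape, dtype=np.uint8)
    latencies = []
    for _ in range(runs):
        start = time.perf_counter()
        infer_fn(dummy)
        latencies.append(time.perf_counter() - start)
    return latencies


def measure_with_real_images(infer_fn, image_paths):
    """Load/infer mode: each iteration loads a real image, then runs inference on it."""
    latencies = []
    for path in image_paths:
        start = time.perf_counter()
        image = np.array(Image.open(path))  # load
        infer_fn(image)                     # infer
        latencies.append(time.perf_counter() - start)
    return latencies


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--mode", choices=["benchmark", "real", "both"], default="both")
    args = parser.parse_args()
    # infer_fn, input_shape, and image_paths would come from the saved model and
    # dataset handled by tf2oda_inference_from_saved_model.py; they are placeholders here.
```

Whether image-loading time is included in the reported latency of the second mode (as above) or timed separately is a design choice; the load and infer steps could also be timed individually if the difference matters.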