I retrained the model with a batch size of 16. Then I converted it to ONNX.
ONNX Sample
```python
import os
import cv2 as cv
import numpy as np
import onnxruntime as ort
from tqdm import tqdm

ort_session = ort.InferenceSession("yunet_16.onnx", providers=['CUDAExecutionProvider'])
input_name = ort_session.get_inputs()[0].name

org_images, img_list = [], []
for i_path in tqdm(os.listdir(images_path)):
    for i in os.listdir(os.path.join(images_path, i_path)):
        if i.lower().endswith(('.png', '.jpg', '.jpeg', '.tiff', '.bmp', '.gif')):
            img = cv.imread(os.path.join(images_path, i_path, i))
            image = cv.resize(img, (128, 128), interpolation=cv.INTER_LINEAR)
            image = np.transpose(image, [2, 0, 1])  # HWC -> CHW
            org_images.append(img)
            img_list.append(image)
            if len(img_list) >= 16:
                input_data = np.array(img_list, dtype=np.float32)
                loc, conf, iou = ort_session.run(None, {input_name: input_data})
```
The main problem: I cannot split the loc, conf, and iou values per image.
Example output shapes for 1 batch (16 images):

```
loc  (15040, 14)
conf (15040, 2)
iou  (15040, 2)
```

Expected shapes for 1 batch (16 images):

```
loc  (16, 15040, 14)
conf (16, 15040, 2)
iou  (16, 15040, 2)
```
Is something wrong here? Or does the model only use 1 batch? I don't get it.
Hello @hopeux, the current project does not support batch inference.
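Since batch inference is unsupported, a common fallback is to feed one image at a time and collect a `(loc, conf, iou)` triple per image. A minimal sketch, with a fake session standing in for `ort.InferenceSession` so the loop is runnable here (the input name and output sizes are hypothetical; with the real session the `run` call is identical):

```python
import numpy as np

class FakeSession:
    """Stand-in for ort.InferenceSession; returns zero tensors with
    YuNet-like output shapes for a single-image input."""
    def run(self, output_names, feed):
        return (np.zeros((15040, 14), dtype=np.float32),
                np.zeros((15040, 2), dtype=np.float32),
                np.zeros((15040, 2), dtype=np.float32))

session = FakeSession()

# Feed images one at a time instead of stacking 16 into one tensor.
images = [np.zeros((3, 128, 128), dtype=np.float32) for _ in range(4)]
results = []
for img in images:
    # img[None, ...] adds the leading batch axis of size 1
    loc, conf, iou = session.run(None, {"input": img[None, ...]})
    results.append((loc, conf, iou))  # one triple per image

print(len(results), results[0][0].shape)
```

This is slower than true batching, but the per-image outputs are unambiguous and need no splitting.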