Hi, I work on instance segmentation, but I am finding it hard to run the evaluation with your code. My setup works fine with COCO-style evaluation.
First, how can I obtain the gtFine_instanceIds.png files referenced at line 658 of evaluate_instance_segmentation.py? createLabels.py only produces gtFine_labelIds.png.
Second, what is args.gtInstancesFile at line 73 of evaluate_instance_segmentation.py?
Thanks very much.
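For reference, the gtFine_instanceIds.png files ship with the official Cityscapes gtFine annotation package, and the cityscapesscripts repository has a preparation script (json2instanceImg.py) that can regenerate them from the polygon JSON. As for args.gtInstancesFile: in the official evaluation script it points, if I recall correctly, to a cached gtInstances.json that is created automatically on the first run, so this repo's script may behave the same way. If you ever need to build instance-ID images yourself, below is a minimal sketch of the encoding convention only; the file name and the mask/label inputs are hypothetical placeholders.

```python
# Minimal sketch of the Cityscapes instance-ID encoding, assuming numpy/PIL.
# The inputs (a semantic label-ID map plus per-object binary masks) are
# hypothetical; "stuff" pixels keep their plain label ID, while each "thing"
# pixel gets label_id * 1000 + running index, which is the Cityscapes convention.
import numpy as np
from PIL import Image

def encode_instance_ids(label_id_map, object_masks):
    """label_id_map: (H, W) uint8 array of Cityscapes label IDs.
    object_masks: list of (label_id, (H, W) bool mask) tuples for "thing" objects.
    Returns an int32 array in the *_instanceIds.png encoding."""
    instance_ids = label_id_map.astype(np.int32)
    for idx, (label_id, mask) in enumerate(object_masks):
        instance_ids[mask] = label_id * 1000 + idx
    return instance_ids

# Usage sketch: save as a 32-bit PNG so IDs above 255 survive.
# ids = encode_instance_ids(labels, masks)
# Image.fromarray(ids).save("frankfurt_000000_000294_gtFine_instanceIds.png")
```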
However, I find that if I evaluate the ground truth of the validation set with this code, I only get 0.45 mAP (0.5-0.95), and I am sure there is no problem with my code.
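One way to narrow this down (a rough sanity check, with hypothetical paths): feeding the ground truth back in as predictions should give AP very close to 1.0, so a much lower score usually means the masks or IDs are not round-tripping through the conversion. Checking the IoU of one converted mask directly against the GT instances tends to show whether masks are shifted, resized, or mapped to the wrong class:

```python
# Sanity-check sketch, assuming numpy/PIL; both file names are hypothetical.
import numpy as np
from PIL import Image

gt = np.array(Image.open("frankfurt_000000_000294_gtFine_instanceIds.png"))
pred = np.array(Image.open("my_converted_mask.png")) > 0   # one converted binary mask

for inst_id in np.unique(gt):
    if inst_id < 1000:        # values below 1000 are "stuff" pixels, no instance ID
        continue
    gt_mask = gt == inst_id
    iou = np.logical_and(gt_mask, pred).sum() / np.logical_or(gt_mask, pred).sum()
    print(inst_id, inst_id // 1000, iou)   # instance ID, class ID, overlap with the mask
```

If no GT instance reaches an IoU near 1.0 for a mask that was produced from the ground truth itself, the conversion step is the culprit rather than the evaluation.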
I am unable to generate the prediction text files in the required format. I have also generated COCO-style predictions in JSON format. Could you shed some light on how you managed to do this?
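In case it helps, the Cityscapes-style instance evaluation expects, for each image, a .txt file where every line gives the relative path to a binary mask PNG, a Cityscapes label ID, and a confidence. Below is a rough sketch of converting COCO-style RLE predictions from a JSON file into that layout; the paths, the image-ID to file-name mapping, and CATEGORY_TO_LABEL_ID are assumptions you would need to adapt to your own setup.

```python
# Sketch: COCO-style JSON predictions -> per-image Cityscapes-style .txt files.
# Assumes each prediction has "image_id", "category_id", "score", and a
# compressed-RLE "segmentation"; all names and the category mapping are hypothetical.
import json, os
import numpy as np
from PIL import Image
from pycocotools import mask as mask_utils

CATEGORY_TO_LABEL_ID = {1: 24, 2: 25, 3: 26}   # hypothetical: person, rider, car
OUT_DIR = "results"
os.makedirs(OUT_DIR, exist_ok=True)

with open("predictions.json") as f:             # hypothetical path
    predictions = json.load(f)

lines_per_image = {}
for i, p in enumerate(predictions):
    image_name = str(p["image_id"])              # adapt to your image naming
    binary_mask = mask_utils.decode(p["segmentation"])   # RLE -> (H, W) uint8
    mask_name = f"{image_name}_{i}.png"
    Image.fromarray(binary_mask * 255).save(os.path.join(OUT_DIR, mask_name))
    label_id = CATEGORY_TO_LABEL_ID[p["category_id"]]
    lines_per_image.setdefault(image_name, []).append(
        f"{mask_name} {label_id} {p['score']:.6f}")

# One .txt per image; each line: <relative mask png> <labelID> <confidence>.
for image_name, lines in lines_per_image.items():
    with open(os.path.join(OUT_DIR, image_name + ".txt"), "w") as f:
        f.write("\n".join(lines) + "\n")
```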