
Evaluation Code for Instance Segmentation #3

Open
qqlu opened this issue Aug 8, 2018 · 2 comments
qqlu commented Aug 8, 2018

Hi, I work on instance segmentation. However, I find it hard to run the evaluation with your code, even though my setup works well with COCO-style evaluation.

Firstly, how can I get gtFine_instanceids.png, which is used at line 658 of evaluate_instance_segmentation.py? createLabels.py is only able to generate gtFine_labelids.png.

Secondly, what is args.gtInstancesFile at line 73 of evaluate_instance_segmentation.py?

Thanks very much.
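For reference, a common convention for such instance-ID PNGs (this is the Cityscapes-style encoding, an assumption here, not verified against this repo's createLabels.py) is that "stuff" pixels keep their class ID while each instance of a "thing" class is encoded as `class_id * 1000 + instance_index`, saved as a 16-bit PNG. A minimal sketch:

```python
import numpy as np

def encode_instance_ids(class_map, instance_masks):
    """Encode per-instance masks into one instance-ID map.

    Assumed Cityscapes-style convention: pixels of "stuff" classes keep
    their class ID; each instance of a "thing" class becomes
    class_id * 1000 + instance_index. The result would be written out
    as a 16-bit PNG named *_instanceids.png.
    """
    out = class_map.astype(np.int32).copy()
    for (class_id, inst_idx), mask in instance_masks.items():
        out[mask] = class_id * 1000 + inst_idx
    return out

# Toy 4x4 image: class 26 ("car") split into two instances.
class_map = np.full((4, 4), 26, dtype=np.int32)
top = np.zeros((4, 4), dtype=bool)
top[:2] = True
ids = encode_instance_ids(class_map, {(26, 0): top, (26, 1): ~top})
print(sorted(int(v) for v in np.unique(ids)))  # [26000, 26001]
```

The exact file names and ID scheme used by this repository may differ; this only illustrates the encoding idea.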


qqlu commented Aug 8, 2018

I have solved those two problems.

However, I find that when I evaluate the ground truth of the validation set with the code, I only get 0.45 mAP (0.5–0.95), and I am sure there is no problem with my code.

@anirudh-chakravarthy

Hi,

I am unable to generate prediction text files in the required format. I have generated COCO-style predictions in a JSON file. Could you shed some light on how you managed this?
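One possible conversion (assuming the Cityscapes-style submission format, which this repo may or may not follow exactly): for each image, write one `<image>.txt` whose lines are `<relative mask png> <label id> <confidence>`, plus one binary mask PNG per predicted instance. COCO-style JSON predictions would first need their RLE masks decoded (e.g. with `pycocotools.mask.decode`) into boolean arrays. A sketch:

```python
import os
import tempfile

import numpy as np
from PIL import Image

def write_cityscapes_predictions(image_name, instances, out_dir):
    """Write one prediction .txt plus one binary mask PNG per instance.

    `instances` is a list of (label_id, score, bool_mask) tuples, e.g.
    obtained by decoding COCO-style RLE masks. The txt/mask layout here
    is an assumed Cityscapes-style format, not taken from this repo.
    """
    mask_dir = os.path.join(out_dir, "masks")
    os.makedirs(mask_dir, exist_ok=True)
    lines = []
    for i, (label_id, score, mask) in enumerate(instances):
        mask_name = f"{image_name}_{i}.png"
        Image.fromarray(mask.astype(np.uint8) * 255).save(
            os.path.join(mask_dir, mask_name))
        lines.append(f"masks/{mask_name} {label_id} {score:.4f}")
    txt_path = os.path.join(out_dir, f"{image_name}.txt")
    with open(txt_path, "w") as f:
        f.write("\n".join(lines) + "\n")
    return txt_path

# Toy usage: one 2x2 "car" (label 26) prediction.
mask = np.array([[1, 0], [0, 0]], dtype=bool)
out = tempfile.mkdtemp()
path = write_cityscapes_predictions("frankfurt_000000", [(26, 0.9, mask)], out)
print(open(path).read())  # masks/frankfurt_000000_0.png 26 0.9000
```

If the repo expects a different line format or mask encoding, only the `lines.append(...)` and mask-saving steps need to change.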
