
2 pixels off (vertically) #92

Open
marcok opened this issue Jan 23, 2020 · 2 comments

marcok commented Jan 23, 2020

First, thanks a lot for your work; I was able to achieve incredible results with it!

During manual validation in a photo-editing tool, I had the strange impression that, in the lower 50% of the prediction, the object had a 2-pixel border (only in the vertical direction).
I quickly hacked the validation function to verify my impression with this code, where I basically just remove the first two rows of the image:

            # Shift the prediction up by two rows along the height axis (H = 960),
            # wrapping the displaced top rows around to the bottom.
            pred[:, :, 0:958, :] = pred[:, :, 2:, :]
            pred[:, :, 958:960, :] = pred[:, :, 0:2, :]

            confusion_matrix = get_confusion_matrix(
                label,
                pred,
                size,
                config.DATASET.NUM_CLASSES,
                config.TRAIN.IGNORE_LABEL)

And indeed, the mIoU increased dramatically.
before hack: MeanIU: 0.9868, Pixel_Acc: 0.9934, Mean_Acc: 0.9938
after hack:  MeanIU: 0.9900, Pixel_Acc: 0.9950, Mean_Acc: 0.9954
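The two assignments in the hack above amount to a circular shift of the prediction up by two rows along the height axis. A minimal NumPy sketch of the same operation (the N x C x H x W layout and the dummy shapes are assumptions for illustration; only the H = 960 height comes from the snippet):

```python
import numpy as np

# Dummy prediction tensor in an assumed N x C x H x W layout, height 960 as above.
pred = np.arange(2 * 3 * 960 * 4, dtype=np.float32).reshape(2, 3, 960, 4)

# The hack from the issue, applied to a copy so the second assignment
# still reads the original (not already-shifted) top rows.
hacked = pred.copy()
hacked[:, :, 0:958, :] = pred[:, :, 2:, :]
hacked[:, :, 958:960, :] = pred[:, :, 0:2, :]

# Equivalent one-liner: a circular shift of -2 along the height axis.
rolled = np.roll(pred, -2, axis=2)
assert np.array_equal(hacked, rolled)
```

Note that the snippet in the comment performs the shift in place on the same tensor, so its second assignment reads rows that the first one already overwrote; for the two wrap-around border rows this barely affects the metric, but a copy (or `roll`) avoids the ambiguity.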

Obviously it doesn't make much sense to train the model against this hacked validation function.
To verify it's not caused by the changes I made to the code, I re-trained the model with the standard seg_hrnet_ocr_w48_train_512x1024_sgd_lr1e-2_wd5e-4_bs_12_epoch484.yaml and got the same problem. I'm using only two classes (background and foreground).
My last assumption is that the images must satisfy x == y or 2x == y; all the examples have a square or 2:1 aspect ratio. I'm using a different ratio for training:
TRAIN:
  IMAGE_SIZE:
  - 920
  - 690
  BASE_SIZE: 1280

I used the Cityscapes loader as a template and adjusted it to my needs, but in my last test the only change was the aspect ratio. I trained on 20k images for ~50 epochs (I stopped because I want to fix this issue first); the validation set is 2k images. The same 2-pixel problem shows up on a test set of 200 images.
Maybe that's causing the problem? Any other ideas?
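One common sanity check for segmentation networks with multi-resolution branches is that the training crop size is a multiple of the network's output stride, so that downsampling and upsampling stay aligned; 920 x 690 is not a multiple of 32. A minimal sketch (the stride of 32 is an assumption based on HRNet's lowest-resolution branch being 1/32 of the input, not something stated in this issue):

```python
def pad_to_multiple(size, stride=32):
    """Round a spatial dimension up to the nearest multiple of `stride`.

    The default stride of 32 is an assumption about the network's
    total downsampling factor; adjust it for your model.
    """
    return ((size + stride - 1) // stride) * stride

for dim in (920, 690):
    print(dim, "->", pad_to_multiple(dim))  # 920 -> 928, 690 -> 704
```

Padding (rather than resizing) to the aligned size keeps the pixel grid intact, and the padded border can be cropped off the prediction afterwards.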


marcok commented Jan 23, 2020

The problem was indeed connected to the wrong aspect ratio.

PkuRainBow (Collaborator) commented

@marcok Please provide more details about your modifications.
