performance gap with pytorch 0.4.1: mIoU 79.99 #67
Comments
Hi @Wangzhuoying0716, would you mind sharing your training settings? I don't know where I'm going wrong. My training settings:
@Hussainflr If there is no error message and training just gets stuck, my guess is that memory is nearly full. You can check the memory usage while it is stuck to see whether it is almost exhausted.
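A quick way to check this from inside the training process (a minimal sketch; the device index is an assumption, `memory_cached` was later renamed `memory_reserved` in newer PyTorch, and running `nvidia-smi` from a shell works just as well):

```python
import torch

def report_gpu_memory(device=0):
    # Amounts in MiB; memory_cached() covers the caching allocator's pool.
    allocated = torch.cuda.memory_allocated(device) / 1024 ** 2
    cached = torch.cuda.memory_cached(device) / 1024 ** 2
    print("GPU %d: %.1f MiB allocated, %.1f MiB cached" % (device, allocated, cached))

# Call periodically inside the training loop to watch for growth.
report_gpu_memory()
```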
@Hussainflr I also used …
@Wangzhuoying0716
Hi @sunke123, LOSS.CLASS_BALANCE is defined in lib/config/default.py, but is it actually used anywhere?
@whiteinblue
@sunke123 Thank you for your reply, but I'm still confused: where is the LOSS.CLASS_BALANCE parameter actually used?
Hello, I met the same problem. Have you solved it?
@whiteinblue CLASS_BALANCE is used in the loss function.
So where is the LOSS.CLASS_BALANCE parameter used? Reading the code, it seems that the class-balance weights are applied on the Cityscapes dataset no matter what CLASS_BALANCE is set to.
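For context, this is roughly how per-class balance weights enter a cross-entropy criterion in PyTorch. The weight values below are illustrative, not the repo's actual Cityscapes weights (which appear to be built by the dataset class and handed to the criterion unconditionally, matching the observation above):

```python
import torch
import torch.nn as nn

NUM_CLASSES = 19      # Cityscapes trainIds
IGNORE_LABEL = 255    # pixels with this label are excluded from the loss

# Hypothetical weights: rare classes get up-weighted.
class_weights = torch.ones(NUM_CLASSES)
class_weights[16] = 2.0  # e.g. boost a rare class such as 'train'

criterion = nn.CrossEntropyLoss(weight=class_weights, ignore_index=IGNORE_LABEL)

# logits: (N, C, H, W) network output; labels: (N, H, W) trainIds
logits = torch.randn(2, NUM_CLASSES, 64, 64)
labels = torch.randint(0, NUM_CLASSES, (2, 64, 64))
loss = criterion(logits, labels)
print(loss.item())
```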
Sorry to bother you! I trained with seg_hrnet_w48_train_512x1024_sgd_lr1e-2_wd5e-4_bs_12_epoch484.yaml, but the validation mIoU for single-scale, no-flip inference came out to 79.99, almost 1% lower than the reported result (80.9). Is this normal run-to-run variation, or did I do something wrong?
Here is the training log:
https://pan.baidu.com/s/1utKUVuBEjDBtfgOk7A-5sQ (password: jckv)
Thank you!
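One possible source of such a gap is run-to-run nondeterminism: cudnn.benchmark, which many configs enable for speed, picks convolution algorithms nondeterministically per run. A minimal seeding sketch for the PyTorch 0.4.x era (the seed value is arbitrary, and some CUDA ops remain nondeterministic even with these settings):

```python
import random
import numpy as np
import torch

def set_seed(seed=304):  # arbitrary seed, purely illustrative
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    # Trades some speed for repeatability.
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False

set_seed()
```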