There are three backward calls inside the gradient balancing between the generator loss and the OCR loss:
convolutional-handwriting-gan/models/ScrabbleGAN_baseModel.py, line 354 (commit f7daa50)
convolutional-handwriting-gan/models/ScrabbleGAN_baseModel.py, line 368 (commit f7daa50)
convolutional-handwriting-gan/models/ScrabbleGAN_baseModel.py, line 374 (commit f7daa50)
Won't these calls accumulate gradients that then get applied when optimizer.step() is called? I thought the objective here was simply to compute the gradient-balancing terms and multiply them into the loss. Could you give an overview of what is going on inside the gradient balancing, in case I misunderstood something?
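(For reference, PyTorch adds the result of each backward() call into .grad until the gradients are explicitly zeroed, so consecutive backward calls do accumulate. A toy illustration, not taken from the repository's code:)

```python
import torch

# Toy parameter shared by two losses.
w = torch.ones(3, requires_grad=True)
loss_a = (w * 2).sum()
loss_b = (w * 3).sum()

loss_a.backward()
print(w.grad)   # tensor([2., 2., 2.])

# Without a zero_grad in between, the second backward() adds to .grad.
loss_b.backward()
print(w.grad)   # tensor([5., 5., 5.])  -> 2 + 3, i.e. accumulated
```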
I just looked at the code, and I think you're right: there should be a self.netG.zero_grad() between the first and second backprop. The third one is performed without gradient accumulation, just so that the graph won't be retained.
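A minimal sketch of how the balanced generator step could look with that fix in place. Only loss_G, loss_OCR, netG, and optimizer.step() come from the thread; the helper name balanced_generator_step, the opt_G handle, and the std-ratio balancing rule are illustrative assumptions, not the repository's actual implementation:

```python
import torch

def balanced_generator_step(loss_G, loss_OCR, netG, opt_G, eps=1e-8):
    """Sketch of a gradient-balanced generator update (illustrative only).

    loss_G / loss_OCR: scalar adversarial and recognizer losses computed
    from the same generator output; netG / opt_G: generator and its optimizer.
    """
    opt_G.zero_grad()

    # 1st backward: only to measure the gradient scale of the adversarial loss.
    loss_G.backward(retain_graph=True)
    g_adv = torch.cat([p.grad.flatten() for p in netG.parameters()
                       if p.grad is not None])

    netG.zero_grad()  # the zero_grad the reply says is missing

    # 2nd backward: only to measure the gradient scale of the OCR loss.
    loss_OCR.backward(retain_graph=True)
    g_ocr = torch.cat([p.grad.flatten() for p in netG.parameters()
                       if p.grad is not None])

    # Scale the OCR loss so its gradient magnitude matches the adversarial one
    # (one possible formulation; the repository's exact rule may differ).
    alpha = (g_adv.std() / (g_ocr.std() + eps)).item()

    netG.zero_grad()
    # 3rd backward: the gradients the optimizer actually applies. No
    # retain_graph here, so the computation graph is freed after this call.
    # (Any discriminator parameters in the graph also receive grads here and
    # are assumed to be zeroed separately before the discriminator update.)
    (loss_G + alpha * loss_OCR).backward()
    opt_G.step()
```

The two points the reply makes are visible here: the zero_grad() between the first and second backward keeps the second measurement from being contaminated by the first, and the final backward omits retain_graph so the graph is freed before opt_G.step().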