feat/ Optimizers and LR scheduler for segmentation module #95
MicheleCattaneo wants to merge 1 commit into meteofrance:main
Conversation
Good idea @MicheleCattaneo, I think some "simple" things can already be configured through the CLI and the YAML settings file: Lightning-AI/pytorch-lightning#18210. Have you tried that? In other words, couldn't we rely on already existing features of the CLI to achieve what you want? Maybe this also requires removing some code from our LightningModule.
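For illustration, a minimal sketch of that approach (the entry point and import path are placeholders, and the exact flag names depend on the Lightning version in use):

```python
# Minimal sketch, not the project's actual entry point: LightningCLI can wire
# up the optimizer and LR scheduler from the command line, e.g.
#
#   python main.py fit --optimizer AdamW --optimizer.lr 1e-3 \
#       --lr_scheduler CosineAnnealingLR --lr_scheduler.T_max 100
#
# or from the YAML settings file:
#
#   optimizer:
#     class_path: torch.optim.AdamW
#     init_args:
#       lr: 1.0e-3
#   lr_scheduler:
#     class_path: torch.optim.lr_scheduler.CosineAnnealingLR
#     init_args:
#       T_max: 100
from lightning.pytorch.cli import LightningCLI

from my_project.segmentation import SegmentationLightningModule  # placeholder import path


def main():
    # Automatic optimizer/scheduler injection only applies when the module
    # does not hard-code them in configure_optimizers.
    LightningCLI(SegmentationLightningModule)


if __name__ == "__main__":
    main()
```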
Hi, personally I feel like Lightning sometimes does too much "magic", and I wish that stuff were defined more explicitly, but that's just my personal taste.
@MicheleCattaneo I think it is time to sit down and think about our LightningModule(s) a bit more, also including my colleagues. I'll organize a virtual meeting during the summer or, worst case scenario, in September.
@MicheleCattaneo if you do the same as we advise in the README.md, isn't that enough? Following a discussion with colleagues, we think a full project as an example/illustration plus a cookie-cutter template would be ideal. Your thoughts?
Hi! I had a look and yes, using the CLI, Lightning can inject optimizers and some learning rate schedulers. However, I could not manage to use […]. Something else that I noticed is that […]. It seems to me that, at the cost of having a more complicated definition of […], […].

By setting default arguments on the module, we can maintain the possibility of a very simple module instantiation (i.e. the common Adam and no LR scheduler), as shown in the sketch below. What do you think? Do you see other downsides?

Could you also clarify your comment about the full project and cookie-cutter template? What do you suggest?
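To make the trade-off concrete, a hypothetical usage sketch (the argument names are illustrative placeholders, not the actual PR signature; `model` is assumed to be an existing `torch.nn.Module`):

```python
# Illustrative only (argument names are placeholders, not the actual PR
# signature); `model` is assumed to be an existing torch.nn.Module.
import torch

# Default usage: plain Adam, no LR scheduler.
module = SegmentationLightningModule(model)

# Customized usage: SGD with momentum plus ReduceLROnPlateau conditioned on
# the validation loss.
module = SegmentationLightningModule(
    model,
    optimizer_cls=torch.optim.SGD,
    optimizer_kwargs={"lr": 1e-2, "momentum": 0.9},
    scheduler_cls=torch.optim.lr_scheduler.ReduceLROnPlateau,
    scheduler_kwargs={"factor": 0.5, "patience": 5},
    monitor="val_loss",
)
```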
Addition of Optimizers and LR Schedulers to the Segmentation Module
Description
This PR introduces some more flexibility in the choice of optimizer and learning rate scheduler for the
`SegmentationLightningModule`. The constructor additionally takes:
- an optimizer class (from `torch.optim`) and a dictionary of kwargs;
- a learning rate scheduler class (from `torch.optim.lr_scheduler`) and a dictionary of kwargs;
- a metric to monitor for the `step()` function call (for certain schedulers only). For example, `ReduceLROnPlateau.step()` needs to be conditioned on something like the validation loss.

I implemented this following the Lightning documentation; a rough sketch of the idea is shown below.
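A rough sketch of the proposed interface (names are illustrative, not necessarily the actual code in this PR; the `lightning.pytorch` namespace is assumed), following the `lr_scheduler_config` return format from the Lightning documentation:

```python
# Rough sketch of the proposed interface (names are illustrative, not
# necessarily the actual PR code), following the lr_scheduler_config format
# from the Lightning documentation.
from typing import Optional, Type

import torch
import lightning.pytorch as pl


class SegmentationLightningModule(pl.LightningModule):
    def __init__(
        self,
        model: torch.nn.Module,
        optimizer_cls: Type[torch.optim.Optimizer] = torch.optim.Adam,
        optimizer_kwargs: Optional[dict] = None,
        scheduler_cls: Optional[type] = None,
        scheduler_kwargs: Optional[dict] = None,
        monitor: Optional[str] = None,  # e.g. "val_loss" for ReduceLROnPlateau
    ):
        super().__init__()
        self.model = model
        self.optimizer_cls = optimizer_cls
        self.optimizer_kwargs = optimizer_kwargs or {}
        self.scheduler_cls = scheduler_cls
        self.scheduler_kwargs = scheduler_kwargs or {}
        self.monitor = monitor

    def configure_optimizers(self):
        optimizer = self.optimizer_cls(self.model.parameters(), **self.optimizer_kwargs)
        if self.scheduler_cls is None:
            # Default behaviour: optimizer only, no LR scheduler.
            return optimizer
        scheduler = self.scheduler_cls(optimizer, **self.scheduler_kwargs)
        lr_scheduler_config = {"scheduler": scheduler}
        if self.monitor is not None:
            # Needed for schedulers such as ReduceLROnPlateau, whose step()
            # is conditioned on a monitored metric.
            lr_scheduler_config["monitor"] = self.monitor
        return {"optimizer": optimizer, "lr_scheduler": lr_scheduler_config}
```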
The code is not tested yet; I just wanted to give an idea of the changes that would be introduced and see whether there is interest in this feature.
Looking forward to hearing your opinion!