Breaking changes
lr_finder() now divides the range between start_lr and end_lr into log-spaced intervals by default, following the fast.ai implementation; cf. Sylvain Gugger's post: https://sgugger.github.io/how-do-you-find-a-good-learning-rate.html. The previous behavior can be restored by passing log_spaced_intervals=FALSE to the function. (#82, @skeydan)
plot.lr_records() now also plots an exponentially weighted moving average of the loss (again, see Sylvain Gugger's post), with a weighting coefficient of 0.9, which seems a reasonable value for the default setting of 100 learning-rate-incrementing intervals. (#82, @skeydan)
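For orientation, a minimal sketch of how the two entries above fit together; the toy module, data, and learning-rate bounds are made up, and only lr_finder(), its log_spaced_intervals argument, and the plot() method come from the entries themselves:

```r
library(torch)
library(luz)

# Toy module and synthetic data, purely for illustration.
net <- nn_module(
  initialize = function() {
    self$fc <- nn_linear(10, 1)
  },
  forward = function(x) self$fc(x)
)
dl <- dataloader(
  tensor_dataset(torch_randn(200, 10), torch_randn(200, 1)),
  batch_size = 32, shuffle = TRUE
)

records <- net %>%
  setup(loss = nn_mse_loss(), optimizer = optim_adam) %>%
  lr_finder(
    dl,
    start_lr = 1e-6,
    end_lr = 1,
    log_spaced_intervals = TRUE  # the new default; FALSE restores linear spacing
  )

plot(records)  # raw losses plus the exponentially smoothed curve
```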
Documentation
Many wording improvements in the getting started guides (#81, #94, @jonthegeek).
New features
Added a MixUp callback, together with a helper loss function and the underlying functional logic. (#82, @skeydan)
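A hedged sketch of how the MixUp pieces might be wired up; the names luz_callback_mixup() and nn_mixup_loss(), as well as the assumption that the wrapped criterion must be created with reduction = "none", should be checked against the package reference, and the model and data are invented:

```r
library(torch)
library(luz)

# Toy classifier and synthetic data, purely for illustration.
net <- nn_module(
  initialize = function() {
    self$fc <- nn_linear(10, 3)
  },
  forward = function(x) self$fc(x)
)
x <- torch_randn(200, 10)
y <- torch_tensor(sample(1:3, 200, replace = TRUE), dtype = torch_long())
dl <- dataloader(tensor_dataset(x, y), batch_size = 32, shuffle = TRUE)

fitted <- net %>%
  setup(
    # Assumed: the helper loss wraps a non-reduced base criterion so that the
    # two mixed targets can be weighted item-wise before averaging.
    loss = nn_mixup_loss(nn_cross_entropy_loss(reduction = "none")),
    optimizer = optim_adam
  ) %>%
  fit(
    dl,
    epochs = 5,
    callbacks = list(luz_callback_mixup())  # mixes inputs and targets per batch
  )
```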
Added luz_callback_gradient_clip(), inspired by FastAI's implementation. (#90)
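A sketch of attaching the callback while fitting; the max_norm and norm_type arguments are assumed to mirror torch's nn_utils_clip_grad_norm_(), and the module and data are illustrative:

```r
library(torch)
library(luz)

net <- nn_module(
  initialize = function() {
    self$fc <- nn_linear(10, 1)
  },
  forward = function(x) self$fc(x)
)
dl <- dataloader(
  tensor_dataset(torch_randn(200, 10), torch_randn(200, 1)),
  batch_size = 32
)

fitted <- net %>%
  setup(loss = nn_mse_loss(), optimizer = optim_adam) %>%
  fit(
    dl,
    epochs = 5,
    # Clip gradients to an L2 norm of 1 before each optimizer step.
    callbacks = list(luz_callback_gradient_clip(max_norm = 1, norm_type = 2))
  )
```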
Added a backward argument to setup(), allowing one to customize how backward() is called on the scalar loss value. (#93)
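A sketch of the new argument, assuming backward is called with the scalar loss tensor; the body shown just reproduces what the default presumably does, and the surrounding model and data are illustrative:

```r
library(torch)
library(luz)

net <- nn_module(
  initialize = function() {
    self$fc <- nn_linear(10, 1)
  },
  forward = function(x) self$fc(x)
)
dl <- dataloader(
  tensor_dataset(torch_randn(200, 10), torch_randn(200, 1)),
  batch_size = 32
)

fitted <- net %>%
  setup(
    loss = nn_mse_loss(),
    optimizer = optim_adam,
    # Assumed: `backward` receives the loss; a custom function could,
    # e.g., scale it before calling $backward().
    backward = function(loss) loss$backward()
  ) %>%
  fit(dl, epochs = 5)
```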
Added luz_callback_keep_best_model() to reload the weights of the best model once training is finished. (#95)
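A sketch of keeping the best weights; the monitor argument and the "valid_loss" metric name are assumptions, a validation set is supplied so there is something to monitor, and the rest is illustrative:

```r
library(torch)
library(luz)

net <- nn_module(
  initialize = function() {
    self$fc <- nn_linear(10, 1)
  },
  forward = function(x) self$fc(x)
)
train_dl <- dataloader(
  tensor_dataset(torch_randn(200, 10), torch_randn(200, 1)),
  batch_size = 32, shuffle = TRUE
)
valid_dl <- dataloader(
  tensor_dataset(torch_randn(50, 10), torch_randn(50, 1)),
  batch_size = 32
)

fitted <- net %>%
  setup(loss = nn_mse_loss(), optimizer = optim_adam) %>%
  fit(
    train_dl,
    epochs = 20,
    valid_data = valid_dl,
    # After fit() returns, the model holds the weights of the best epoch.
    callbacks = list(luz_callback_keep_best_model(monitor = "valid_loss"))
  )
```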