What does it signify when the reported `Gradient norm` is constant (stuck, not changing) across iterations when using `NewtonTrustRegion`? As an example, below is the output from the first two iterations of a minimization problem run with `NewtonTrustRegion`. My initial point is the output of a prior round of optimization that also got stuck at this same `Function value` and `Gradient norm`.
```
Iter     Function value   Gradient norm
     0   4.925179e+06     1.951972e+03
 * time: 0.00021505355834960938
     1   4.925179e+06     1.951972e+03
 * time: 0.0005660057067871094
```
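For reference, the trace above comes from a call along these lines (a minimal sketch; `f` and `x0` are placeholders for my actual objective and starting point):

```julia
using Optim

f(x) = sum(abs2, x)   # placeholder for my actual (complicated) objective
x0 = randn(10)        # placeholder start; in practice, a prior round's optimum

# NewtonTrustRegion with per-iteration tracing, as in the output above.
res = optimize(f, x0, NewtonTrustRegion(),
               Optim.Options(show_trace = true))
```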
Some more detail: if I switch to `LBFGS`, the optimization successfully continues (the `Function value` decreases), but of course first-order methods are slower, so it would be ideal to switch back to `NewtonTrustRegion`. Even if I let `LBFGS` run for a while to find a moderately different candidate minimizer, the same stuck-with-constant-`Gradient norm` behavior re-emerges when I switch back to `NewtonTrustRegion`.
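Concretely, the switching pattern is roughly this (again a sketch with placeholder `f` and `x0`):

```julia
using Optim

f(x) = sum(abs2, x)   # placeholder objective
x0 = randn(10)        # placeholder starting point

# LBFGS makes progress: the function value keeps decreasing.
res_lbfgs = optimize(f, x0, LBFGS(), Optim.Options(iterations = 500))

# Warm-start NewtonTrustRegion from the LBFGS candidate; this is where
# the gradient norm gets stuck again.
res_ntr = optimize(f, Optim.minimizer(res_lbfgs), NewtonTrustRegion())
```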
I would provide code, but it is complicated, involves a lot of data, and the problem only occurs in some of the models I've run. I'm really just hoping for some intuition about which options or tuning parameters to adjust to bounce out of difficult spots. I have already tried `allow_f_increases = true`; that did not solve the issue.
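These are the kinds of knobs I mean: `NewtonTrustRegion` exposes trust-region tuning parameters as keyword arguments, and `allow_f_increases` goes through `Optim.Options`. A minimal sketch (the values here are illustrative guesses, not known fixes):

```julia
using Optim

f(x) = sum(abs2, x)   # placeholder objective
x0 = randn(10)

# Trust-region knobs on the method itself (illustrative values):
method = NewtonTrustRegion(
    initial_delta = 1.0,    # starting trust-region radius
    delta_hat     = 100.0,  # maximum trust-region radius
    eta           = 0.1,    # step-acceptance threshold on the improvement ratio
    rho_lower     = 0.25,   # shrink the region when the ratio falls below this
    rho_upper     = 0.75,   # grow the region when the ratio exceeds this
)

res = optimize(f, x0, method,
               Optim.Options(allow_f_increases = true, iterations = 1_000))
```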
Excellent package, many thanks.