Unlike the current implementation of `cv` in nyaggle, the models trained by `lgb.cv` and `xgb.cv` have an equal number of trees across all folds.
Since these "balanced" models may work better when the dataset is small, we sometimes want to extract the trained models from `lgb.cv` or `xgb.cv` and use them on test data.
It would therefore be useful if nyaggle's `run_experiment` and `cross_validate` also offered an option to use these `cv` functions.
ref:
https://blog.amedama.jp/entry/lightgbm-cv-model
https://blog.amedama.jp/entry/xgboost-cv-model