In my current setup, preparing my pandas dataset on Cerebro start takes roughly 40% of the whole execution time (in addition to actually loading the data from a database). This rather high portion is due to two points: the data is modified row-wise instead of column-wise, and this process is repeated once for every strategy optimization run.
Given the static nature of the data during backtesting, the dataframe could be prepared more efficiently (column-wise) and cached between strategy optimization runs.