This is very cool work and the documentation is great!
Two questions:
I have already read @patrick-kidger's answer in issue #106 (Latent SDE for irregularly spaced time series?) about irregular time series for latent SDE models. Is it also a good idea to include missingness channels and fill-forward shorter samples, as in torchcde/irregular_data.py, when the encoder is a GRU rather than a neural CDE? I would expect these extra channels to change how the encoder is initialized, since input_size would have to account for them. However, I'm a little confused about how this would change the contextualize() and drift (f()) functions, since each sample would now have its own unique set of timestamps.
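To make the first question concrete, this is roughly the preprocessing I have in mind before the GRU encoder (augment_with_mask is just my own sketch, loosely following the masking/fill-forward idea in torchcde/irregular_data.py, not a function from either library):

```python
import torch

def augment_with_mask(xs):
    """Hypothetical preprocessing: append per-channel observed masks and
    forward-fill missing values (NaNs) so the result can be fed to a GRU encoder.

    xs: tensor of shape (t, batch, data_size), with NaN marking missing entries.
    Returns a tensor of shape (t, batch, 2 * data_size).
    """
    mask = (~torch.isnan(xs)).float()                # 1 where observed, 0 where missing
    filled = xs.clone()
    for t in range(1, filled.size(0)):
        missing = torch.isnan(filled[t])
        filled[t][missing] = filled[t - 1][missing]  # carry the last observation forward
    filled = torch.nan_to_num(filled, nan=0.0)       # leading gaps with nothing to fill -> 0
    return torch.cat([filled, mask], dim=-1)         # the GRU would then need input_size = 2 * data_size
```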
In latent_sde_lorenz.py, I believe the sample() function only samples from the learned prior distribution. To reconstruct data by conditionally sampling the prior given a 'warm-up' time series, would I need to write a different sample() function? I'm guessing I would just need to compute the context from the warm-up segment before sampling z0 (as in lines 173–177 of latent_sde_lorenz.py), with 'drift' set to 'f' instead of 'h', and then sample as usual in sample(). Is this accurate?
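Here is a rough sketch of what I mean for the second question, reusing the names from latent_sde_lorenz.py (conditional_sample is my own hypothetical function, and I may well be misreading how f vs. h should be used here):

```python
import torch
import torchsde

@torch.no_grad()
def conditional_sample(model, xs_warmup, ts_warmup, ts_future, dt=1e-2):
    """Hypothetical sketch: condition on a warm-up segment before sampling.

    model:      a LatentSDE instance as defined in latent_sde_lorenz.py
    xs_warmup:  observed warm-up data, shape (t_warmup, batch, data_size)
    ts_warmup:  timestamps of the warm-up segment, shape (t_warmup,)
    ts_future:  timestamps to extrapolate over, starting at ts_warmup[-1]
    """
    # Run the encoder backwards over the warm-up data to build the context,
    # as in the training forward pass.
    ctx = model.encoder(torch.flip(xs_warmup, dims=(0,)))
    ctx = torch.flip(ctx, dims=(0,))
    model.contextualize((ts_warmup, ctx))

    # Sample z0 from the approximate posterior rather than the prior.
    qz0_mean, qz0_logstd = model.qz0_net(ctx[0]).chunk(chunks=2, dim=1)
    z0 = qz0_mean + qz0_logstd.exp() * torch.randn_like(qz0_mean)

    # Reconstruct the warm-up window with the posterior drift f (which consumes
    # the context), then continue past it with the context-free prior drift h.
    zs_warm = torchsde.sdeint(model, z0, ts_warmup, names={"drift": "f"}, dt=dt)
    zs_future = torchsde.sdeint(model, zs_warm[-1], ts_future, names={"drift": "h"}, dt=dt)
    return model.projector(zs_warm), model.projector(zs_future)
```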
Any tips would be appreciated. Thanks!