In the implementation code, the transform part uses a self-attention structure. With self-attention, the number of history steps must equal the number of prediction steps, so if I want to use 12 history steps to predict only one step, something may be wrong here.
However, since I have not read the original paper, I am not sure whether this detail matches the method described there.
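To make the concern concrete, here is a minimal numpy sketch (not the repository's actual code) of why plain self-attention ties the two lengths together: queries, keys, and values all come from the same input, so there is exactly one output position per input position.

```python
import numpy as np

def self_attention(x):
    # x: (seq_len, d_model). Q, K, V are all derived from x itself
    # (here, identity projections for simplicity), so the attention
    # matrix is (seq_len, seq_len) and the output keeps seq_len.
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)                       # (seq_len, seq_len)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)      # row-wise softmax
    return weights @ x                                  # (seq_len, d_model)

history = np.random.randn(12, 8)   # 12 history steps, hypothetical d_model = 8
out = self_attention(history)
print(out.shape)                   # (12, 8): one output per history step
```

So with 12 history steps the module necessarily emits 12 outputs. Predicting a single step would require something extra, e.g. keeping only the last output position or using a length-1 query against the history (cross-attention); whether the original paper does either is exactly what I am unsure about.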