How to mitigate the impact of abnormal points in historical data on training? #3
Comments
Well, in our experiments there are abnormal points in the historical data as well. BTW, as for the influence of labels, you can refer to Fig. 7 in the Donut paper (Unsupervised Anomaly Detection via Variational Auto-Encoder for Seasonal KPIs in Web Applications).
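The label-aware training the reply points to is the Donut paper's modified ELBO (M-ELBO): points labeled abnormal (or missing) are excluded from the reconstruction term, and the prior term is scaled by the fraction of normal points in the window. A minimal sketch, assuming per-point log-probabilities have already been computed for one sliding window (the function name and argument names are illustrative, not from the Bagel/Donut codebase):

```python
import numpy as np

def masked_elbo(recon_log_prob, prior_log_prob, posterior_log_prob, normal_mask):
    """Sketch of Donut-style M-ELBO for one window.

    recon_log_prob     : per-point log p(x_w | z), shape (W,)
    prior_log_prob     : scalar log p(z)
    posterior_log_prob : scalar log q(z | x)
    normal_mask        : 1 for normal points, 0 for labeled-abnormal/missing
    """
    alpha = normal_mask.astype(float)
    # Scale the prior term by the fraction of normal points (beta in the paper).
    beta = alpha.sum() / alpha.size
    # Abnormal points contribute nothing to the reconstruction term.
    return (alpha * recon_log_prob).sum() + beta * prior_log_prob - posterior_log_prob

# Toy window: the third point is labeled abnormal, so its very poor
# reconstruction log-likelihood (-10.0) does not drag the objective down.
recon = np.array([-1.0, -1.0, -10.0, -1.0])
mask = np.array([1, 1, 0, 1])
print(masked_elbo(recon, prior_log_prob=-2.0, posterior_log_prob=-1.5, normal_mask=mask))
# -> -3.0  (= -3.0 reconstruction + 0.75 * -2.0 prior + 1.5)
```

With this objective, simply concatenating labels as an input feature is unnecessary; the labels act as a mask on the loss instead, which is why Fig. 7 of the paper compares performance at different labeled-anomaly ratios.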
Hi Zeyan, thank you for your answer! However, our training dataset is not very large, and sometimes there exists a series of abnormal points in the historical data, which has affected the model performance. BTW, in my last question I said that "I have tried to introduce the labels of the data into the model training, but it has not improved much."
Hi, Zeyan Li. I have a question when applying the model to my datasets:
Bagel and Donut assume that the historical data follow a normal pattern;
however, when the amount of historical data is not very large,
the impact of the abnormal points cannot be ignored. So I want to ask:
how can I mitigate the impact of abnormal points in historical data on training?
I have tried to introduce the labels of the data into the model training, but it has not improved much.
I would appreciate it if you could help me solve this problem, thank you.