Details
| Original language | English |
| --- | --- |
| Title of host publication | Proceedings - 2024 IEEE/ACM 3rd International Conference on AI Engineering - Software Engineering for AI, CAIN 2024 |
| Subtitle of host publication | Proceedings of the IEEE/ACM 3rd International Conference on AI Engineering - Software Engineering for AI |
| Pages | 222-233 |
| Number of pages | 12 |
| ISBN (electronic) | 9798400705915 |
| Publication status | Published - 11 Jun 2024 |
| Event | CAIN 2024: 3rd International Conference on AI Engineering – Software Engineering for AI, Lisbon, Portugal. Duration: 14 Apr 2024 → 15 Apr 2024 |
Abstract
Anomaly detection techniques are essential in automating the monitoring of IT systems and operations. These techniques imply that machine learning algorithms are trained on operational data corresponding to a specific period of time and that they are continuously evaluated on newly emerging data. Operational data is constantly changing over time, which affects the performance of deployed anomaly detection models. Therefore, continuous model maintenance is required to preserve the performance of anomaly detectors over time. In this work, we analyze two different anomaly detection model maintenance techniques in terms of the model update frequency, namely blind model retraining and informed model retraining. We further investigate the effects of updating the model by retraining it on all the available data (full-history approach) and only the newest data (sliding window approach). Moreover, we investigate whether a data change monitoring tool is capable of determining when the anomaly detection model needs to be updated through retraining.
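The retraining strategies compared in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: `maintain_detector`, `train_detector`, and `drift_detected` are hypothetical names, standing in for an anomaly detection model's training routine and a data change (concept drift) monitoring tool.

```python
# Minimal sketch (assumption: not the paper's code) contrasting blind vs.
# informed retraining and full-history vs. sliding-window data selection.
# `train_detector` and `drift_detected` are hypothetical placeholders.
from collections import deque
from typing import Callable, Iterable, List, Sequence, Tuple


def maintain_detector(
    batches: Sequence[Sequence[float]],
    train_detector: Callable[[List[float]], object],
    drift_detected: Callable[[Sequence[float]], bool],
    strategy: str = "informed",   # "blind": retrain every period; "informed": retrain only on drift
    window: str = "sliding",      # "full": keep all history; "sliding": keep only the newest batches
    window_size: int = 3,         # number of recent batches kept when window == "sliding"
) -> Iterable[Tuple[object, Sequence[float]]]:
    """Replay operational data batch by batch, retraining per the chosen strategy."""
    history = deque(maxlen=window_size if window == "sliding" else None)
    history.append(batches[0])                      # initial training period
    model = train_detector([x for b in history for x in b])

    for batch in batches[1:]:
        history.append(batch)                       # newly emerging data
        if strategy == "blind" or drift_detected(batch):
            model = train_detector([x for b in history for x in b])
        yield model, batch                          # evaluate `model` on `batch` here


# Toy usage with illustrative stand-ins: the "model" is just the mean of its
# training data, and "drift" is a crude jump in the batch mean.
if __name__ == "__main__":
    batches = [[0.1, 0.2, 0.1], [0.2, 0.1, 0.3], [5.0, 5.2, 4.9], [5.1, 4.8, 5.0]]

    def mean_model(data):
        return sum(data) / len(data)

    def mean_shift(batch):
        return abs(sum(batch) / len(batch)) > 1.0

    for model, batch in maintain_detector(batches, mean_model, mean_shift):
        print(round(model, 2), batch)
```

Setting `strategy="blind"` retrains on every new period regardless of drift, while `window="full"` (an unbounded deque) retrains on all available data, corresponding to the full-history approach described above.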
Keywords
- AIOps, anomaly detection, concept drift detection, model maintenance, model monitoring
ASJC Scopus subject areas
- Computer Science (all)
- Software
- Artificial Intelligence
- Engineering (all)
- Safety, Risk, Reliability and Quality
Cite this
Poenaru-Olaru, L., Karpova, N., Miranda da Cruz, L., Rellermeyer, J. S., & Deursen, A. van (2024). Is Your Anomaly Detector Ready for Change? In Proceedings - 2024 IEEE/ACM 3rd International Conference on AI Engineering - Software Engineering for AI, CAIN 2024 (pp. 222-233). https://doi.org/10.1145/3644815.3644961
Research output: Chapter in book/report/conference proceeding › Conference contribution › Research › peer review
TY - GEN
T1 - Is Your Anomaly Detector Ready for Change?
T2 - CAIN 2024
AU - Poenaru-Olaru, Lorena
AU - Karpova, Natalia
AU - Miranda da Cruz, Luis
AU - Rellermeyer, Jan S.
AU - Deursen, Arie van
N1 - Publisher Copyright: © 2024 Copyright is held by the owner/author(s). Publication rights licensed to ACM.
PY - 2024/6/11
Y1 - 2024/6/11
N2 - Anomaly detection techniques are essential in automating the monitoring of IT systems and operations. These techniques imply that machine learning algorithms are trained on operational data corresponding to a specific period of time and that they are continuously evaluated on newly emerging data. Operational data is constantly changing over time, which affects the performance of deployed anomaly detection models. Therefore, continuous model maintenance is required to preserve the performance of anomaly detectors over time. In this work, we analyze two different anomaly detection model maintenance techniques in terms of the model update frequency, namely blind model retraining and informed model retraining. We further investigate the effects of updating the model by retraining it on all the available data (full-history approach) and only the newest data (sliding window approach). Moreover, we investigate whether a data change monitoring tool is capable of determining when the anomaly detection model needs to be updated through retraining.
AB - Anomaly detection techniques are essential in automating the monitoring of IT systems and operations. These techniques imply that machine learning algorithms are trained on operational data corresponding to a specific period of time and that they are continuously evaluated on newly emerging data. Operational data is constantly changing over time, which affects the performance of deployed anomaly detection models. Therefore, continuous model maintenance is required to preserve the performance of anomaly detectors over time. In this work, we analyze two different anomaly detection model maintenance techniques in terms of the model update frequency, namely blind model retraining and informed model retraining. We further investigate the effects of updating the model by retraining it on all the available data (full-history approach) and only the newest data (sliding window approach). Moreover, we investigate whether a data change monitoring tool is capable of determining when the anomaly detection model needs to be updated through retraining.
KW - AIOps
KW - anomaly detection
KW - concept drift detection
KW - model maintenance
KW - model monitoring
UR - http://www.scopus.com/inward/record.url?scp=85196547715&partnerID=8YFLogxK
U2 - 10.1145/3644815.3644961
DO - 10.1145/3644815.3644961
M3 - Conference contribution
SP - 222
EP - 233
BT - Proceedings - 2024 IEEE/ACM 3rd International Conference on AI Engineering - Software Engineering for AI, CAIN 2024
Y2 - 14 April 2024 through 15 April 2024
ER -