Large-scale distributed systems have dense, complex codebases that are expected to perform multiple, interdependent tasks while users interact with them. The ways users interact with these systems can differ and evolve over time, as can the systems themselves. Consequently, anomaly detection (AD) sensors must be able to cope with changes to their operating environment. Otherwise, the sensor may incorrectly classify new legitimate patterns as malicious (a false positive) or continue to accept old, outdated patterns as normal (a false negative). This problem of “model drift” is an almost universally acknowledged hazard for anomaly sensors. However, relatively little work has been done on identifying legitimate modifications, such as changes to a file system or back-end database, and seamlessly updating an operational network AD sensor to reflect them. In this paper, we highlight some of the challenges of keeping an anomaly sensor updated, an important step toward helping anomaly sensors bec...
Angelos Stavrou, Gabriela F. Cretu-Ciocarlie, Mich
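To make the notion of model drift concrete, the toy sketch below (illustrative only, and not the sensor or updating procedure described in this paper; the class name, request paths, traffic mix, and threshold are all hypothetical) shows how a simple frequency-based normality model trained once on old traffic flags newly introduced, legitimate request patterns as anomalous, and how retraining on vetted recent traffic removes those false positives.

# Illustrative sketch only: a toy frequency-based anomaly detector whose
# "normal" model goes stale after a legitimate change to the monitored
# system, producing false positives until the model is updated.
from collections import Counter

class ToyAnomalyDetector:
    """Scores a request path as anomalous if it was rarely seen in training."""

    def __init__(self, threshold=0.01):
        self.counts = Counter()
        self.total = 0
        self.threshold = threshold  # minimum relative frequency to count as "normal"

    def train(self, requests):
        # Fold a batch of observed requests into the normality model.
        self.counts.update(requests)
        self.total += len(requests)

    def is_anomalous(self, request):
        if self.total == 0:
            return True
        freq = self.counts[request] / self.total
        return freq < self.threshold


# Phase 1: build the model from traffic seen before a legitimate change.
old_traffic = ["/index.html"] * 80 + ["/login"] * 20
detector = ToyAnomalyDetector()
detector.train(old_traffic)

# Phase 2: a legitimate update (e.g. a renamed page) shifts the traffic mix;
# the stale model misclassifies the new legitimate path as malicious.
new_traffic = ["/index.html"] * 50 + ["/login"] * 20 + ["/welcome.html"] * 30
false_positives = sum(detector.is_anomalous(r) for r in new_traffic)
print(f"false positives with the stale model: {false_positives}")

# Phase 3: updating the model with the new, vetted traffic removes the drift.
detector.train(new_traffic)
false_positives = sum(detector.is_anomalous(r) for r in new_traffic)
print(f"false positives after updating:       {false_positives}")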