The research goal of my master's thesis (written in cooperation with trivago) was to find real-time-capable solutions for automatically detecting anomalies in time series data streams, which is especially useful for server monitoring. I evaluated several algorithms and finally combined them into my own ensemble algorithm, which meets almost all of the previously gathered requirements.
In the figures, the red area indicates an anomalous region. When the algorithm detects an anomaly outside this area, it is a false positive (to be minimized as much as possible); when it detects an anomaly inside the red area, it is a true positive (what we want to detect). The darker blue line shows the measured values, and the lighter blue lines are the confidence intervals (the maximum and minimum allowed deviation of the measured values). To compare the different algorithms and to calculate an evaluation score for each of them, the datasets were auto-generated, including the anomalous data points and their labels.
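To illustrate the idea of auto-generated, labeled datasets and true/false-positive scoring, here is a minimal sketch (not the thesis generator; all function names, the sine-plus-noise signal shape, and the injected level shift are illustrative assumptions):

```python
import math
import random


def generate_series(n=500, anomaly_start=300, anomaly_len=20, seed=42):
    """Generate a noisy sine-like series with one injected anomalous region.

    Returns (values, labels), where labels[i] is True inside the
    anomalous ("red") region. Parameters are illustrative choices,
    not values from the thesis.
    """
    rng = random.Random(seed)
    values, labels = [], []
    for i in range(n):
        v = math.sin(i / 20.0) + rng.gauss(0, 0.1)
        in_anomaly = anomaly_start <= i < anomaly_start + anomaly_len
        if in_anomaly:
            v += 2.0  # inject a clear level shift as the anomaly
        values.append(v)
        labels.append(in_anomaly)
    return values, labels


def score(detections, labels):
    """Count true positives (inside the labeled region) and false positives."""
    tp = sum(1 for i in detections if labels[i])
    fp = sum(1 for i in detections if not labels[i])
    return tp, fp
```

Because the labels are generated alongside the data, any detector's output can be scored automatically, which is what makes a fair comparison between algorithms possible.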
Very strong trends in the data set are still tricky to handle, especially at the beginning of the measurements, because it is difficult to distinguish between a normal and an anomalous change. Welcome to the topic of anomaly detection! ;-)
» The good
I took enough time to dive deep into the topic (though it is still a huge topic!) and came up with a good algorithm that is very resource-friendly (no loops over the whole dataset, just incremental updates).
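The "incremental updates, no loops over the whole dataset" idea can be sketched with exponentially weighted mean and variance estimates that yield confidence bands in O(1) time and memory per point. This is not the thesis algorithm, just a minimal illustration of the principle; all names and parameter values are assumptions:

```python
class EwmaDetector:
    """Sketch of an incremental anomaly detector.

    Maintains an exponentially weighted mean and variance; each new
    measurement costs O(1) time and O(1) memory, so no pass over the
    historical data is ever needed. Illustrative only, not the thesis
    algorithm.
    """

    def __init__(self, alpha=0.05, k=3.0):
        self.alpha = alpha  # smoothing factor for the weighted statistics
        self.k = k          # band half-width in standard deviations
        self.mean = None
        self.var = 0.0

    def update(self, x):
        """Consume one measurement; return True if it falls outside the band."""
        if self.mean is None:  # first sample just initializes the state
            self.mean = x
            return False
        diff = x - self.mean
        band = self.k * (self.var ** 0.5)
        is_anomaly = abs(diff) > band and self.var > 0
        # incremental exponentially weighted updates
        incr = self.alpha * diff
        self.mean += incr
        self.var = (1 - self.alpha) * (self.var + diff * incr)
        return is_anomaly
```

The `mean ± k * std` band plays the role of the lighter blue confidence-interval lines mentioned above: a point outside the band is flagged, and the statistics then absorb the new point so the band adapts over time.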
» The bad
During my studies, I messed up my Python installation and only the macOS built-in Python 2 still worked ¯\_(ツ)_/¯
As the topic turned out to be bigger than expected, the chapter about the production use case (e.g. using InfluxDB and Kapacitor) was neglected.
» Technologies used
Python, TensorFlow, Keras, Docker, InfluxDB, Kapacitor