Explaining Anomalies Detected by Autoencoders Using SHAP

Liat Antwarg, Ronnie Mindlin Miller, Bracha Shapira, Lior Rokach

Research output: Working paper › Preprint

Abstract

Anomaly detection algorithms are often considered limited because they do not facilitate the process of result validation by domain experts. In contrast, deep learning algorithms for anomaly detection, such as autoencoders, point out the outliers, saving experts the time-consuming task of examining normal cases in order to find anomalies. Most outlier detection algorithms output a score for each instance in the database. The top-k most intense outliers are returned to the user for further inspection; however, manual validation of the results becomes challenging without additional clues. An explanation of why an instance is anomalous enables experts to focus their investigation on the most important anomalies and may increase their trust in the algorithm. Recently, a game theory-based framework known as SHapley Additive exPlanations (SHAP) was shown to be effective in explaining various supervised learning models. In this research, we extend SHAP to explain anomalies detected by an autoencoder, an unsupervised model. The proposed method extracts and visually depicts both the features that contributed most to the anomaly and those that offset it. A preliminary experimental study using real-world data demonstrates the usefulness of the proposed method in helping domain experts understand anomalies and filter out uninteresting ones, with the aim of minimizing the false positive rate of detected anomalies.
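The abstract describes attributing an autoencoder's anomaly decisions to input features via SHAP. The following is a minimal illustrative sketch of that general idea, not the authors' exact algorithm: it trains a simple reconstruction model on synthetic data, scores instances by reconstruction error, and applies Kernel SHAP to that scalar score so that features with positive SHAP values ("contributing") can be separated from those with negative values ("offsetting"). The dataset, model choice, and the use of a single scalar score are assumptions made only for illustration.

```python
# Illustrative sketch only: not the authors' exact method.
# Assumes synthetic data and a scalar reconstruction-error score.
import numpy as np
import shap
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))                 # synthetic "normal" training data
X = StandardScaler().fit_transform(X)

# A small MLP trained to reconstruct its own input serves as a simple autoencoder.
autoencoder = MLPRegressor(hidden_layer_sizes=(3,), max_iter=2000, random_state=0)
autoencoder.fit(X, X)

def reconstruction_error(data):
    """Per-instance anomaly score: mean squared reconstruction error."""
    recon = autoencoder.predict(data)
    return np.mean((data - recon) ** 2, axis=1)

# Inject an anomaly by perturbing one feature of a normal instance.
x_anomaly = X[0].copy()
x_anomaly[2] += 5.0
print("anomaly score:", reconstruction_error(x_anomaly.reshape(1, -1))[0])

# Explain the anomaly score with Kernel SHAP: which input features push the
# reconstruction error up (contributing) and which pull it down (offsetting)?
background = shap.sample(X, 50)               # background sample for KernelExplainer
explainer = shap.KernelExplainer(reconstruction_error, background)
shap_values = explainer.shap_values(x_anomaly.reshape(1, -1))[0]

# Rank features by SHAP value: positive values raise the score, negative offset it.
for i in np.argsort(shap_values)[::-1]:
    print(f"feature {i}: shap = {shap_values[i]:+.3f}")
```

In the setting described by the abstract, such an explanation would accompany each of the top-k anomalies returned to the expert, so that investigation can focus on the features driving the anomaly score.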
Original language: American English
State: Published - 6 Mar 2019

Keywords

  • cs.GT
  • cs.LG
  • stat.ML
