A Comparative Analysis of SHAP, LIME, ANCHORS, and DICE for Interpreting a Dense Neural Network in Credit Card Fraud Detection

Bujar Raufi, Ciaran Finnegan, Luca Longo

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Financial institutions rely heavily on advanced Machine Learning algorithms to screen transactions, yet they face increasing pressure from regulators and the public to ensure AI accountability and transparency, particularly in credit card fraud detection. While ML technology has proven effective at detecting fraudulent activity, the opacity of Artificial Neural Networks (ANNs) can make their decisions difficult to explain. This has prompted a recent push for more explainable fraud prevention tools. Although vendors claim improved detection rates, the integration of explanation data is still at an early stage. Data scientists recognize the potential of Explainable AI (XAI) techniques in fraud prevention, but comparative research on their effectiveness is lacking. This paper aims to advance comparative research on credit card fraud detection by statistically evaluating established XAI methods. The goal is to explain and validate the black-box fraud detection machine learning model, where the baseline model used for explanation is an ANN trained on a large dataset of 25,128 instances. Four explainability methods (SHAP, LIME, ANCHORS, and DiCE) are applied, and the same test set is used to generate explanations across all four methods. Analysis with the Friedman test indicates statistical significance for the SHAP, ANCHORS, and DiCE results, validated against interpretability and reliability properties of explanations such as identity, stability, separability, similarity, and computational complexity. The results indicate that the SHAP, LIME, and ANCHORS methods exhibit better model interpretability in terms of stability, separability, and similarity.
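The abstract describes comparing per-instance explanation-quality scores across XAI methods with the Friedman test. A minimal sketch of that statistical step, using synthetic illustrative scores (the method names, score values, and the chosen metric are assumptions for illustration only, not the authors' data):

```python
# Hypothetical sketch: comparing explanation-quality scores from several XAI
# methods with the non-parametric Friedman test, as the abstract describes.
# All scores below are synthetic and purely illustrative.
import numpy as np
from scipy.stats import friedmanchisquare

rng = np.random.default_rng(0)
n_instances = 30  # one quality score per test instance, per method

# Illustrative per-instance stability scores for three methods.
shap_scores = rng.normal(0.80, 0.05, n_instances)
anchors_scores = rng.normal(0.75, 0.05, n_instances)
dice_scores = rng.normal(0.60, 0.05, n_instances)

# Friedman test: repeated-measures comparison of the same instances
# (blocks) across the methods (treatments), based on within-block ranks.
stat, p_value = friedmanchisquare(shap_scores, anchors_scores, dice_scores)
print(f"Friedman chi-square = {stat:.2f}, p = {p_value:.4g}")
```

A small p-value here would indicate that at least one method's scores differ systematically from the others, which would then motivate per-property comparisons such as those reported in the paper.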

Original language: English
Title of host publication: Explainable Artificial Intelligence - Second World Conference, xAI 2024, Proceedings
Editors: Luca Longo, Sebastian Lapuschkin, Christin Seifert
Publisher: Springer Science and Business Media Deutschland GmbH
Pages: 365-383
Number of pages: 19
ISBN (Print): 9783031638022
DOIs
Publication status: Published - 2024
Event: 2nd World Conference on Explainable Artificial Intelligence, xAI 2024 - Valletta, Malta
Duration: 17 Jul 2024 - 19 Jul 2024

Publication series

Name: Communications in Computer and Information Science
Volume: 2156 CCIS
ISSN (Print): 1865-0929
ISSN (Electronic): 1865-0937

Conference

Conference: 2nd World Conference on Explainable Artificial Intelligence, xAI 2024
Country/Territory: Malta
City: Valletta
Period: 17/07/24 - 19/07/24

Keywords

  • ANCHORS
  • Credit Card Fraud Detection
  • Diverse Counterfactual Explanations
  • Explainable Artificial Intelligence
  • Interpretability
  • Local Interpretable Model-agnostic Explanation
  • methods comparison
  • SHapley Additive exPlanations (SHAP)
