A comparative analysis of rule-based, model-agnostic methods for explainable artificial intelligence

Research output: Contribution to journal › Conference article › peer-review

8 Citations (Scopus)

Abstract

The ultimate goal of Explainable Artificial Intelligence is to build models that possess both high accuracy and a high degree of explainability. Understanding the inferences of such models can be seen as a process that discloses the relationships between their inputs and outputs. These relationships can be represented as a set of inference rules, which are usually not explicit within a model. Scholars have proposed several methods for extracting rules from data-driven machine-learned models; however, limited work exists on comparing them. This study proposes a novel comparative approach that evaluates and compares the rulesets produced by four post-hoc rule extractors using six quantitative metrics. Findings demonstrate that these metrics can help identify methods that are superior to others and are thus capable of successfully modelling distinct aspects of explainability.
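The page does not enumerate the paper's six metrics, but the idea of scoring an extracted ruleset quantitatively can be illustrated with a minimal sketch. The snippet below computes two generic ruleset metrics, coverage and average rule length, over a toy dataset; the `Rule` class, the metric choices, and all names are illustrative assumptions, not the metrics or implementation used in the paper.

```python
# Hypothetical sketch of quantitative ruleset metrics (NOT the paper's
# actual six metrics): coverage and average rule length.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Rule:
    # A rule is a conjunction of feature conditions plus a predicted label.
    conditions: List[Callable[[dict], bool]]
    prediction: int

    def covers(self, instance: dict) -> bool:
        # The rule fires only when every condition holds for the instance.
        return all(cond(instance) for cond in self.conditions)


def coverage(ruleset: List[Rule], data: List[dict]) -> float:
    """Fraction of instances matched by at least one rule."""
    covered = sum(any(r.covers(x) for r in ruleset) for x in data)
    return covered / len(data)


def avg_rule_length(ruleset: List[Rule]) -> float:
    """Mean number of conditions per rule (a common proxy for simplicity)."""
    return sum(len(r.conditions) for r in ruleset) / len(ruleset)


# Toy usage with a hypothetical two-rule ruleset.
ruleset = [
    Rule([lambda x: x["age"] > 30, lambda x: x["income"] < 50_000], prediction=1),
    Rule([lambda x: x["age"] <= 30], prediction=0),
]
data = [{"age": 25, "income": 40_000}, {"age": 45, "income": 30_000}]
print(coverage(ruleset, data))    # 1.0  (both instances are covered)
print(avg_rule_length(ruleset))   # 1.5  (rules have 2 and 1 conditions)
```

A real comparison along the paper's lines would compute such scores for the rulesets produced by each post-hoc extractor on the same model and data, then rank the extractors per metric.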

Original language: English
Pages (from-to): 85-96
Number of pages: 12
Journal: CEUR Workshop Proceedings
Volume: 2771
Publication status: Published - 2020
Event: 28th Irish Conference on Artificial Intelligence and Cognitive Science, AICS 2020 - Dublin, Ireland
Duration: 7 Dec 2020 - 8 Dec 2020

Keywords

  • Explainable artificial intelligence
  • Method comparison and evaluation
  • Rule extraction
