Abstract
The ultimate goal of Explainable Artificial Intelligence is to build models that possess both high accuracy and a high degree of explainability. Understanding the inferences of such models can be seen as a process that discloses the relationships between their input and output. These relationships can be represented as a set of inference rules that are usually not explicit within a model. Scholars have proposed several methods for extracting rules from data-driven machine-learned models; however, limited work exists on their comparison. This study proposes a novel comparative approach to evaluate and compare the rulesets produced by four post-hoc rule extractors by employing six quantitative metrics. Findings demonstrate that these metrics can indeed help identify methods that are superior to others, and are thus capable of successfully modelling distinct aspects of explainability.
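The abstract does not enumerate the six metrics, so the following is only a minimal sketch of how quantitative measures over an extracted ruleset might be computed. The `Rule` representation and the metric names (`n_rules`, `avg_rule_length`, `coverage`, `fidelity`) are illustrative assumptions for this sketch, not the paper's actual metric suite.

```python
# Illustrative sketch only: the statistics below are generic examples of
# quantitative ruleset measures, not the six metrics used in the paper.
from dataclasses import dataclass
from typing import Callable, List

import numpy as np


@dataclass
class Rule:
    """A conjunctive rule: all conditions must hold for the rule to fire."""
    conditions: List[Callable[[np.ndarray], np.ndarray]]  # each maps X -> bool mask
    prediction: int

    def covers(self, X: np.ndarray) -> np.ndarray:
        mask = np.ones(len(X), dtype=bool)
        for cond in self.conditions:
            mask &= cond(X)
        return mask


def ruleset_metrics(rules: List[Rule], X: np.ndarray, y_model: np.ndarray) -> dict:
    """Score a ruleset against the predictions of the model it was extracted from."""
    covered = np.zeros(len(X), dtype=bool)
    correct = 0
    for rule in rules:
        mask = rule.covers(X)
        covered |= mask
        correct += int(np.sum(y_model[mask] == rule.prediction))
    return {
        "n_rules": len(rules),  # syntactic size of the ruleset
        "avg_rule_length": float(np.mean([len(r.conditions) for r in rules])),
        "coverage": float(np.mean(covered)),  # fraction of inputs any rule fires on
        "fidelity": correct / max(int(np.sum(covered)), 1),  # agreement with the model
    }


# Example: two hypothetical rules extracted from a black-box binary classifier.
X = np.random.default_rng(0).normal(size=(100, 2))
y_model = (X[:, 0] > 0).astype(int)  # stand-in for the model's predictions
rules = [
    Rule(conditions=[lambda X: X[:, 0] > 0], prediction=1),
    Rule(conditions=[lambda X: X[:, 0] <= 0], prediction=0),
]
print(ruleset_metrics(rules, X, y_model))
```

Measures of this kind capture complementary aspects of explainability: size and rule length proxy for syntactic simplicity, while coverage and fidelity proxy for how faithfully the ruleset mimics the underlying model.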
| Original language | English |
| --- | --- |
| Pages (from-to) | 85-96 |
| Number of pages | 12 |
| Journal | CEUR Workshop Proceedings |
| Volume | 2771 |
| DOIs | |
| Publication status | Published - 2020 |
| Event | 28th Irish Conference on Artificial Intelligence and Cognitive Science, AICS 2020 - Dublin, Ireland. Duration: 7 Dec 2020 → 8 Dec 2020 |
Keywords
- Explainable artificial intelligence
- Method comparison and evaluation
- Rule extraction