Evaluating Argumentation Graphs as Global Explainable Surrogate Models for Dense Neural Networks and Their Comparison with Decision Trees

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Rule-based methods are often used to learn surrogates of black-box models within Explainable Artificial Intelligence. Decision trees, among others, are routinely used for such purposes and are inherently explainable. Unfortunately, they can become convoluted in large-scale scenarios, growing large with many branches, which hampers this inherent property. They also fail to model contrastive information and conflictuality among rules. This research proposes a novel method based on computational argumentation that aims to solve these shortcomings of decision trees. In particular, it proposes a mechanism for automatically extracting rules, the arguments, from trained dense neural networks. It then describes a procedure for automatically extracting their conflicts using the notion of attacks. Arguments and attacks are integrated into argumentation frameworks, which are directed graphs that can be used as surrogate models for explaining black boxes. The dialectical status of the arguments in such graphs can be evaluated with formal semantics and then aggregated toward a rational outcome corresponding to the target classes of the black-box models. Such graphs are empirically evaluated against eight objective metrics, namely completeness, correctness, fidelity, robustness, number of rules, average rule length, fraction of classes and fraction overlap. They are also compared with the corresponding surrogate decision trees. Findings show that argumentation graphs are highly comparable to decision trees regarding explainability across the selected objective metrics. However, argumentation graphs are potentially more appealing, as they offer richer justifications and explanations by modelling the conflictuality among rules.
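To make the abstract's core structure concrete, the sketch below shows a Dung-style abstract argumentation framework: a set of arguments (here standing in for rules extracted from a classifier), a set of attacks between them, and a simple fixed-point computation of the grounded extension, one of the formal semantics the abstract alludes to. The argument names and attack relation are invented for illustration; they are not taken from the paper.

```python
# Minimal sketch of a Dung-style argumentation framework (AF) and its
# grounded extension. Arguments r1..r3 and the attacks between them are
# hypothetical placeholders for rules extracted from a black-box model.

def grounded_extension(arguments, attacks):
    """Return the grounded extension of the AF (arguments, attacks).

    Iteratively accept every argument whose attackers are all defeated,
    then defeat every argument attacked by an accepted one, until a
    fixed point is reached.
    """
    attackers = {a: {s for (s, t) in attacks if t == a} for a in arguments}
    accepted, defeated = set(), set()
    changed = True
    while changed:
        changed = False
        for a in arguments:
            if a in accepted or a in defeated:
                continue
            # Accept a if every attacker of a has already been defeated
            # (unattacked arguments are accepted immediately).
            if attackers[a] <= defeated:
                accepted.add(a)
                changed = True
        # Any argument attacked by an accepted argument is defeated.
        for (s, t) in attacks:
            if s in accepted and t not in defeated:
                defeated.add(t)
                changed = True
    return accepted

# Toy AF: rule r1 attacks r2, and r2 attacks r3.
args = {"r1", "r2", "r3"}
atts = {("r1", "r2"), ("r2", "r3")}
print(sorted(grounded_extension(args, atts)))  # ['r1', 'r3']
```

Here r1 is unattacked and therefore accepted; it defeats r2, which reinstates r3. Aggregating the accepted arguments toward a class label, as the abstract describes, would then operate on this extension rather than on the raw rule set.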

Original language: English
Title of host publication: Explainable Artificial Intelligence - 3rd World Conference, xAI 2025, Proceedings
Editors: Riccardo Guidotti, Ute Schmid, Luca Longo
Publisher: Springer Science and Business Media Deutschland GmbH
Pages: 89-112
Number of pages: 24
ISBN (Print): 9783032083326
DOIs
Publication status: Published - 2026
Event: 3rd World Conference on Explainable Artificial Intelligence, xAI 2025 - Istanbul, Turkey
Duration: 9 Jul 2025 – 11 Jul 2025

Publication series

Name: Communications in Computer and Information Science
Volume: 2580 CCIS
ISSN (Print): 1865-0929
ISSN (Electronic): 1865-0937

Conference

Conference: 3rd World Conference on Explainable Artificial Intelligence, xAI 2025
Country/Territory: Turkey
City: Istanbul
Period: 9/07/25 – 11/07/25

Keywords

  • Computational Argumentation
  • Decision-trees
  • Deep learning
  • Dense Neural Networks
  • Explainable AI
  • Rule-based systems
  • Surrogate models
