TY - GEN
T1 - Evaluating Argumentation Graphs as Global Explainable Surrogate Models for Dense Neural Networks and Their Comparison with Decision Trees
AU - Vilone, Giulia
AU - Longo, Luca
N1 - Publisher Copyright:
© The Author(s) 2026.
PY - 2026
Y1 - 2026
N2 - Rule-based methods are often used to learn surrogates of black-box models within Explainable Artificial Intelligence. Decision trees, among others, are routinely used for such purposes and are inherently more explainable. Unfortunately, they can become convoluted in large-scale scenarios, growing large with many branches and thus hampering this inherent property. They also fail to model contrastive information and conflicts among rules. This research proposes a novel method based on computational argumentation that aims to overcome these shortcomings of decision trees. In particular, it proposes a mechanism for automatically extracting rules, in the form of arguments, from trained dense neural networks. It then describes a procedure for automatically extracting their conflicts using the notion of attacks. Arguments and attacks are integrated into argumentation frameworks, directed graphs that can be used as surrogate models for explaining black boxes. The dialectical status of the arguments in such graphs can be evaluated with formal semantics and then aggregated toward a rational outcome corresponding to the target classes of the black-box models. Such graphs are empirically evaluated against eight objective metrics, including completeness, correctness, fidelity, robustness, number of rules, average rule length, fraction of classes and fraction overlap. They are also compared with the corresponding surrogate decision trees. Findings show that argumentation graphs are highly comparable to decision trees regarding explainability across the selected objective metrics. However, they are potentially more appealing, given that argumentation graphs offer richer justifications and explanations by modelling the conflictuality among rules.
AB - Rule-based methods are often used to learn surrogates of black-box models within Explainable Artificial Intelligence. Decision trees, among others, are routinely used for such purposes and are inherently more explainable. Unfortunately, they can become convoluted in large-scale scenarios, growing large with many branches and thus hampering this inherent property. They also fail to model contrastive information and conflicts among rules. This research proposes a novel method based on computational argumentation that aims to overcome these shortcomings of decision trees. In particular, it proposes a mechanism for automatically extracting rules, in the form of arguments, from trained dense neural networks. It then describes a procedure for automatically extracting their conflicts using the notion of attacks. Arguments and attacks are integrated into argumentation frameworks, directed graphs that can be used as surrogate models for explaining black boxes. The dialectical status of the arguments in such graphs can be evaluated with formal semantics and then aggregated toward a rational outcome corresponding to the target classes of the black-box models. Such graphs are empirically evaluated against eight objective metrics, including completeness, correctness, fidelity, robustness, number of rules, average rule length, fraction of classes and fraction overlap. They are also compared with the corresponding surrogate decision trees. Findings show that argumentation graphs are highly comparable to decision trees regarding explainability across the selected objective metrics. However, they are potentially more appealing, given that argumentation graphs offer richer justifications and explanations by modelling the conflictuality among rules.
KW - Computational Argumentation
KW - Decision trees
KW - Deep learning
KW - Dense Neural Networks
KW - Explainable AI
KW - Rule-based systems
KW - Surrogate models
UR - https://www.scopus.com/pages/publications/105020749853
U2 - 10.1007/978-3-032-08333-3_5
DO - 10.1007/978-3-032-08333-3_5
M3 - Conference contribution
AN - SCOPUS:105020749853
SN - 9783032083326
T3 - Communications in Computer and Information Science
SP - 89
EP - 112
BT - Explainable Artificial Intelligence - 3rd World Conference, xAI 2025, Proceedings
A2 - Guidotti, Riccardo
A2 - Schmid, Ute
A2 - Longo, Luca
PB - Springer Science and Business Media Deutschland GmbH
T2 - 3rd World Conference on Explainable Artificial Intelligence, xAI 2025
Y2 - 9 July 2025 through 11 July 2025
ER -