Abstract
Explainable Artificial Intelligence (XAI) aims to train data-driven, machine learning (ML) models that possess both high predictive accuracy and a high degree of explainability for humans. Comprehending and explaining the inferences of a model can be seen as a defeasible reasoning process, which is expected to be non-monotonic: a conclusion, linked to a set of premises, can be withdrawn when new information becomes available. Computational argumentation, a paradigm within Artificial Intelligence (AI), focuses on modeling defeasible reasoning. This study explored a new way to automatically form an argument-based representation of the inference process of a data-driven ML model, enhancing its explainability by employing principles and techniques from computational argumentation, including weighted attacks in the argumentation process. An experiment was conducted on five datasets to test, in an objective manner, whether the explanations of the proposed XAI method are more comprehensible than decision trees, which are considered naturally transparent. Findings demonstrate that the argument-based method can usually represent the logic of the model with fewer rules than a decision tree, but further work is required to achieve comparable performance on other characteristics, such as fidelity to the model.
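The abstract refers to weighted attacks and to the non-monotonic withdrawal of conclusions. The sketch below is a minimal, hypothetical illustration of those two ideas only, not the paper's actual method: the argument names, strengths, attack weights, and the acceptance rule (keep an argument when its strength is at least the summed weight of incoming attacks) are assumptions made for illustration, and the acceptance status of attackers is ignored for brevity.

```python
# Minimal, hypothetical sketch of weighted attacks and non-monotonic
# withdrawal of conclusions. Strengths, weights, and the acceptance rule
# are illustrative assumptions, not the procedure proposed in the paper.

def accepted(strengths, attacks):
    """Keep an argument when its strength is at least the summed weight of
    attacks against it (attacker acceptance is ignored for brevity)."""
    kept = set()
    for arg, strength in strengths.items():
        incoming = sum(w for (src, tgt), w in attacks.items() if tgt == arg)
        if incoming <= strength:
            kept.add(arg)
    return kept

strengths = {"a": 1.0, "b": 0.6, "c": 0.4}  # hypothetical arguments
attacks = {("b", "a"): 0.5}                 # too weak to defeat "a"
print(accepted(strengths, attacks))         # {'a', 'b', 'c'}

attacks[("c", "a")] = 0.7                   # new information arrives
print(accepted(strengths, attacks))         # "a" is withdrawn: {'b', 'c'}
```

Adding the second attack withdraws a previously accepted argument, which is the non-monotonic behaviour the abstract describes.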
| Original language | English |
|---|---|
| Journal | CEUR Workshop Proceedings |
| Volume | 3209 |
| Publication status | Published - 2022 |
| Event | 1st International Workshop on Argumentation for eXplainable AI, ArgXAI 2022 - Cardiff, United Kingdom. Duration: 12 Sep 2022 → … |
Keywords
- Argumentation
- Explainable artificial intelligence
- Method evaluation
- Metrics of explainability
- Non-monotonic reasoning