TY - GEN
T1 - A Novel Human-Centred Evaluation Approach and an Argument-Based Method for Explainable Artificial Intelligence
AU - Vilone, Giulia
AU - Longo, Luca
N1 - Publisher Copyright:
© 2022, IFIP International Federation for Information Processing.
PY - 2022
Y1 - 2022
N2 - One of the aims of Explainable Artificial Intelligence (XAI) is to equip data-driven, machine-learned models with a high degree of explainability for humans. Understanding and explaining the inferences of a model can be seen as a defeasible reasoning process. This process is likely to be non-monotonic: a conclusion, linked to a set of premises, can be retracted when new information becomes available. In formal logic, computational argumentation is a method, within Artificial Intelligence (AI), focused on modelling defeasible reasoning. This research study focuses on the automatic formation of an argument-based representation for a machine-learned model in order to enhance its degree of explainability, by employing principles and techniques from computational argumentation. It also contributes to the body of knowledge by introducing a novel quantitative human-centred technique to evaluate such a representation, and potentially other XAI methods, in the form of a questionnaire for explainability. An experiment was conducted with two groups of human participants, one interacting with the argument-based representation and the other with a decision tree, a representation deemed naturally transparent and comprehensible. Findings demonstrate that the explainability of the argument-based representation is statistically similar to that associated with the decision tree, as reported by humans via the novel questionnaire.
AB - One of the aims of Explainable Artificial Intelligence (XAI) is to equip data-driven, machine-learned models with a high degree of explainability for humans. Understanding and explaining the inferences of a model can be seen as a defeasible reasoning process. This process is likely to be non-monotonic: a conclusion, linked to a set of premises, can be retracted when new information becomes available. In formal logic, computational argumentation is a method, within Artificial Intelligence (AI), focused on modelling defeasible reasoning. This research study focuses on the automatic formation of an argument-based representation for a machine-learned model in order to enhance its degree of explainability, by employing principles and techniques from computational argumentation. It also contributes to the body of knowledge by introducing a novel quantitative human-centred technique to evaluate such a representation, and potentially other XAI methods, in the form of a questionnaire for explainability. An experiment was conducted with two groups of human participants, one interacting with the argument-based representation and the other with a decision tree, a representation deemed naturally transparent and comprehensible. Findings demonstrate that the explainability of the argument-based representation is statistically similar to that associated with the decision tree, as reported by humans via the novel questionnaire.
KW - Argumentation
KW - Explainability
KW - Explainable Artificial Intelligence
KW - Human-centred evaluation
KW - Non-monotonic reasoning
UR - http://www.scopus.com/inward/record.url?scp=85133268212&partnerID=8YFLogxK
U2 - 10.1007/978-3-031-08333-4_36
DO - 10.1007/978-3-031-08333-4_36
M3 - Conference contribution
AN - SCOPUS:85133268212
SN - 9783031083327
T3 - IFIP Advances in Information and Communication Technology
SP - 447
EP - 460
BT - Artificial Intelligence Applications and Innovations - 18th IFIP WG 12.5 International Conference, AIAI 2022, Proceedings
A2 - Maglogiannis, Ilias
A2 - Iliadis, Lazaros
A2 - Macintyre, John
A2 - Cortez, Paulo
PB - Springer Science and Business Media Deutschland GmbH
T2 - 18th IFIP WG 12.5 International Conference on Artificial Intelligence Applications and Innovations, AIAI 2022
Y2 - 17 June 2022 through 20 June 2022
ER -