Abstract
Explaining the logic of a data-driven Machine Learning (ML) model can be seen as a defeasible and likely non-monotonic reasoning process: a conclusion drawn from a set of premises can be withdrawn when new information becomes available. Argumentation Theory (AT) formalises reasoning over a defeasible knowledge base. Abstract Argumentation Frameworks (AAFs) organise conflicting arguments in a dialogical structure, allowing formal semantics to resolve the conflicts. This study proposes an XAI method for automatically constructing an AAF-based representation, using weighted attacks to model conflicting information. The notion of an inconsistency budget is employed to eliminate the weakest attacks. Findings showed that varying the inconsistency budget could affect, albeit to a limited extent, the evaluation metrics computed over the resulting rulesets.
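To illustrate the inconsistency-budget idea, here is a minimal Python sketch. The greedy weakest-first pruning strategy and all names (`prune_attacks`, the example attack weights) are illustrative assumptions rather than the paper's exact procedure: attacks are discarded from weakest to strongest as long as their cumulative weight stays within the budget.

```python
# Hypothetical sketch: pruning a weighted AAF with an inconsistency budget.
from typing import Dict, Set, Tuple

Attack = Tuple[str, str]  # (attacker, attacked)

def prune_attacks(attacks: Dict[Attack, float], budget: float) -> Set[Attack]:
    """Drop the weakest attacks whose total weight stays within `budget`."""
    spent = 0.0
    kept = set(attacks)
    # Consider attacks from weakest to strongest.
    for attack, weight in sorted(attacks.items(), key=lambda kv: kv[1]):
        if spent + weight > budget:
            break  # budget exhausted; all remaining attacks are kept
        spent += weight
        kept.discard(attack)
    return kept

# Example: with budget 0.5 only the weakest attack ('a', 'b') is removed,
# so ('b', 'c') and ('c', 'a') survive.
weighted_attacks = {("a", "b"): 0.3, ("b", "c"): 0.7, ("c", "a"): 0.9}
print(prune_attacks(weighted_attacks, budget=0.5))
```

Formal semantics would then be applied to the pruned framework; a larger budget removes more of the weakest attacks, which is what drives the (limited) variation in the evaluation metrics reported in the abstract.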
| Original language | English |
| --- | --- |
| Pages (from-to) | 53-58 |
| Number of pages | 6 |
| Journal | CEUR Workshop Proceedings |
| Volume | 3554 |
| Publication status | Published - 2023 |
| Event | Joint 1st World Conference on eXplainable Artificial Intelligence: Late-Breaking Work, Demos and Doctoral Consortium, xAI-2023: LB-D-DC, Lisbon, Portugal, 26-28 Jul 2023 |
Keywords
- Argumentation
- Automatic attack extraction
- Explainable artificial intelligence
- Inconsistency budget
- Non-monotonic reasoning
- Weighted argumentation frameworks