A Novel Integration of Data-Driven Rule Generation and Computational Argumentation for Enhanced Explainable AI

Lucas Rizzo, Damiano Verda, Serena Berretta, Luca Longo

Research output: Contribution to journal › Article › peer-review

Abstract

Explainable Artificial Intelligence (XAI) is a research area that clarifies AI decision-making processes to build user trust and promote responsible AI. A key scientific challenge in XAI is therefore the development of methods that generate transparent and interpretable explanations while maintaining scalability and effectiveness in complex scenarios. Rule-based methods in XAI generate rules that can potentially explain AI inferences, yet they can also become convoluted in large scenarios, hindering their readability and scalability. Moreover, they often lack contrastive explanations, leaving users uncertain about why specific predictions are preferred over alternatives. To address this scientific problem, we explore the integration of computational argumentation—a sub-field of AI that models reasoning processes through defeasibility—into rule-based XAI systems. Computational argumentation enables arguments modelled from rules to be retracted in the light of new evidence, making it a promising approach for enhancing rule-based methods and creating more explainable AI systems. Nonetheless, despite the appealing properties of rule-based systems and computational argumentation, research on their integration remains limited. Therefore, this study also addresses the applied challenge of implementing such an integration within practical AI tools. The study employs the Logic Learning Machine (LLM), a specific rule-extraction technique, and presents a modular design that integrates input rules into a structured argumentation framework using state-of-the-art computational argumentation methods. Experiments conducted on binary classification problems using various datasets from the UCI Machine Learning Repository demonstrate the effectiveness of this integration. The LLM technique excelled at producing a manageable number of if-then rules with few premises while maintaining high inferential capacity across all datasets.
In turn, argument-based models achieved comparable results to those derived directly from if-then rules, leveraging a concise set of rules and excelling in explainability. In summary, this paper introduces a novel approach for efficiently and automatically generating arguments and their interactions from data, addressing both scientific and applied challenges in advancing the application and deployment of argumentation systems in XAI.
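The abstract describes turning if-then rules into arguments whose conclusions can attack one another, with acceptability decided by argumentation semantics. A minimal sketch of that idea (not the authors' implementation; the example rules, the negation-based attack criterion, and the use of grounded semantics are illustrative assumptions) is:

```python
# Minimal sketch of rule-to-argument translation, assuming a Dung-style
# abstract argumentation framework with grounded semantics. The rule names,
# feature names, and attack criterion below are illustrative, not the
# authors' actual design.

# Each rule: (name, set of premises, conclusion).
# A conclusion "x" conflicts with "not x".
rules = [
    ("r1", {"high_glucose"}, "diabetic"),
    ("r2", {"normal_bmi"}, "not diabetic"),
    ("r3", {"family_history"}, "diabetic"),
]

def conflicts(c1, c2):
    """Two conclusions conflict if one negates the other."""
    return c1 == f"not {c2}" or c2 == f"not {c1}"

# An argument is a rule whose premises all hold in the observed case.
case = {"high_glucose", "normal_bmi"}
arguments = [r for r in rules if r[1] <= case]

# Attack relation: arguments with conflicting conclusions attack each other.
attacks = {(a[0], b[0]) for a in arguments for b in arguments
           if conflicts(a[2], b[2])}

def grounded_extension(args, attacks):
    """Sceptical (grounded) semantics: iteratively accept every argument
    all of whose attackers have already been defeated, and defeat every
    argument attacked by an accepted one, until a fixed point."""
    names = {a[0] for a in args}
    accepted, defeated = set(), set()
    changed = True
    while changed:
        changed = False
        for n in names - accepted - defeated:
            attackers = {x for (x, y) in attacks if y == n}
            if attackers <= defeated:   # all attackers already out
                accepted.add(n)
                changed = True
        for n in names - accepted - defeated:
            if any(x in accepted for (x, y) in attacks if y == n):
                defeated.add(n)
                changed = True
    return accepted

# r1 and r2 fire on this case and attack each other; under grounded
# semantics the mutual attack leaves both undecided, so neither conclusion
# is sceptically accepted — exactly the kind of conflict a contrastive
# explanation can surface to the user.
print(grounded_extension(arguments, attacks))
```

The mutual attack producing an empty grounded extension illustrates the defeasibility the abstract refers to: an argument built from a rule can be retracted (here, left unaccepted) once a counter-argument from another rule applies to the same case.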

Original language: English
Pages (from-to): 2049-2073
Number of pages: 25
Journal: Machine Learning and Knowledge Extraction
Volume: 6
Issue number: 3
DOIs
Publication status: Published - Sep 2024

Keywords

  • computational argumentation
  • defeasible reasoning
  • explainable artificial intelligence
  • rule-based AI

