Exploring the potential of defeasible argumentation for quantitative inferences in real-world contexts: An assessment of computational trust

Research output: Contribution to journal › Conference article › peer-review

Abstract

Argumentation has recently shown appealing properties for inference under uncertainty and conflicting knowledge. However, few studies have examined its capacity to exploit real-world knowledge bases for performing quantitative, case-by-case inferences. This study analyses the inferential capacity of a set of argument-based models, designed by a human reasoner, for the problem of trust assessment. Specifically, these models are applied to data from Wikipedia with the aim of inferring the trustworthiness of its editors. A comparison against non-deductive approaches revealed that these models were superior according to the trust values they inferred for recognised trustworthy editors. This research contributes to the field of argumentation by employing a replicable, modular design suitable for modelling reasoning under uncertainty across distinct real-world domains.

Original language: English
Pages (from-to): 37-48
Number of pages: 12
Journal: CEUR Workshop Proceedings
Volume: 2771
DOIs
Publication status: Published - 2020
Event: 28th Irish Conference on Artificial Intelligence and Cognitive Science, AICS 2020 - Dublin, Ireland
Duration: 7 Dec 2020 - 8 Dec 2020

Keywords

  • Argumentation Theory
  • Computational Trust
  • Defeasible Argumentation
  • Explainable Artificial Intelligence
  • Non-monotonic Reasoning

