Visual Salience and Reference Resolution in Situated Dialogues: A Corpus-based Evaluation.

Niels Schutte

Research output: Contribution to conference › Paper › peer-review

Abstract

Dialogues between humans and robots are necessarily situated, so a shared visual context is often present. Exophoric references are very frequent in situated dialogues and are particularly important in the presence of a shared visual context, for example when a human is verbally guiding a tele-operated mobile robot. We present an approach to automatically resolving exophoric referring expressions in a situated dialogue based on the visual salience of possible referents. We evaluate the effectiveness of this approach and a range of different salience metrics using data from the SCARE corpus, which we have augmented with visual information. The results of our evaluation show that our computationally lightweight approach is successful, and so promising for use in human-robot dialogue systems.
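The abstract does not specify the salience metrics evaluated, but the core idea, ranking candidate referents by a visual salience score and resolving a referring expression to the top-ranked compatible object, can be sketched as follows. The scoring features (visible area and centrality), their weights, and the type-matching step are illustrative assumptions, not the paper's actual method:

```python
# Hypothetical sketch of salience-based exophoric reference resolution.
# Assumptions (not from the paper): salience combines visible area with
# proximity to the view centre, and candidates carry a type label that
# the referring expression's head noun must match.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str      # object identifier, e.g. "button_3"
    obj_type: str  # type label, e.g. "button"
    area: float    # visible area as a fraction of the view, in [0, 1]
    cx: float      # bounding-box centre, normalised view coordinates
    cy: float

def salience(c: Candidate, w_area: float = 0.6, w_centre: float = 0.4) -> float:
    """Score a candidate: larger and more central objects are more salient."""
    # Distance from the view centre (0.5, 0.5), normalised so that an
    # object in a corner scores 1.0; centrality is then 1 - distance.
    dist = ((c.cx - 0.5) ** 2 + (c.cy - 0.5) ** 2) ** 0.5 / (0.5 * 2 ** 0.5)
    return w_area * c.area + w_centre * (1.0 - dist)

def resolve(head_noun: str, visible: list[Candidate]) -> Candidate | None:
    """Resolve e.g. 'that button' to the most salient type-compatible object."""
    matches = [c for c in visible if c.obj_type == head_noun]
    return max(matches, key=salience, default=None)

if __name__ == "__main__":
    scene = [
        Candidate("button_1", "button", area=0.02, cx=0.8, cy=0.2),
        Candidate("button_2", "button", area=0.05, cx=0.5, cy=0.5),
        Candidate("cabinet_1", "cabinet", area=0.30, cx=0.3, cy=0.6),
    ]
    print(resolve("button", scene).name)  # button_2: larger and more central
```

In this framing, each salience metric compared in the evaluation would correspond to a different salience() function, while the resolution step itself remains a simple argmax, consistent with the abstract's description of the approach as computationally lightweight.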
Original language: English
DOIs
Publication status: Published - 2010
Event: AAAI Symposium on Dialog with Robots - Arlington, United States
Duration: 11 Nov 2010 - 13 Nov 2010

Conference

Conference: AAAI Symposium on Dialog with Robots
Country/Territory: United States
City: Arlington
Period: 11/11/10 - 13/11/10

Keywords

  • situated dialogues
  • shared visual context
  • exophoric references
  • tele-operated mobile robot
  • visual salience
  • SCARE corpus
  • human-robot dialogue systems
