Abstract
Dialogues between humans and robots are necessarily situated, so a shared visual context is often present. Exophoric references are very frequent in situated dialogues and are particularly important in the presence of a shared visual context, for example when a human is verbally guiding a tele-operated mobile robot. We present an approach to automatically resolving exophoric referring expressions in situated dialogue based on the visual salience of possible referents. We evaluate the effectiveness of this approach and of a range of different salience metrics using data from the SCARE corpus, which we have augmented with visual information. The results of our evaluation show that our computationally lightweight approach is successful, and thus promising for use in human-robot dialogue systems.
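The abstract does not spell out which salience metrics were evaluated, but the core idea it describes (rank the objects currently in the shared view and resolve a bare exophor such as "that" or "it" to the most salient one) can be sketched. The following Python sketch is a minimal, hypothetical illustration: the `Candidate` class, the area and centrality features, and the weighted combination are illustrative assumptions, not the paper's actual metrics.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    """A visible object that could be the referent of an exophoric expression."""
    name: str          # object label, e.g. "cabinet"
    area: float        # on-screen area as a fraction of the frame (0..1)
    centrality: float  # 1.0 at the view centre, falling to 0.0 at the edge

def salience(c: Candidate, w_area: float = 0.5, w_centre: float = 0.5) -> float:
    """Hypothetical salience metric: a weighted mix of size and centrality.
    The paper evaluates a range of metrics; this combination is only an
    illustrative assumption."""
    return w_area * c.area + w_centre * c.centrality

def resolve_exophor(candidates: list[Candidate]) -> Candidate:
    """Resolve an exophoric reference to the most visually salient
    candidate currently in the shared view."""
    return max(candidates, key=salience)

# Toy usage: the operator says "open that" while two objects are visible.
scene = [
    Candidate("cabinet", area=0.30, centrality=0.9),
    Candidate("button", area=0.05, centrality=0.4),
]
print(resolve_exophor(scene).name)  # -> "cabinet"
```

Under these assumptions, resolution reduces to an argmax over the visible candidates, which is consistent with the abstract's description of the approach as computationally lightweight.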
| Original language | English |
|---|---|
| Publication status | Published - 2010 |
Conference
| Conference | AAAI Symposium on Dialog with Robots |
|---|---|
| Country/Territory | United States |
| City | Arlington |
| Period | 11 Nov 2010 → 13 Nov 2010 |
Keywords
- situated dialogues
- shared visual context
- exophoric references
- tele-operated mobile robot
- visual salience
- SCARE corpus
- human-robot dialogue systems