Information fusion for visual reference resolution in dynamic situated dialogue

Geert-Jan M. Kruijff, John D. Kelleher, Nick Hawes

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Human-Robot Interaction (HRI) invariably involves dialogue about objects in the environment in which the agents are situated. This paper focuses on resolving discourse references to such visual objects. It addresses the problem using strategies for intra-modal fusion (identifying that different occurrences concern the same object) and inter-modal fusion (relating object references across different modalities). Core to these strategies are sensorimotoric coordination and ontology-based mediation between content in different modalities. The approach has been fully implemented, and is illustrated with several working examples.
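The two fusion strategies described in the abstract can be illustrated with a minimal sketch. The class names, the proximity-based merging rule, and the word-to-feature ontology below are all illustrative assumptions, not the paper's actual implementation: intra-modal fusion is approximated by merging visual detections that lie close together in space, and inter-modal fusion by matching a linguistic description against fused visual objects via an ontology that maps words to visual feature values.

```python
from dataclasses import dataclass

@dataclass
class VisualObject:
    """A hypothetical visual detection: an id, a 2-D position, and feature values."""
    obj_id: int
    position: tuple
    features: dict

def intra_modal_fuse(detections, threshold=0.5):
    """Intra-modal fusion (sketch): treat detections within `threshold`
    of an already-fused object as occurrences of the same object,
    merging their features."""
    fused = []
    for det in detections:
        for obj in fused:
            dx = det.position[0] - obj.position[0]
            dy = det.position[1] - obj.position[1]
            if (dx * dx + dy * dy) ** 0.5 < threshold:
                obj.features.update(det.features)  # same object: merge features
                break
        else:
            fused.append(det)
    return fused

# Hypothetical ontology mediating between linguistic words and visual features.
ONTOLOGY = {"red": ("colour", "red"), "ball": ("shape", "ball")}

def resolve_reference(words, objects):
    """Inter-modal fusion (sketch): return the visual objects whose
    features satisfy every ontology-mediated constraint in the utterance."""
    constraints = [ONTOLOGY[w] for w in words if w in ONTOLOGY]
    return [o for o in objects
            if all(o.features.get(k) == v for k, v in constraints)]

# Usage: two nearby detections fuse into one object, which then
# resolves the reference "the red ball".
detections = [
    VisualObject(1, (0.0, 0.0), {"colour": "red"}),
    VisualObject(2, (0.1, 0.0), {"shape": "ball"}),
    VisualObject(3, (5.0, 5.0), {"colour": "blue", "shape": "cube"}),
]
scene = intra_modal_fuse(detections)
referents = resolve_reference(["the", "red", "ball"], scene)
```

Under these assumptions, detections 1 and 2 fuse into a single object whose merged features match both constraints, so the reference resolves to exactly one referent.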

Original language: English
Title of host publication: Perception and Interactive Technologies - International Tutorial and Research Workshop, PIT 2006 Proceedings
Publisher: Springer Verlag
Pages: 117-128
Number of pages: 12
ISBN (Print): 3540347437, 9783540347439
DOIs
Publication status: Published - 2006
Event: International Tutorial and Research Workshop on Perception and Interactive Technologies, PIT 2006 - Kloster Irsee, Germany
Duration: 19 Jun 2006 - 21 Jun 2006

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 4021 LNAI
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: International Tutorial and Research Workshop on Perception and Interactive Technologies, PIT 2006
Country/Territory: Germany
City: Kloster Irsee
Period: 19/06/06 - 21/06/06
