Structural descriptions in human-assisted robot visual learning

Geert Jan M. Kruijff, John D. Kelleher, Gregor Berginc, Aleš Leonardis

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

The paper presents an approach to using structural descriptions, obtained through a human-robot tutoring dialogue, as labels for the visual object models a robot learns. The paper shows how structural descriptions enable relating models for different aspects of one and the same object, and how being able to relate descriptions for visual models and discourse referents enables incremental updating of model descriptions through dialogue (either robot- or human-initiated). The approach has been implemented in an integrated architecture for human-assisted robot visual learning.

Original language: English
Title of host publication: HRI 2006
Subtitle of host publication: Proceedings of the 2006 ACM Conference on Human-Robot Interaction - Toward Human Robot Collaboration
Publisher: Association for Computing Machinery (ACM)
Pages: 343-344
Number of pages: 2
ISBN (Print): 1595932941, 9781595932945
DOIs
Publication status: Published - 2006
Externally published: Yes
Event: HRI 2006: 2006 ACM Conference on Human-Robot Interaction - Salt Lake City, Utah, United States
Duration: 2 Mar 2006 - 4 Mar 2006

Publication series

Name: HRI 2006: Proceedings of the 2006 ACM Conference on Human-Robot Interaction
Volume: 2006

Conference

Conference: HRI 2006: 2006 ACM Conference on Human-Robot Interaction
Country/Territory: United States
City: Salt Lake City, Utah
Period: 2/03/06 - 4/03/06

Keywords

  • Cognitive vision and learning
  • Natural language dialogue
