Collecting multi-modal data of human-robot interaction

Jing Guang Han, John Dalton, Brian Vaughan, Catharine Oertel, Ciaran Dougherty, Celine De Looze, Nick Campbell

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

This paper presents a method and a tool for collecting data on the initiation stages of conversational interaction, in order to better understand how humans talk to machines. This is done by means of a platform-robot that both facilitates conversations between the robot and human interlocutors and records the interaction using multi-modal technology. The system is equipped with a standard face detection algorithm and an audio system, allowing us to focus on the components of social interaction that make an interactive dialogue feel more natural.
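The abstract refers to a "standard face detection algorithm" as one component of the recording platform. As an illustration only (not the authors' implementation), the following minimal Python sketch shows what such a component could look like using OpenCV's bundled Haar-cascade face detector; the library choice, camera index, and detection parameters are all assumptions for illustration.

    # Minimal sketch of frame-by-frame face detection for a recording platform.
    # Illustration only: OpenCV's Haar cascade stands in for the paper's
    # unspecified "standard face detection algorithm"; audio capture would
    # run as a separate stream alongside this loop.
    import cv2

    # Load OpenCV's bundled frontal-face Haar cascade.
    cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    face_detector = cv2.CascadeClassifier(cascade_path)

    capture = cv2.VideoCapture(0)  # default camera (assumed index)
    try:
        while True:
            ok, frame = capture.read()
            if not ok:
                break
            # Haar cascades operate on grayscale images.
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            faces = face_detector.detectMultiScale(
                gray, scaleFactor=1.1, minNeighbors=5)
            # Draw a box around each detected face for monitoring.
            for (x, y, w, h) in faces:
                cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
            cv2.imshow("face detection", frame)
            if cv2.waitKey(1) & 0xFF == ord("q"):
                break
    finally:
        capture.release()
        cv2.destroyAllWindows()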

Original language: English
Title of host publication: 2011 2nd International Conference on Cognitive Infocommunications, CogInfoCom 2011
Publication status: Published - 2011
Externally published: Yes
Event: 2011 2nd International Conference on Cognitive Infocommunications, CogInfoCom 2011 - Budapest, Hungary
Duration: 7 Jul 2011 - 9 Jul 2011

Publication series

Name: 2011 2nd International Conference on Cognitive Infocommunications, CogInfoCom 2011

Conference

Conference: 2011 2nd International Conference on Cognitive Infocommunications, CogInfoCom 2011
Country/Territory: Hungary
City: Budapest
Period: 7/07/11 - 9/07/11

Keywords

  • audio interface
  • face detection
  • human-robot interaction
  • multi-modal data collection
  • platform-robot
