Designing and implementing a platform for collecting multi-modal data of human-robot interaction

Brian Vaughan, Jing Guang Han, Emer Gilmartin, Nick Campbell

Research output: Contribution to journal › Article › peer-review

Abstract

This paper details a method of collecting video and audio recordings of people interacting with a simple robot interlocutor. The interaction is recorded via a number of cameras and microphones mounted on and around the robot. The system utilised a number of technologies to engage with interlocutors, including OpenCV, Python, and Max MSP. Recordings were collected over a three-month period at the Science Gallery in Trinity College Dublin. Visitors to the gallery engaged freely with the robot; their interactions were spontaneous and unscripted. The robot's dialogue followed a set pattern of utterances designed to engage interlocutors in a simple conversation. A large corpus of audio and video recordings was collected as a result.
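The abstract notes that the robot's dialogue was a set pattern of utterances. A minimal sketch of such a scripted dialogue controller, in plain Python, might look like the following (this is illustrative only; the utterances and class design are assumptions, not the authors' implementation):

```python
# Illustrative sketch of a fixed-pattern robot dialogue controller.
# The script contents below are invented for the example; the actual
# platform's utterances are not specified in the abstract.

SCRIPT = [
    "Hello! Welcome to the gallery.",
    "What is your name?",
    "Nice to meet you. Are you enjoying your visit?",
    "Thank you for talking with me. Goodbye!",
]


class ScriptedDialogue:
    """Steps through a fixed sequence of utterances, one per turn.

    Each call to next_utterance() advances the script; None signals
    that the scripted conversation has finished.
    """

    def __init__(self, script):
        self.script = list(script)
        self.turn = 0

    def next_utterance(self):
        if self.turn >= len(self.script):
            return None  # conversation over
        utterance = self.script[self.turn]
        self.turn += 1
        return utterance


if __name__ == "__main__":
    dialogue = ScriptedDialogue(SCRIPT)
    utterance = dialogue.next_utterance()
    while utterance is not None:
        print(utterance)  # in the real system, sent to speech output
        utterance = dialogue.next_utterance()
```

In the deployed platform, advancing the script would presumably be triggered by sensor events (e.g. OpenCV face detection or audio activity) rather than a simple loop.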

Original language: English
Pages (from-to): 7-17
Number of pages: 11
Journal: Acta Polytechnica Hungarica
Volume: 9
Issue number: 1
Publication status: Published - 2012
Externally published: Yes

Keywords

  • Audio interface
  • Face detection
  • Human-robot interaction
  • Multi-modal data collection
  • Platform
  • Robot
  • WOZ
