Improving speech recognition on a mobile robot platform through the use of top-down visual cues

Robert J. Ross, R.P.S. O'Donoghue, G.M.P. O'Hare

Research output: Contribution to journal (conference article, peer-reviewed)

Abstract

In many real-world environments, Automatic Speech Recognition (ASR) technologies fail to provide adequate performance for applications such as human-robot dialog. Despite substantial evidence that speech recognition in humans is performed in a top-down as well as a bottom-up manner, ASR systems typically fail to capitalize on this, relying instead on a purely statistical, bottom-up methodology. In this paper we advocate a knowledge-based approach to improving ASR in domains such as mobile robotics. A simple implementation is presented, which uses the visual recognition of objects in a robot's environment to increase the probability that words and sentences related to those objects will be recognized.
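
The abstract does not spell out the mechanism, but the core idea, biasing recognition probabilities toward vocabulary associated with visually detected objects, can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the object-to-vocabulary map, the boost factor, and all names (OBJECT_VOCAB, boost_word_priors) are hypothetical.

```python
# Sketch: top-down visual priming of ASR word priors.
# Assumption: a flat word-prior dictionary stands in for the
# recognizer's language model, which the paper does not detail.

OBJECT_VOCAB = {
    "cup": {"cup", "coffee", "drink", "pour"},
    "door": {"door", "open", "close", "exit"},
}

def boost_word_priors(word_priors, detected_objects, boost=3.0):
    """Scale up the priors of words related to currently visible
    objects, then renormalize so the priors again sum to 1."""
    related = set()
    for obj in detected_objects:
        related |= OBJECT_VOCAB.get(obj, set())
    boosted = {
        word: prob * boost if word in related else prob
        for word, prob in word_priors.items()
    }
    total = sum(boosted.values())
    return {word: prob / total for word, prob in boosted.items()}

# Example: seeing a cup raises the chance that the acoustically
# ambiguous "coffee" is preferred over unrelated candidates.
priors = {"coffee": 0.25, "copy": 0.25, "door": 0.25, "exit": 0.25}
print(boost_word_priors(priors, ["cup"]))
```

In a full system, the adjusted priors would feed the ASR engine's language model rather than a flat dictionary, but the renormalized boost captures the top-down influence the paper describes.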

Original language: English
Pages (from-to): 1557-1559
Number of pages: 3
Journal: IJCAI International Joint Conference on Artificial Intelligence
Publication status: Published - 2003
Event: 18th International Joint Conference on Artificial Intelligence, IJCAI 2003, Acapulco, Mexico
Duration: 9 Aug 2003 - 15 Aug 2003
