Abstract
In many real-world environments, Automatic Speech Recognition (ASR) technologies fail to provide adequate performance for applications such as human-robot dialog. Despite substantial evidence that speech recognition in humans proceeds top-down as well as bottom-up, ASR systems typically fail to capitalize on this, relying instead on a purely statistical, bottom-up methodology. In this paper we advocate a knowledge-based approach to improving ASR in domains such as mobile robotics. A simple implementation is presented, which uses the visual recognition of objects in a robot's environment to increase the probability that words and sentences related to these objects will be recognized.
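The abstract does not give implementation details, but a minimal sketch of the mechanism it describes might look like the following: visually detected objects raise the language-model probability of semantically related words before the recognizer scores its hypotheses. All names here (`detected_objects`, `related_words`, `BOOST`) and the toy probabilities are illustrative assumptions, not taken from the paper.

```python
# Sketch: boost language-model probabilities for words related to
# objects the robot's vision system has detected, then renormalize.
# Assumed constants and tables -- not from the paper.

BOOST = 5.0  # assumed multiplicative boost for context-relevant words

# Baseline unigram language model (toy probabilities).
unigram_lm = {
    "cup": 0.02, "table": 0.03, "ball": 0.02,
    "pick": 0.05, "up": 0.10, "the": 0.40, "red": 0.05, "stop": 0.33,
}

# Words semantically related to each recognizable object (hand-built).
related_words = {
    "cup": {"cup", "pick", "up"},
    "ball": {"ball", "red"},
}

def contextual_lm(detected_objects):
    """Rescale the unigram LM so that words tied to currently visible
    objects are boosted, then renormalize so probabilities sum to one."""
    boosted = set()
    for obj in detected_objects:
        boosted |= related_words.get(obj, set())
    scaled = {w: p * (BOOST if w in boosted else 1.0)
              for w, p in unigram_lm.items()}
    total = sum(scaled.values())
    return {w: p / total for w, p in scaled.items()}

# Example: the robot's camera reports a cup in view, so "cup", "pick",
# and "up" become more likely recognition hypotheses.
lm = contextual_lm({"cup"})
print(f"P(cup) rose from {unigram_lm['cup']:.3f} to {lm['cup']:.3f}")
```

In a full system, the reweighted model would feed the decoder's search rather than a toy dictionary, but the top-down principle is the same: perceptual context reshapes the prior over utterances.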
| Original language | English |
|---|---|
| Pages (from-to) | 1557-1559 |
| Number of pages | 3 |
| Journal | IJCAI International Joint Conference on Artificial Intelligence |
| Publication status | Published - 2003 |
| Event | 18th International Joint Conference on Artificial Intelligence, IJCAI 2003, Acapulco, Mexico, 9–15 Aug 2003 |