Towards measuring continuous acoustic feature convergence in unconstrained spoken dialogues

Spyros Kousidis, David Dorran, Yi Wang, Brian Vaughan, Charlie Cullen, Dermot Campbell, Ciaran McDonnell, Eugene Coyle

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Acoustic/prosodic (a/p) feature convergence is known to occur both in dialogues between humans and in human-computer interactions. Understanding the form and function of convergence is desirable for developing next-generation conversational agents, as it can help increase speech recognition performance and the naturalness of synthesized speech. Currently, the underlying mechanisms by which continuous and bi-directional convergence occurs are not well understood. In this study, a direct comparison between time-aligned frames shows significant similarity in acoustic feature variation between the two speakers. The method described (TAMA) constitutes a first step towards a quantitative analysis of a/p convergence.
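The time-aligned comparison the abstract describes can be sketched roughly as follows: average an acoustic feature (here F0, as an illustrative choice) over fixed, overlapping windows for each speaker, then correlate the two resulting time-aligned series. The window and hop sizes, the feature, and the toy pitch contours below are all assumptions for illustration, not the paper's actual parameters.

```python
import math

def windowed_means(samples, win=20, hop=10):
    """Average feature values over overlapping windows (win/hop in frames)."""
    means = []
    for start in range(0, len(samples) - win + 1, hop):
        frame = samples[start:start + win]
        means.append(sum(frame) / win)
    return means

def pearson(x, y):
    """Pearson correlation between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Toy example: speaker B's pitch contour tracks speaker A's,
# which a convergence measure should register as high similarity.
speaker_a = [120 + 10 * math.sin(i / 15) for i in range(300)]
speaker_b = [118 + 9 * math.sin(i / 15) for i in range(300)]

r = pearson(windowed_means(speaker_a), windowed_means(speaker_b))
print(round(r, 3))  # → 1.0 (the toy contours are linearly related)
```

In practice the windowed series would come from a pitch tracker run over each speaker's channel of the dialogue recording; the correlation of the aligned series then gives one scalar indication of how closely the speakers' feature variation tracks over time.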

Original language: English
Title of host publication: Ninth Annual Conference of the International Speech Communication Association
Pages: 1692-1695
Number of pages: 4
Publication status: Published - 2008
Event: INTERSPEECH 2008 - 9th Annual Conference of the International Speech Communication Association - Brisbane, QLD, Australia
Duration: 22 Sep 2008 - 26 Sep 2008

Publication series

Name: Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH
Publisher: International Speech Communication Association
ISSN (Print): 2308-457X

Conference

Conference: INTERSPEECH 2008 - 9th Annual Conference of the International Speech Communication Association
Country/Territory: Australia
City: Brisbane, QLD
Period: 22/09/08 - 26/09/08

Keywords

  • Acoustic-prosodic convergence
  • Dialogue speech
