Audiovisuelle Sprechererkennung durch linguistisch naive Personen [Audiovisual speaker recognition by linguistically naive listeners]

Authors

  • Sibylle Sutter, Phonetisches Laboratorium der Universität Zürich
  • Volker Dellwo, Phonetisches Laboratorium der Universität Zürich

DOI:

https://doi.org/10.26034/tranel.2013.2951

Abstract

Human speech perception is based not only on acoustic speech signals but also on visual cues such as lip and jaw movements. Based on this assumption, we used a between-subject design to test listeners’ speaker identification ability in a voice line-up after they had been familiarized with a speaker under one of the following conditions: (a) visual and degraded acoustic information, (b) degraded acoustic information only, and (c) visual information only. The results indicate that listeners are able to perform the identification task to a considerable degree under all three conditions. We conclude that listeners identify speakers from degraded acoustic material about as well as from visual speech cues alone. Combining acoustic and visual cues does not enhance listeners’ performance.

Published

01-01-2013

How to cite

Sutter, S., & Dellwo, V. (2013). Audiovisuelle Sprechererkennung durch linguistisch naive Personen. Travaux neuchâtelois de linguistique, (59), 167–181. https://doi.org/10.26034/tranel.2013.2951

Issue

No. 59 (2013)

Section

Thematic article