Cutthroat evolution has given us seemingly magical abilities to hear speech in complex environments. We can tell instantly, independent of timbre or loudness, whether a sound is close to us, and in a crowded room we can switch attention at will between at least three simultaneous conversations. We also involuntarily switch attention if our name is spoken. These feats are possible only if, without conscious attention, each voice has been separated into an independent neural stream. We believe the separation process relies on the phase relationships between the harmonics above 1000 Hz that encode speech information, and on the neurology of the inner ear that has evolved to detect them. When phase is undisturbed, once in each fundamental period the harmonic phases align to create massive peaks in the sound pressure at the fundamental frequency. Pitch-sensitive filters can detect and separate these peaks from each other and from noise with amazing acuity. But reflections and sound systems randomize phases, with serious effects on attention, source separation, and intelligibility. This talk will detail the many ways ears and speech have co-evolved, and recent work on the importance of phase in acoustics and sound design.
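To make the phase-alignment claim concrete, here is a minimal numpy sketch, not taken from the paper: the 200 Hz fundamental, the equal-amplitude harmonics 6 through 20 (all above 1000 Hz), and the crest factor as a measure of peak sharpness are all illustrative assumptions. Summing the harmonics with aligned phases produces one large pressure peak per fundamental period; randomizing the phases, as reflections and sound systems do, flattens those peaks.

```python
import numpy as np

# Illustrative parameters (assumptions, not from the paper): a 200 Hz
# fundamental whose harmonics 6..20 lie above 1000 Hz, as in the abstract.
fs = 48000                          # sample rate in Hz
f0 = 200.0                          # fundamental frequency in Hz
harmonics = range(6, 21)            # harmonic numbers 6..20 (1200-4000 Hz)
t = np.arange(int(fs * 0.1)) / fs   # 100 ms of signal

def harmonic_complex(phases):
    """Sum equal-amplitude harmonics of f0 with the given phases."""
    return sum(np.cos(2 * np.pi * n * f0 * t + p)
               for n, p in zip(harmonics, phases))

# Phase-aligned case: all harmonics peak together once per period of f0.
aligned = harmonic_complex(np.zeros(len(harmonics)))

# Phase-randomized case: models reflections / sound systems scrambling phase.
rng = np.random.default_rng(0)
scrambled = harmonic_complex(rng.uniform(0, 2 * np.pi, len(harmonics)))

def crest_factor(x):
    """Peak-to-RMS ratio; high values indicate sharp pressure peaks."""
    return np.max(np.abs(x)) / np.sqrt(np.mean(x ** 2))

print(f"aligned phases:    crest factor = {crest_factor(aligned):.1f}")
print(f"randomized phases: crest factor = {crest_factor(scrambled):.1f}")
```

Under these assumptions the aligned case gives a crest factor of about 5.5 (peak of 15 over an RMS of √7.5), while random phases typically land near 2 to 3, closer to that of noise, even though the two signals have identical magnitude spectra.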
Author: Griesinger, David
Affiliation: David Griesinger Acoustics, Cambridge, MA, USA
AES Convention: 141 (September 2016)
Paper Number: 9659
Publication Date: September 20, 2016
Subject: Perception
This paper is Open Access, which means you can download it for free.