AES Convention Papers Forum
Speech Separation with Microphone Arrays Using the Mean Shift Algorithm
Microphone arrays provide spatial resolution that is useful for speech source separation: sources located at different positions produce different time and level differences across the elements of the array. This feature can be combined with time-frequency masking to separate speech mixtures by means of clustering techniques, such as the DUET algorithm, which uses only two microphones. In many applications, however, larger arrays are available, and the separation can exploit all of their microphones. A speech separation algorithm based on the mean shift clustering technique has recently been proposed for the two-microphone case. In this work, that algorithm is generalized to arrays with any number of microphones, and its performance is tested on echoic speech mixtures. The results show that the generalized mean shift algorithm notably outperforms the original DUET algorithm.
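The clustering step described in the abstract can be sketched as follows. This is a minimal, illustrative mean shift implementation applied to synthetic DUET-style features (one level-difference and one time-delay value per time-frequency bin); the source locations, noise level, and bandwidth below are invented for the example and are not taken from the paper.

```python
import numpy as np

def mean_shift(points, bandwidth, n_iter=50, tol=1e-5):
    """Flat-kernel mean shift: repeatedly move each point to the mean
    of all points within `bandwidth`, until convergence."""
    shifted = points.astype(float).copy()
    for _ in range(n_iter):
        moved = False
        for i, p in enumerate(shifted):
            dist = np.linalg.norm(points - p, axis=1)
            new_p = points[dist < bandwidth].mean(axis=0)
            if np.linalg.norm(new_p - p) > tol:
                moved = True
            shifted[i] = new_p
        if not moved:
            break
    # Merge points that converged to (nearly) the same mode;
    # each remaining mode corresponds to one estimated source.
    modes = []
    for p in shifted:
        if not any(np.linalg.norm(p - m) < bandwidth / 2 for m in modes):
            modes.append(p)
    return np.array(modes)

# Toy (level difference, time delay) features per T-F bin, drawn
# around two hypothetical source locations.
rng = np.random.default_rng(0)
src_a = rng.normal([0.8, 2.0], 0.05, size=(100, 2))
src_b = rng.normal([-0.5, -1.0], 0.05, size=(100, 2))
features = np.vstack([src_a, src_b])

modes = mean_shift(features, bandwidth=0.5)
print(len(modes))  # number of estimated sources
```

Unlike k-means, mean shift does not require the number of sources in advance: the number of surviving modes is the estimate, which is one reason it suits blind separation settings like the one studied here.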