The use of wireless acoustic sensor networks carries many advantages in the speech separation framework. Since nodes are separated by distances greater than a few centimeters, they can cover entire rooms, although these larger distances introduce new problems to be solved. For instance, significant time differences of arrival can appear between the speech mixtures captured at the different microphones, degrading the performance of classical sound separation algorithms. One solution is to synchronize the speech mixtures captured at the microphones. Following this idea, in this paper we propose a new time delay estimation method that outperforms classical methods for synchronizing speech mixtures. The results obtained show the feasibility of using our proposal to synchronize speech mixtures.
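The paper itself is behind the AES paywall, so its method is not reproduced here. As background for the synchronization problem the abstract describes, the sketch below shows one classical time delay estimation baseline, GCC-PHAT (generalized cross-correlation with phase transform), which is commonly used to estimate the relative delay between two microphone signals; the function name and the toy signals are illustrative, not taken from the paper.

```python
import numpy as np

def gcc_phat(sig, ref, fs, max_tau=None):
    """Estimate the delay (in seconds) of `sig` relative to `ref`
    using the classical GCC-PHAT method."""
    n = sig.size + ref.size          # zero-pad to avoid circular correlation
    SIG = np.fft.rfft(sig, n=n)
    REF = np.fft.rfft(ref, n=n)
    R = SIG * np.conj(REF)
    R /= np.abs(R) + 1e-15           # PHAT weighting: keep only phase information
    cc = np.fft.irfft(R, n=n)
    max_shift = n // 2
    if max_tau is not None:          # optionally restrict the search range
        max_shift = min(int(fs * max_tau), max_shift)
    # rearrange so that lag 0 sits at the center of the correlation vector
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    shift = np.argmax(np.abs(cc)) - max_shift
    return shift / fs

# toy check: white noise delayed by 25 samples at 16 kHz
fs = 16000
rng = np.random.default_rng(0)
x = rng.standard_normal(4096)
delay = 25
y = np.concatenate((np.zeros(delay), x))[:x.size]
tau = gcc_phat(y, x, fs)
print(tau * fs)                      # estimated delay in samples
```

Once the delay is estimated, the lagging channel can be shifted accordingly so that downstream separation algorithms see time-aligned mixtures; the paper's contribution is a new estimator that, per the abstract, outperforms such classical methods for this synchronization task.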
Authors:
Llerena-Aguilar, Cosme; Ramos-Auñón, Guillermo; Llerena-Aguilar, Francisco J.; Sánchez-Hevia, Héctor A.; Rosa-Zurera, Manuel
Affiliation:
University of Alcala, Alcalá de Henares, Madrid, Spain
AES Convention:
138 (May 2015)
Paper Number:
9298
Publication Date:
May 6, 2015
Subject:
Sound Localization and Separation