An immersive audio system oriented toward future communication applications is presented. The aim is to build a system in which the acoustic field of a chamber is recorded with a microphone array and then reconstructed, or rendered again, in a different chamber using loudspeaker-array-based techniques. Our proposal relies on recent robust adaptive beamforming techniques and joint audio-video source localization to effectively estimate the original sources in the emitting room. The estimated source signal and the source localization information drive a Wave Field Synthesis engine that renders the acoustic field at the receiving chamber. Overall system performance is evaluated with a MUSHRA-based subjective test in a real situation.
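To illustrate the rendering stage described above, here is a minimal sketch (not the authors' implementation) of how a Wave Field Synthesis engine might derive per-loudspeaker driving parameters from an estimated source position: each loudspeaker receives a delayed, attenuated copy of the estimated source signal, with the delay given by the propagation time and the gain following a distance-based decay. The function name, array geometry, and the 1/sqrt(r) gain law are illustrative simplifications of the full WFS driving function.

```python
# Hedged sketch of WFS driving-signal parameters for a virtual point
# source rendered over a linear loudspeaker array. Assumptions: far
# simpler than a real WFS operator (no pre-equalization, no tapering).
import math

C = 343.0  # speed of sound in air (m/s)

def wfs_driving_params(source_xy, speaker_positions):
    """Return per-loudspeaker (delay_seconds, gain) pairs for a
    virtual point source at source_xy (meters)."""
    params = []
    for sx, sy in speaker_positions:
        r = math.hypot(sx - source_xy[0], sy - source_xy[1])
        delay = r / C                          # propagation delay
        gain = 1.0 / math.sqrt(max(r, 1e-6))   # amplitude decay with distance
        params.append((delay, gain))
    return params

# Example: virtual source 2 m behind a 5-loudspeaker array, 0.2 m spacing
speakers = [(0.2 * i, 0.0) for i in range(5)]
params = wfs_driving_params((0.4, -2.0), speakers)
```

In a complete chain, the source signal estimated by the beamformer would be delayed and scaled by these parameters before being sent to each loudspeaker, so that the wavefronts superpose into the field of the virtual source.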
Authors:
Beracoechea, Jon Ander; Casajus, Javier; García, Lino; Ortiz, Luis; Torres-Guijarro, Soledad
Affiliation:
Universidad Politécnica de Madrid
AES Convention:
120 (May 2006)
Paper Number:
6710
Publication Date:
May 1, 2006
Subject:
Multichannel Sound