A solution for producing virtual sound environments based on the physical characteristics of a modeled complex volume is described. The goal is to reproduce the sound field in real time as a function of the listener's position and to allow some interactivity (for instance, a change in material characteristics). First, an adaptive beam-tracing algorithm computes a geometrical solution between the sources and several positions inside the volume; this algorithm is not limited to polygonal faces and handles diffraction. The precomputed paths, once ordered and selected, are then auralized, and an adaptive artificial reverberation is applied. New techniques enabling fast and accurate rendering are detailed. The proposed approach provides accurate audio rendering on headphones or within advanced multi-user immersive environments.
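To illustrate the geometrical-acoustics idea behind the abstract, the sketch below uses the simpler, closely related image-source method rather than the paper's adaptive beam tracing (which additionally handles non-polygonal faces and diffraction). It computes first-order specular reflection paths in a rectangular room and converts path lengths to arrival delays, showing how geometrical paths feed an auralization stage. All function names and parameters here are hypothetical illustrations, not the authors' implementation.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, at roughly 20 degrees C

def first_order_image_sources(src, room):
    """Mirror the source across each wall of an axis-aligned box room.

    src  : (x, y, z) source position
    room : (Lx, Ly, Lz) room dimensions, with walls at 0 and L on each axis
    Returns the six first-order image-source positions.
    """
    images = []
    for axis in range(3):
        for wall in (0.0, room[axis]):
            img = list(src)
            img[axis] = 2.0 * wall - src[axis]  # reflect across the wall plane
            images.append(tuple(img))
    return images

def path_delays(src, listener, room):
    """Direct-path and first-order reflection delays in seconds, sorted."""
    def dist(a, b):
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
    lengths = [dist(src, listener)]
    lengths += [dist(img, listener) for img in first_order_image_sources(src, room)]
    return sorted(d / SPEED_OF_SOUND for d in lengths)

# Example: a 6 m x 5 m x 3 m room, source and listener at ear height.
delays = path_delays((1.0, 1.0, 1.5), (4.0, 3.0, 1.5), (6.0, 5.0, 3.0))
```

Each delay (with an associated attenuation and wall-absorption filter, omitted here) would drive one tap of the early-reflection rendering, while a statistical artificial reverberation covers the late field, as in the approach the paper describes.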
Authors:
Bouatouch, Kadi; Deille, Olivier; Maillard, Julien; Martin, Jacques; Noé, Nicolas
Affiliations:
CSTB; IRISA
AES Convention:
120 (May 2006)
Paper Number:
6743
Publication Date:
May 1, 2006
Subject:
Room and Architectural Acoustics