Mixed-reality audio systems require a binaural simulation process that perceptually matches the reverberation properties of the local environment surrounding the listener. This is necessary to create experiences where digital audio objects are perceptually congruent with real sounds heard in the user’s environment. Since the playback environment is not determined until rendering time, we need an audio scene description and rendering model that decouples the acoustical properties of individual sound objects from those of the environment. An important aspect of this model is the control of reverberation loudness for each sound object. We provide a method for reverberation loudness parametrization and real-time control in mixed-reality and other interactive audio applications where the decoupling of sound-source properties and environment properties is desired.
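The decoupling described above can be illustrated with a minimal sketch (this is a hypothetical illustration, not the paper's implementation): each sound object carries its own reverberation loudness parameter, the playback environment supplies a reverb level determined only at rendering time, and the renderer combines the two when computing per-object dry and wet gains. All names (`SoundObject`, `reverb_send_db`, `reverb_return_db`) are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class SoundObject:
    """Source-side properties, authored independently of any environment."""
    name: str
    dry_gain_db: float      # direct-path level, in dB
    reverb_send_db: float   # per-object reverberation loudness parameter, in dB

@dataclass
class Environment:
    """Environment-side properties, known only at rendering time."""
    reverb_return_db: float  # reverb level matching the listener's room, in dB

def db_to_lin(db: float) -> float:
    # Convert a level in dB to a linear amplitude gain.
    return 10.0 ** (db / 20.0)

def render_gains(obj: SoundObject, env: Environment) -> tuple[float, float]:
    """Return (dry_gain, wet_gain) as linear amplitudes.

    The wet gain sums the object's reverb send with the environment's
    reverb return (in dB), so source and environment properties stay
    decoupled until this point.
    """
    dry = db_to_lin(obj.dry_gain_db)
    wet = db_to_lin(obj.reverb_send_db + env.reverb_return_db)
    return dry, wet

# Example: the same object rendered in two different environments.
obj = SoundObject("voice", dry_gain_db=0.0, reverb_send_db=-6.0)
dry, wet = render_gains(obj, Environment(reverb_return_db=-3.0))
```

In this sketch, changing the environment rescales every object's wet path consistently, while each object's `reverb_send_db` still controls its individual reverberation loudness.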
Authors:
Audfray, Rémi; Jot, Jean-Marc
Affiliation:
Magic Leap, Inc., Sunnyvale, CA, USA
AES Conference:
2019 AES International Conference on Headphone Technology (August 2019)
Paper Number:
28
Publication Date:
August 21, 2019