In headphone-based augmented reality audio applications, computer-generated audio-visual objects are rendered over headphones or earbuds and blended into the natural audio environment. This requires binaural artificial reverberation processing that matches the local environment's acoustics, so that synthetic audio objects are not distinguishable from sounds occurring naturally or reproduced over loudspeakers. Solutions involving the measurement or calculation of binaural room impulse responses in a consumer environment are limited by practical obstacles and complexity. We propose an approach that exploits a statistical reverberation model, enabling practical acoustic environment characterization and computationally efficient reflection and reverberation rendering for multiple virtual sound sources. The method applies equally to headphone-based "audio-augmented reality," enabling natural-sounding, externalized virtual 3-D audio reproduction of music, movie, or game soundtracks.
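The statistical environment characterization the abstract alludes to conventionally begins with an estimate of the reverberation decay time. The paper's actual method is not reproduced here; purely as an illustration, the following Python sketch shows the standard Schroeder backward-integration energy decay curve and a T30-style RT60 estimate from a (here synthetic) room impulse response:

```python
import numpy as np

def energy_decay_curve(ir):
    """Schroeder backward integration of a room impulse response, in dB."""
    energy = np.cumsum(ir[::-1] ** 2)[::-1]
    return 10.0 * np.log10(energy / energy[0])

def estimate_rt60(ir, fs, db_start=-5.0, db_end=-35.0):
    """Fit a line to the decay curve between db_start and db_end,
    then extrapolate the slope to -60 dB (a T30-style estimate)."""
    edc = energy_decay_curve(ir)
    t = np.arange(len(ir)) / fs
    mask = (edc <= db_start) & (edc >= db_end)
    slope, _ = np.polyfit(t[mask], edc[mask], 1)
    return -60.0 / slope

# Synthetic impulse response: exponentially decaying noise with a
# known 0.5 s reverberation time (amplitude reaches -60 dB at t = RT60).
fs = 48000
rt60_true = 0.5
t = np.arange(int(fs * rt60_true * 2)) / fs
rng = np.random.default_rng(0)
ir = rng.standard_normal(t.size) * 10.0 ** (-3.0 * t / rt60_true)
print(estimate_rt60(ir, fs))  # expect roughly 0.5 s
```

A statistical model built on such decay-time (and reverberant-level) estimates avoids measuring full binaural room impulse responses, which is the practical obstacle the abstract identifies.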
Authors:
Jot, Jean-Marc; Lee, Keun Sup
Affiliations:
DTS, Inc., Los Gatos, CA, USA; Apple Inc., Cupertino, CA, USA
AES Conference:
2016 AES International Conference on Audio for Virtual and Augmented Reality (September 2016)
Paper Number:
8-2
Publication Date:
September 21, 2016
Subject:
Capture, Rendering, and Mixing for VR