The authors develop a 4-pi sampling reverberator, named “VSVerb,” which restores a 4-pi reverberant field using information about virtual sound sources captured in a target space. The acoustical properties of the virtual sound sources are detected from sound intensities calculated from impulse responses measured with an Ambisonic A-format microphone. The spatial information of the virtual sound sources is then translated into time responses, yielding the 4-pi spatial reverberation. Several schemes for detecting accurate acoustic properties of virtual sound sources have been developed so far, and their practicality has been examined under various playback environments, e.g., 5.1.4ch, 7.1.4ch, 22.2ch, 24ch, and 40ch. The authors now introduce an application example that implements VSVerb in VR content production. Its key features are 1) binaural rendering using an individual HRIR supplied in the AES69 (SOFA) format, and 2) dynamic processing that updates the reverberation to follow the listener’s movement.
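The e-brief does not publish VSVerb’s internal processing, but the binaural rendering step it names can be illustrated generically: a (reverberant) mono signal is convolved with the left- and right-ear head-related impulse responses for the source direction. The sketch below uses synthetic stand-in HRIRs; in practice they would be read from an individual listener’s AES69 SOFA file (for example with a SOFA-reading library) and re-selected as the listener moves.

```python
import numpy as np

def binaural_render(mono, hrir_left, hrir_right):
    """Convolve a mono signal with left/right HRIRs to form a binaural pair."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return left, right

# Synthetic stand-ins: real HRIRs would come from an AES69 SOFA file.
fs = 48000
mono = np.zeros(fs // 100)               # 10 ms test signal
mono[0] = 1.0                            # unit impulse
hrir_l = np.array([0.0, 1.0, 0.5])       # hypothetical left-ear HRIR
hrir_r = np.array([0.0, 0.0, 1.0, 0.5])  # hypothetical right-ear HRIR, delayed
                                         # one sample to mimic an ITD

left, right = binaural_render(mono, hrir_l, hrir_r)
```

For dynamic processing, this convolution would be repeated per block with HRIRs interpolated toward the listener’s current orientation, typically with a short crossfade between blocks to avoid clicks.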
Authors:
Nakahara, Masataka; Nagatomo, Yasuhiko; Omoto, Akira
Affiliations:
ONFUTURE Ltd, Tokyo, Japan; SONA Corporation, Tokyo, Japan; Evixar Inc., Tokyo, Japan; Kyushu University, Fukuoka, Japan (See document for exact affiliation information.)
AES Convention:
149 (October 2020)
eBrief: 634
Publication Date:
October 22, 2020
Subject:
Immersive Audio