In augmented-reality (AR) applications, reproducing acoustic reverberation is essential for an immersive audio experience. The audio components of an AR system should simulate the acoustics of the environment the user is actually in. In virtual-reality (VR) applications, sound engineers can program all reverberation parameters for a scene in advance, or for a fixed user position. In AR, however, adjusting reverberation parameters with conventional procedures is impractical because the range of possible environments is unbounded and cannot be programmed ahead of time. It is therefore necessary to estimate the reverberation characteristics dynamically from the environments in which users move. Motivated by the observation that skilled acoustic engineers can estimate reverberation parameters from images of a room without performing any measurements, we trained convolutional neural networks to estimate reverberation parameters from two-dimensional images. The proposed method does not require simulating sound propagation with 3D reconstruction techniques.
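The pipeline the abstract describes, a CNN mapping a 2D room image to reverberation parameters, can be sketched minimally as follows. This is an illustrative assumption, not the authors' actual network: the layer sizes, the single `conv -> ReLU -> global average pool -> affine` head, and the choice of RT60 as the predicted parameter are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation of a single-channel image."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def predict_rt60(image, kernel, weights, bias):
    """Conv -> ReLU -> global average pool -> affine regression head.

    Returns a single scalar, standing in for an estimated reverberation
    parameter such as RT60 in seconds (hypothetical choice of target).
    """
    feat = np.maximum(conv2d(image, kernel), 0.0)  # ReLU feature map
    pooled = feat.mean()                           # global average pooling
    return pooled * weights + bias                 # scalar estimate

# Stand-in for a photograph of the room (grayscale, 16x16 pixels).
image = rng.random((16, 16))
kernel = rng.standard_normal((3, 3)) * 0.1
rt60 = predict_rt60(image, kernel, weights=0.5, bias=0.4)
print(float(rt60))
```

In a trained system, `kernel`, `weights`, and `bias` would be learned by regressing against reverberation parameters measured (or labeled by engineers) for rooms in a training set; the forward pass above only shows the shape of the image-to-parameter mapping.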
Authors: Kon, Homare; Koike, Hideki
Affiliation: School of Computing, Tokyo Institute of Technology, Tokyo, Japan
JAES Volume 67 Issue 7/8 pp. 540-548; July 2019
Publication Date: August 14, 2019