In augmented reality (AR) applications, reproducing acoustic reverberation is essential for an immersive audio experience: the audio component should match the acoustics of the environment the user is in. In traditional productions, sound engineers could program all reverberation parameters in advance because the scene, or the audience's position, was fixed. In AR, however, the environment is not known beforehand, so the parameters cannot be preprogrammed with conventional methods. Motivated by the observation that skilled acoustic engineers can estimate reverberation characteristics from a photograph of a room, we trained a deep neural network (DNN) to estimate reverberation parameters from two-dimensional images. The results suggest that a DNN can estimate acoustic reverberation parameters from a single image.
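The abstract does not specify how an estimated parameter would drive rendering. As an illustrative sketch only (not the authors' method), the snippet below shows one common way an estimated reverberation time (RT60) could parameterize a simple feedback comb filter: the feedback gain is chosen so the filter's tail decays by 60 dB in RT60 seconds. The delay length and sample rate are arbitrary assumptions.

```python
import numpy as np

def comb_gain(rt60, delay_samples, fs):
    """Feedback gain giving a 60 dB decay over rt60 seconds.

    Each pass through the delay line (delay_samples / fs seconds)
    attenuates the signal by 20*log10(g) dB, so solving for a total
    drop of 60 dB in rt60 seconds gives g = 10^(-3*D / (fs*rt60)).
    """
    return 10 ** (-3.0 * delay_samples / (fs * rt60))

def comb_reverb(x, rt60, delay_samples=1500, fs=44100):
    """Single feedback comb filter: y[n] = x[n] + g * y[n - D]."""
    g = comb_gain(rt60, delay_samples, fs)
    y = np.zeros(len(x))
    for n in range(len(x)):
        fb = g * y[n - delay_samples] if n >= delay_samples else 0.0
        y[n] = x[n] + fb
    return y

# Feed a unit impulse through the filter; the echoes form a
# geometrically decaying tail whose decay rate matches the RT60.
fs = 44100
impulse = np.zeros(fs)  # one second of audio
impulse[0] = 1.0
tail = comb_reverb(impulse, rt60=0.8, delay_samples=1500, fs=fs)
```

A full reverberator would combine several comb and all-pass filters, but the same mapping from an estimated RT60 to a feedback gain applies.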
Authors:
Kon, Homare; Koike, Hideki
Affiliations:
Tokyo Institute of Technology, Ota-ku, Tokyo, Japan; Tokyo Institute of Technology, Meguro-ku, Tokyo, Japan
AES Convention:
144 (May 2018)
Paper Number:
9995
Publication Date:
May 14, 2018
Subject:
Audio Processing and Effects – Part 1