Following recent trends toward fully immersive virtual reality (VR) and augmented reality (AR) applications, ISO/IEC JTC 1/SC 29/WG 6 (MPEG Audio Coding) decided to create the MPEG-I Audio work item to standardize a solution for audio rendering in such applications, in which the user can navigate and interact with the environment using six degrees of freedom (6DoF). One of the main capabilities of MPEG-I Audio will be the support of real-time modeling of acoustic occlusion and diffraction effects for geometrically complex VR/AR scenes, including a high degree of user interactivity. This can be achieved by employing a voxel-based representation of sound-occluding scene elements in combination with computationally efficient rendering algorithms operating on uniform 3D voxel grids and their 2D projections. This paper describes the chosen reference model architecture for voxel-based acoustic occlusion and diffraction modeling, its operating modes, and envisioned applications. In addition, it summarizes the current status of the MPEG-I Audio standardization process.
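To illustrate the kind of operation such a voxel-based representation enables, the sketch below tests whether the direct sound path between a source and a listener is blocked by any occupied voxel in a uniform boolean occupancy grid. This is a minimal, hypothetical example written for this summary; it is not the MPEG-I Audio reference algorithm, and the sampling-based line traversal and all names (`is_occluded`, the toy wall scene) are illustrative assumptions.

```python
import numpy as np

def is_occluded(grid, src, lst, samples=256):
    """Return True if the straight path from src to lst passes through
    any occupied voxel of a uniform boolean occupancy grid.

    grid      : (X, Y, Z) bool array, True = sound-occluding voxel
    src, lst  : 3D positions given in voxel coordinates
    samples   : number of points sampled along the line of sight
    """
    src = np.asarray(src, dtype=float)
    lst = np.asarray(lst, dtype=float)
    t = np.linspace(0.0, 1.0, samples)[:, None]
    pts = src + t * (lst - src)                      # points along the path
    idx = np.clip(pts.astype(int), 0, np.array(grid.shape) - 1)
    return bool(grid[idx[:, 0], idx[:, 1], idx[:, 2]].any())

# Toy scene: a solid wall of voxels at x == 5 with a single gap at (5, 5, 5).
grid = np.zeros((10, 10, 10), dtype=bool)
grid[5, :, :] = True
grid[5, 5, 5] = False

print(is_occluded(grid, (1, 1, 1), (9, 1, 1)))  # True: the wall blocks this path
print(is_occluded(grid, (1, 5, 5), (9, 5, 5)))  # False: this path passes the gap
```

A production renderer would typically use an exact grid-traversal scheme rather than fixed-step sampling, and would combine such visibility queries with diffraction-path search around occluders; the point here is only that occlusion tests on a uniform voxel grid reduce to cheap array lookups.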
Authors:
Terentiv, Leon; Fersch, Christof; Fischer, Daniel; Setiawan, Panji
Affiliation:
Dolby Germany GmbH, Nuremberg, Germany
AES Conference:
2022 AES International Conference on Audio for Virtual and Augmented Reality (August 2022)
Paper Number:
10
Publication Date:
August 15, 2022
Subject:
Paper