
AES Conference Papers Forum

Voxel-based occlusion and diffraction modelling for the upcoming ISO/MPEG standard for VR and AR


Following recent trends toward fully immersive virtual reality (VR) and augmented reality (AR) applications, ISO/IEC JTC1 SC29 WG6 (MPEG Audio Coding) decided to create the MPEG-I Audio work item to standardize a solution for audio rendering in such applications, in which the user can navigate and interact with the environment using six degrees of freedom (6DoF). One of the main capabilities of MPEG-I Audio will be support for real-time modeling of acoustic occlusion and diffraction effects in geometrically complex VR/AR scenes with a high degree of user interactivity. This can be achieved by employing a voxel-based representation of sound-occluding scene elements in combination with computationally efficient rendering algorithms that operate on uniform 3D voxel grids and their 2D projections. This paper describes the chosen reference model architecture for voxel-based acoustic occlusion and diffraction modeling, its operating modes, and envisioned applications. In addition, it summarizes the current status of the MPEG-I Audio standardization process.
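
As a rough illustration of the voxel-based approach described in the abstract, the sketch below rasterizes sound-occluding geometry into a uniform 3D occupancy grid and then tests whether the straight source-to-listener path crosses an occupied voxel. This is a minimal Python sketch under assumed parameters (grid resolution, voxel edge length, box-shaped occluders, and all function names are illustrative choices); it is not the MPEG-I reference algorithm, which additionally models diffraction paths and works with 2D projections of the grid.

    # Illustrative voxel occlusion sketch; not the MPEG-I reference implementation.
    # Scene geometry is rasterized into a uniform 3D occupancy grid, and the
    # source-listener line is sampled through that grid to detect occlusion.
    # Grid size, voxel size, and the box-based scene description are assumptions.
    import numpy as np

    VOXEL_SIZE = 0.25          # edge length of one voxel in metres (assumed)
    GRID_DIM = (64, 64, 64)    # uniform grid resolution (assumed)

    def make_occupancy_grid(boxes):
        """Mark every voxel covered by an axis-aligned box as sound-occluding."""
        grid = np.zeros(GRID_DIM, dtype=bool)
        for (lo, hi) in boxes:  # each box is (min_corner, max_corner) in metres
            i0 = np.maximum(np.floor(np.asarray(lo) / VOXEL_SIZE).astype(int), 0)
            i1 = np.minimum(np.ceil(np.asarray(hi) / VOXEL_SIZE).astype(int), GRID_DIM)
            grid[i0[0]:i1[0], i0[1]:i1[1], i0[2]:i1[2]] = True
        return grid

    def is_occluded(grid, source, listener, step=VOXEL_SIZE * 0.5):
        """Return True if an occupied voxel lies on the direct source-listener path."""
        source = np.asarray(source, dtype=float)
        listener = np.asarray(listener, dtype=float)
        n_steps = max(int(np.linalg.norm(listener - source) / step), 1)
        for t in np.linspace(0.0, 1.0, n_steps + 1):
            idx = np.floor((source + t * (listener - source)) / VOXEL_SIZE).astype(int)
            if np.all(idx >= 0) and np.all(idx < GRID_DIM) and grid[tuple(idx)]:
                return True
        return False

    # Example: a thin wall between source and listener blocks the direct path.
    wall = ((7.0, 0.0, 0.0), (7.5, 10.0, 4.0))
    grid = make_occupancy_grid([wall])
    print(is_occluded(grid, source=(2.0, 5.0, 1.7), listener=(12.0, 5.0, 1.7)))  # True

A uniform grid keeps the per-query cost bounded by the number of voxels sampled along the path, independent of the polygon count of the original scene, which is what makes this kind of representation attractive for real-time rendering of interactive VR/AR scenes.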

Authors:
Affiliation:
AES Conference:
Paper Number:
Publication Date:
Subject:


