AES Conference Papers Forum
GPU-Based Acoustical Diffraction Modeling for Complex Virtual Reality and Gaming Environments
Despite the importance of acoustical diffraction in our natural environment, modeling such effects is complex and computationally expensive for all but trivial environments, and it is therefore typically ignored altogether in virtual reality and gaming applications. Driven by the gaming industry, consumer computer graphics hardware, and the graphics processing unit (GPU) in particular, has advanced greatly in recent years, outperforming central processing units (CPUs) in raw computational capacity. Given the widespread use and availability of computer graphics hardware, GPUs have been successfully applied to other, non-graphics applications, including audio processing and acoustical diffraction modeling. Here we build upon an existing GPU-based acoustical occlusion/diffraction modeling method that can become problematic when the sound source and the listener are in separate rooms. The proposed method approximates acoustical occlusion/diffraction effects for complex, multi-room environments. It is computationally efficient, allowing it to be incorporated into real-time, dynamic, and interactive virtual environments and video games where the scene is arbitrarily complex.
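The abstract does not give the method's details, but the general idea behind real-time occlusion/diffraction approximation in games is that sound bending around an obstacle loses high-frequency energy, which can be mimicked cheaply with a per-source low-pass filter whose cutoff depends on how occluded the propagation path is. The sketch below is purely illustrative and not the paper's algorithm; the `occlusion_lowpass` function, its 0–1 occlusion factor, and the cutoff mapping are all assumptions chosen for clarity.

```python
import math

def occlusion_lowpass(samples, occlusion, sample_rate=44100.0):
    """Attenuate high frequencies in proportion to an occlusion factor.

    occlusion: 0.0 (clear line of sight) .. 1.0 (heavily occluded).
    A one-pole low-pass whose cutoff falls as occlusion rises -- a cheap
    stand-in for the frequency-dependent loss caused by diffraction
    around obstacles. (Illustrative only; not the paper's method.)
    """
    # Map occlusion linearly to a cutoff: ~20 kHz (open) down to ~200 Hz.
    cutoff = 20000.0 * (1.0 - occlusion) + 200.0 * occlusion
    # One-pole filter coefficient derived from the cutoff frequency.
    alpha = 1.0 - math.exp(-2.0 * math.pi * cutoff / sample_rate)
    out, y = [], 0.0
    for x in samples:
        y += alpha * (x - y)   # y follows x with a cutoff-limited rate
        out.append(y)
    return out

# A 5 kHz test tone: heavy occlusion should attenuate it noticeably,
# since 5 kHz lies well above the occluded cutoff (~2.2 kHz here).
tone = [math.sin(2 * math.pi * 5000.0 * n / 44100.0) for n in range(4410)]
clear = occlusion_lowpass(tone, occlusion=0.0)
blocked = occlusion_lowpass(tone, occlusion=0.9)

def rms(s):
    return math.sqrt(sum(v * v for v in s) / len(s))
```

In a real engine this per-sample loop would run on the GPU across many sources and paths at once; the point of methods like the one described is to compute the occlusion factors themselves efficiently for arbitrarily complex, multi-room geometry.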