Despite the importance of acoustical diffraction in our natural environment, modeling such effects is complex and computationally expensive for all but trivial environments; it is therefore typically ignored altogether in virtual reality and gaming applications. Driven by the gaming industry, consumer computer graphics hardware, and the graphics processing unit (GPU) in particular, has advanced greatly in recent years, outperforming the computational capacity of central processing units (CPUs). Given the widespread use and availability of computer graphics hardware, GPUs have been successfully applied to other, non-graphics applications, including audio processing and acoustical diffraction modeling. Here we build upon an existing GPU-based acoustical occlusion/diffraction modeling method that becomes problematic when the sound source and the listener are in separate rooms. The proposed method approximates acoustical occlusion/diffraction effects for complex, multi-room environments. It is computationally efficient, allowing it to be incorporated into real-time, dynamic, and interactive virtual environments and video games where the scene is arbitrarily complex.
Authors:
Cowan, Brent; Kapralos, Bill
Affiliation:
University of Ontario Institute of Technology, Oshawa, Ontario, Canada
AES Conference:
41st International Conference: Audio for Games (February 2011)
Paper Number:
3-2
Publication Date:
February 2, 2011
Subject:
Game Reverb