
AES Convention Papers Forum

Perception-Based Room Rendering for Auditory Scenes


A new rendering algorithm is introduced that models a given room parameterized by a set of perceptual parameters. Processing cost and memory requirements are minimized, and the system is capable of reproducing a large number of sound sources and of independently processing many different listening positions. Rather than reproducing a large number of reflections independently (as in mirror-image rendering or ray tracing), sets of reflections are combined into a simple statistical representation of direction of incidence, diffuseness, absorption, and so on. For each perceptual parameter, a statistical representation is defined that can easily be used to reproduce impulse responses for any number of reproduction channels, from 2 to n. For a high number of reproduction channels, wave-field synthesis techniques can be used to reproduce a complete sound field rather than a sweet-spot-based perception at a single listening position.
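The abstract only outlines the approach, so the following Python sketch illustrates one possible way such a statistical description of grouped reflections could be rendered into a multichannel impulse response. The function name, the per-segment parameter set (energy, mean direction, diffuseness), the reflection density, and the cosine panning law are all assumptions made for this sketch and are not taken from the paper.

import numpy as np


def render_statistical_ir(segments, n_channels, fs=48000, seed=0):
    """Draw a multichannel impulse response from per-segment statistics.

    segments : list of dicts with the (assumed) keys
        't0', 't1'     -- segment start / end time in seconds
        'energy'       -- total energy of the segment
        'direction'    -- mean azimuth of incidence in radians
        'diffuseness'  -- 0 (single direction) .. 1 (fully diffuse)
    n_channels : number of loudspeakers, assumed to lie on a uniform ring.
    """
    rng = np.random.default_rng(seed)
    length = int(max(s['t1'] for s in segments) * fs) + 1
    ir = np.zeros((n_channels, length))
    speaker_az = 2 * np.pi * np.arange(n_channels) / n_channels

    for seg in segments:
        i0, i1 = int(seg['t0'] * fs), int(seg['t1'] * fs)
        n_refl = max(8, (i1 - i0) // 16)          # reflection density (assumed)
        times = rng.integers(i0, i1, n_refl)
        # The spread of incidence angles grows with the diffuseness parameter.
        spread = seg['diffuseness'] * np.pi
        az = seg['direction'] + rng.uniform(-spread, spread, n_refl)
        # Random amplitudes, normalized so the segment carries the given energy.
        amp = rng.normal(0.0, 1.0, n_refl)
        amp *= np.sqrt(seg['energy'] / np.sum(amp ** 2))
        for t, a, phi in zip(times, amp, az):
            # Simple cosine-lobe panning onto the loudspeaker ring; the paper
            # leaves the choice of reproduction method open.
            gains = np.maximum(np.cos(speaker_az - phi), 0.0)
            gains /= np.linalg.norm(gains) + 1e-12
            ir[:, t] += a * gains
    return ir


# Illustrative use: directional early energy followed by a weaker, diffuse tail.
segments = [
    {'t0': 0.005, 't1': 0.05, 'energy': 1.0, 'direction': 0.3, 'diffuseness': 0.2},
    {'t0': 0.05,  't1': 0.40, 'energy': 0.3, 'direction': 0.0, 'diffuseness': 0.9},
]
ir = render_statistical_ir(segments, n_channels=5)
print(ir.shape)  # (5, samples): one impulse response per loudspeaker

Because each segment is described by only a few statistics rather than by individual reflections, the memory cost of the room description is independent of the number of reflections it stands for, which mirrors the efficiency argument made in the abstract.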

