Object-based frameworks are currently being explored as a means to make future audio systems more immersive, interactive, and easily accessible. In object-based audio, a scene is composed of a number of objects, each comprising audio content and metadata. The metadata is interpreted by a renderer, which creates the audio to be sent to each loudspeaker using knowledge of the specific target reproduction system. While recent standardization activities provide recommendations for object formats, the method for capturing and reproducing reverberation remains open. This research presents a parametric approach for capturing, representing, editing, and rendering reverberation over a 3D spatial audio system. A Reverberant Spatial Audio Object (RSAO) allows an object to synthesize the required reverberation. An example illustrates an RSAO framework, with listening tests showing that the approach correctly retains the room size and source distance. Rendering is agnostic to the reproduction system and can be used to alter listener envelopment. Editing the parameters can also alter the perceived room size and source distance, and greater envelopment can be achieved with an appropriate reproduction system.
Authors:
Coleman, Philip; Franck, Andreas; Jackson, Philip J. B.; Hughes, Richard J.; Remaggi, Luca; Melchior, Frank
Affiliations:
Centre for Vision, Speech and Signal Processing, University of Surrey, Guildford, Surrey, UK; Institute of Sound and Vibration Research, University of Southampton, Southampton, Hampshire, UK; Acoustics Research Centre, University of Salford, Salford, UK; BBC Research and Development, Dock House, MediaCityUK, Salford, UK
JAES Volume 65 Issue 1/2 pp. 66-77; January 2017
Publication Date:
February 16, 2017
This paper is Open Access and can be downloaded for free.