Parametric spatial audio rendering aims to provide perceptually convincing audio cues independent of the playback system, enabling the acoustic design of games and virtual reality. The authors propose an algorithm for detecting perceptually important reflections from spatial room impulse responses. First, a parametric representation of the sound field is derived based on perceptually motivated spatio-temporal windowing; a second step then estimates the perceptual salience of the detected reflections by means of a masking threshold. In this work, a vertical dependency is incorporated into both of these components, inspired by recent research revealing that two sound sources in the median plane can evoke two independent auditory events if their spatial separation is sufficiently large. The proposed algorithm is evaluated in nine simulated shoebox rooms with a wide range of sizes and reverberation times. Evaluation results show improved selection of early reflections when source elevation is accounted for and suggest that, for speech signals, the perceptual quality increases with an increasing number of rendered early reflections.
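The two-step idea described in the abstract — detect reflections in a room impulse response, then keep only those that exceed a masking threshold — can be illustrated with a minimal sketch. The threshold model below (a fixed offset below the direct sound that decays linearly over time) and all numeric values are illustrative assumptions, not the perceptual model from the paper, which additionally incorporates spatio-temporal windowing and elevation dependency.

```python
import numpy as np

def detect_salient_reflections(rir, fs, mask_offset_db=-20.0, decay_db_per_s=-60.0):
    """Flag reflections whose level exceeds a simple masking threshold.

    The threshold starts `mask_offset_db` below the direct sound and decays
    linearly at `decay_db_per_s` (placeholder values, not the paper's model).
    Returns the sample indices of salient reflections.
    """
    env_db = 20.0 * np.log10(np.abs(rir) + 1e-12)     # log magnitude envelope
    direct_idx = int(np.argmax(env_db))               # direct sound = strongest peak
    t = (np.arange(len(rir)) - direct_idx) / fs       # time relative to direct sound, s
    threshold = env_db[direct_idx] + mask_offset_db + decay_db_per_s * t
    salient = (env_db > threshold) & (t > 0)          # reflections above the mask
    return np.flatnonzero(salient)

# Toy impulse response: direct sound plus one strong and one weak reflection.
fs = 48000
rir = np.zeros(fs // 10)
rir[100] = 1.0        # direct sound
rir[1000] = 0.5       # strong early reflection (above the mask)
rir[4000] = 0.001     # weak late reflection (masked)
idx = detect_salient_reflections(rir, fs)  # only the strong reflection survives
```

In this toy case the reflection at sample 1000 (about 6 dB below the direct sound) clears the mask, while the one at sample 4000 falls below it and is discarded — the same kind of pruning the paper performs with a perceptually validated threshold.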
Authors:
Jüterbock, Tobias; Brinkmann, Fabian; Gamper, Hannes; Raghuvanshi, Nikunj; Weinzierl, Stefan
Affiliations:
Audio Communication Group, Technical University of Berlin, Berlin, Germany (Jüterbock, Brinkmann, Weinzierl); Microsoft Research, Redmond, WA, USA (Gamper, Raghuvanshi)
JAES Volume 71 Issue 10 pp. 664-678; October 2023
Publication Date:
October 10, 2023
This paper is Open Access and available for free download.