In this paper, an algorithmic approach to computing quantifiable metrics of HRTF spectral magnitude synthesis performance in virtual sound systems, such as those used in VR/AR/MR environments, is presented. Using regularized regression in parallel with a statistical information theory technique, the system provides a detailed analysis of a virtual spatializer’s spectral magnitude rendering accuracy at a given point in space. Applying the proposed system to the final signal processing stage of a spatial audio rendering pipeline enables the engineer to establish critical performance quantities against which future modifications to the rendering channel can be benchmarked. The proposed system represents an important step towards standardizing and automating virtual audio system evaluation and may ultimately act as a participant substitute during critical listening tasks.
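As a rough illustration of the kind of analysis the abstract describes, the sketch below compares a rendered HRTF magnitude spectrum against a reference spectrum using ridge regression and a simple information-theoretic distance (KL divergence between normalized spectra). This is a minimal sketch under assumed choices; the data, function names, regularization, and metrics are hypothetical and are not taken from the paper.

```python
# Hypothetical sketch: quantify how closely a renderer's output magnitude
# spectrum matches a reference HRTF magnitude at one spatial position.
# The feature construction and metrics are illustrative assumptions only.
import numpy as np

def ridge_fit(X, y, lam=1e-2):
    """Closed-form ridge regression: w = (X^T X + lam*I)^-1 X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def spectral_kl(p_mag, q_mag, eps=1e-12):
    """KL divergence between magnitude spectra normalized to sum to 1."""
    p = p_mag / (p_mag.sum() + eps)
    q = q_mag / (q_mag.sum() + eps)
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

# Toy data: a stand-in reference HRTF magnitude and a perturbed "rendered"
# approximation over 256 frequency bins (both hypothetical).
rng = np.random.default_rng(0)
freqs = np.linspace(0.0, 1.0, 256)
reference = np.abs(np.sinc(4 * freqs)) + 0.1
rendered = reference * np.exp(0.05 * rng.standard_normal(256))

# Regress the rendered log-magnitude onto the reference log-magnitude (plus a
# bias term); the residual summarizes spectral-magnitude error at this direction.
X = np.column_stack([np.ones_like(freqs), np.log(reference)])
w = ridge_fit(X, np.log(rendered))
residual_db = (20 / np.log(10)) * (np.log(rendered) - X @ w)

print("ridge coefficients:", w)
print("RMS spectral error (dB):", np.sqrt(np.mean(residual_db**2)))
print("KL divergence (nats):", spectral_kl(reference, rendered))
```

In a benchmarking workflow of the kind the abstract envisions, quantities such as the RMS residual and the divergence value could be logged per direction and compared before and after changes to the rendering pipeline; the specific thresholds and aggregation used in the paper are not reproduced here.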
Authors:
Crawford, Steven; Audfray, Rémi; Jot, Jean-Marc
Affiliations:
University of Rochester; Magic Leap
AES Conference:
2020 AES International Conference on Audio for Virtual and Augmented Reality (August 2020)
Paper Number:
1-6
Publication Date:
August 13, 2020
This paper is Open Access, which means you can download it for free.