Immersive audio technologies, ranging from the accurate rendering of spatialized sounds to efficient room simulations, are vital to the success of augmented and virtual realities. To produce realistic sounds through headphones, both the human head and body must be taken into account. However, measuring the influence of the external human morphology on the sounds arriving at the ears, often referred to as the head-related transfer function (HRTF), is expensive and time-consuming. Several datasets have been created over the years to help researchers work on immersive audio; nevertheless, the number of individuals involved and the amount of data collected are often insufficient for modern machine-learning approaches. Here, the SONICOM HRTF dataset is introduced to facilitate reproducible research in immersive audio. This dataset contains the HRTFs of 120 subjects, as well as headphone transfer functions; 3D scans of ears, heads, and torsos; and depth pictures at different angles around subjects' heads.
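To illustrate how an HRTF (or its time-domain counterpart, the head-related impulse response, HRIR) is used for headphone rendering, the sketch below convolves a mono signal with a left/right HRIR pair to produce a binaural signal. This is a minimal conceptual example, not the dataset's own API: the function name and the toy pure-delay HRIRs are assumptions made for illustration, standing in for measured responses such as those in SONICOM.

```python
import numpy as np

def spatialize(mono, hrir_left, hrir_right):
    """Convolve a mono signal with a pair of HRIRs to get a binaural signal.

    (Illustrative helper, not part of the SONICOM dataset tooling.)
    """
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right])

# Toy HRIRs: pure delays standing in for measured responses.
# For a source on the listener's left, sound reaches the left ear
# roughly 0.6 ms earlier; at 48 kHz that is about 29 samples.
fs = 48000
hrir_l = np.zeros(64)
hrir_l[5] = 1.0
hrir_r = np.zeros(64)
hrir_r[34] = 0.8  # later and attenuated at the far ear

click = np.zeros(256)
click[0] = 1.0
binaural = spatialize(click, hrir_l, hrir_r)

# Recover the interaural time difference (ITD) from the peak positions.
itd_samples = np.argmax(np.abs(binaural[1])) - np.argmax(np.abs(binaural[0]))
print(itd_samples / fs * 1e3)  # ITD in milliseconds, about 0.6 ms
```

With measured HRIRs, the same convolution also imprints the spectral cues caused by the pinnae, head, and torso, which is what makes individualized HRTFs valuable for realistic rendering.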
Engel, Isaac; Daugintis, Rapolas; Vicente, Thibault; Hogg, Aidan O. T.; Pauwels, Johan; Tournier, Arnaud J.; Picinali, Lorenzo
Affiliation: Audio Experience Design (www.axdesign.co.uk), Imperial College London, London, United Kingdom
JAES Volume 71 Issue 5 pp. 241-253; May 2023
Publication Date: May 9, 2023