Using nonindividualized head-related transfer functions (HRTFs) in virtual audio synthesis produces front-back confusions, up-down reversals, in-head localization, and timbral coloration, with elevation and frontal localization most affected. However, obtaining individualized HRTFs is a tedious process that requires complex acoustical measurements for each listener; an HRTF model that avoids such measurements would greatly simplify individualization. In this research, individualization of median-plane HRTFs is explored using frontal projection headphones combined with a spherical head model, because the frontal positioning of the headphone transducer inherently captures the listener's idiosyncratic frontal spectral cues. To create the HRTFs, the important peak (P1) and notches (N1, N2) are first extracted from the frontal headphone response and then shifted in frequency in accordance with the elevation angle. Detailed subjective experiments indicated that subjects were able to localize virtual sound sources accurately with the modeled HRTFs, achieving results comparable to those obtained with individualized HRTFs.
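The peak/notch processing described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: it assumes scipy's find_peaks for locating P1 and the two most prominent pinna notches in a magnitude response, and the linear elevation-to-frequency mapping (slope_hz_per_deg) is a hypothetical placeholder for the elevation-dependent shift the paper actually derives.

    # Minimal sketch of peak/notch extraction and elevation-dependent
    # frequency shifting; NOT the authors' code. The elevation mapping
    # below is a hypothetical linear placeholder.
    import numpy as np
    from scipy.signal import find_peaks

    def extract_p1_n1_n2(magnitude_db, freqs):
        """Locate the first major peak (P1) and the two most prominent
        notches (N1, N2) in a magnitude response."""
        # Peaks of the response give spectral peaks.
        peak_idx, _ = find_peaks(magnitude_db, prominence=3.0)
        # Peaks of the negated response give notches.
        notch_idx, props = find_peaks(-magnitude_db, prominence=3.0)
        p1 = freqs[peak_idx[0]] if peak_idx.size else None
        # Keep the two most prominent notches, sorted by frequency.
        order = np.argsort(props["prominences"])[::-1]
        notches = np.sort(freqs[notch_idx[order[:2]]])
        return p1, notches

    def shift_notches(notch_freqs, elevation_deg, slope_hz_per_deg=30.0):
        """Hypothetical linear shift of notch frequencies with elevation;
        the paper specifies its own elevation mapping."""
        return notch_freqs + slope_hz_per_deg * elevation_deg

    # Usage with a synthetic response standing in for a measured
    # frontal headphone transfer function.
    freqs = np.linspace(200, 16000, 512)
    mag_db = (10 * np.exp(-((freqs - 4000) / 800) ** 2)    # P1 near 4 kHz
              - 12 * np.exp(-((freqs - 7000) / 400) ** 2)  # N1 near 7 kHz
              - 9 * np.exp(-((freqs - 11000) / 600) ** 2)) # N2 near 11 kHz

    p1, notches = extract_p1_n1_n2(mag_db, freqs)
    print(f"P1 = {p1:.0f} Hz, notches = {notches.round()} Hz")
    print("Shifted for 30 deg elevation:", shift_notches(notches, 30.0).round())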
Authors:
Sunder, Kaushik; Gan, Woon-Seng
Affiliation:
Digital Signal Processing Lab, School of EEE, Nanyang Technological University, Singapore
JAES Volume 64 Issue 12 pp. 1026-1041; December 2016
Publication Date:
December 27, 2016