Head-related transfer functions (HRTFs) are used to create the perception of a virtual sound source at an arbitrary azimuth and elevation. Publicly available databases sample only a subset of these directions, owing to physical constraints (viz., the loudspeakers used to generate the stimuli are not point sources) and the time required to acquire and deconvolve responses for a large number of spatial directions. In this paper we present a subspace-based technique for reconstructing HRTFs at arbitrary directions for the IRCAM-Listen HRTF database, which comprises a set of HRTFs sampled every 15° in azimuth. The technique first augments the sparse IRCAM dataset using the concept of auditory localization blur, then derives a set of P = 6 principal components by applying PCA to the original and augmented HRTFs, and finally trains an artificial neural network (ANN) on these directional principal components. The HRTF for an arbitrary direction is reconstructed by post-multiplying the ANN output, the six estimated principal components, with a frequency weighting matrix. The advantage of the subspace approach, which involves only six principal components, is a low-complexity ANN-based HRTF synthesis model, as compared with training an ANN to output the HRTF over all frequencies. Objective results demonstrate reasonable interpolation performance with the presented approach.
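As a concrete illustration of the pipeline described above, the following is a minimal Python/NumPy/scikit-learn sketch of a PCA-plus-ANN HRTF reconstruction. The data shapes, the use of log-magnitude spectra, the sine/cosine direction encoding, and the network size are illustrative assumptions, not the authors' configuration, and the placeholder arrays merely stand in for the (augmented) IRCAM-Listen measurements.

```python
# Hypothetical sketch of a subspace (PCA + ANN) HRTF reconstruction pipeline.
# Shapes, input encoding, and network size are assumptions for illustration only.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Placeholder stand-in for the (augmented) dataset:
# one log-magnitude HRTF per direction (n_directions x n_freq_bins).
n_directions, n_freq_bins = 187, 256
hrtf_log_mag = rng.standard_normal((n_directions, n_freq_bins))
directions = np.column_stack([
    rng.uniform(0, 360, n_directions),    # azimuth in degrees
    rng.uniform(-45, 90, n_directions),   # elevation in degrees
])

# Step 1: derive P = 6 principal components (the frequency weighting matrix).
P = 6
pca = PCA(n_components=P)
pc_weights = pca.fit_transform(hrtf_log_mag)   # (n_directions, 6) per-direction weights
basis = pca.components_                        # (6, n_freq_bins) frequency weighting matrix

# Step 2: train an ANN mapping direction -> six principal-component weights.
# Sine/cosine encoding avoids the 0/360-degree azimuth wrap-around.
az, el = np.radians(directions[:, 0]), np.radians(directions[:, 1])
features = np.column_stack([np.sin(az), np.cos(az), np.sin(el), np.cos(el)])
ann = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
ann.fit(features, pc_weights)

# Step 3: reconstruct the HRTF at an arbitrary direction by post-multiplying
# the ANN output (estimated six principal components) with the basis.
def reconstruct_hrtf(azimuth_deg, elevation_deg):
    a, e = np.radians(azimuth_deg), np.radians(elevation_deg)
    x = np.array([[np.sin(a), np.cos(a), np.sin(e), np.cos(e)]])
    est_weights = ann.predict(x)               # (1, 6)
    return est_weights @ basis + pca.mean_     # (1, n_freq_bins) log-magnitude HRTF

# Example: a direction between the 15-degree azimuth grid points.
hrtf_at_22_5 = reconstruct_hrtf(22.5, 0.0)
```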
Authors:
Bharitkar, Sunil G.; Mauer, Timothy; Wells, Teresa; Berfanger, David
Affiliations:
HP Labs, Inc., San Francisco, CA, USA; HP, Inc., Vancouver, WA, USA
AES Convention:
145 (October 2018)
eBrief: 476
Publication Date:
October 7, 2018
Subject:
Spatial Audio