Virtual audio synthesis and playback over headphones inherently suffer from several limitations, such as front-back confusion and in-head localization of the sound presented to the listener. The use of non-individual head-related transfer functions (HRTFs) further increases front-back confusion and degrades the virtual auditory image. In this paper, we present a method for customizing non-individual HRTFs by embedding personal cues derived from the distinctive morphology of the individual's ear, and we study the frontal projection of sound over headphones to reduce front-back confusion in 3-D audio playback. Additional processing blocks, such as decorrelation and front-back biasing, are implemented to externalize the frontal image and control its auditory depth. Subjective tests are conducted with these processing blocks, and their impact on localization is reported.
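The abstract mentions decorrelation as one of the processing blocks used to externalize the frontal image. One common way to decorrelate the two ear signals, sketched below, is to convolve each channel with a unit-magnitude, random-phase (allpass-like) FIR filter; this lowers interaural coherence without changing the magnitude spectrum. This is an illustrative sketch only, not the authors' implementation — the filter length and seeds are arbitrary choices.

```python
import numpy as np

def decorrelate(x, length=256, seed=0):
    """Return a decorrelated copy of mono signal x by convolving it with a
    unit-magnitude, random-phase FIR filter (energy-preserving, allpass-like).
    The filter length and seed are illustrative, not taken from the paper."""
    rng = np.random.default_rng(seed)
    # Random phases for bins 1 .. N/2-1; DC and Nyquist stay real (1.0).
    phase = rng.uniform(-np.pi, np.pi, length // 2 - 1)
    # Hermitian-symmetric spectrum with |H[k]| = 1 -> real impulse response.
    spectrum = np.concatenate(([1.0],
                               np.exp(1j * phase),
                               [1.0],
                               np.exp(-1j * phase[::-1])))
    h = np.real(np.fft.ifft(spectrum))
    return np.convolve(x, h)[: len(x)]

# Applying filters with different seeds to the left and right ears lowers
# the interaural coherence, which helps push the image out of the head.
x = np.random.default_rng(3).standard_normal(4800)  # placeholder test signal
left = decorrelate(x, seed=1)
right = decorrelate(x, seed=2)
```

Because the filter has unit magnitude at every frequency bin, it alters only phase, so the timbre of each channel is largely preserved while the left/right cross-correlation drops.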
Authors:
Sunder, Kaushik; Tan, Ee-Leng; Gan, Woon-Seng
Affiliation:
Nanyang Technological University, Singapore, Singapore
AES Convention:
133 (October 2012)
Paper Number:
8760
Publication Date:
October 25, 2012
Subject:
Spatial Audio