Audio is often overlooked in favor of more attention-grabbing visuals in the quest to create lifelike VR scenes, where headphones are typically the playback medium. Individualized head-related transfer functions (HRTFs) are essential for immersive audio reproduction over headphones. However, conventional methods for obtaining individualized HRTFs require long measurement sessions and constrained head movements. Here, we developed a platform that enables users to capture their own HRTFs in an engaging, fast, and unconstrained manner, with on-the-fly processing and explicit visual feedback on measurement progress through a head-mounted display (HMD). To facilitate the evaluation of individualized against non-individualized HRTFs, a spatial audio renderer has been released that loads HRTF datasets in the SOFA format and switches HRTF sets on the fly in VR scenes. Both the acquisition system and the renderer are developed in Unity to allow seamless integration.
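The core rendering idea the abstract refers to can be sketched as follows: a mono source is convolved with the left- and right-ear head-related impulse responses (HRIRs, the time-domain counterparts of HRTFs), and switching HRTF sets on the fly requires a short crossfade to avoid clicks. All function names and the linear crossfade scheme below are illustrative assumptions, not the paper's actual Unity implementation.

```python
# Hypothetical sketch of HRIR-based binaural rendering and HRTF-set
# switching; not the authors' implementation.

def fir_convolve(x, h):
    """Direct-form FIR convolution of signal x with impulse response h."""
    y = [0.0] * (len(x) + len(h) - 1)
    for n, xn in enumerate(x):
        for k, hk in enumerate(h):
            y[n + k] += xn * hk
    return y

def render_binaural(mono, hrir_left, hrir_right):
    """Convolve a mono source with the left/right HRIRs for one direction."""
    return fir_convolve(mono, hrir_left), fir_convolve(mono, hrir_right)

def crossfade(old_block, new_block):
    """Linearly crossfade from the old HRTF set's output to the new one,
    avoiding an audible discontinuity at the switch point."""
    n = min(len(old_block), len(new_block))
    denom = max(n - 1, 1)
    return [((n - 1 - i) * old_block[i] + i * new_block[i]) / denom
            for i in range(n)]
```

In practice a renderer would do the convolution block-wise in the frequency domain and interpolate HRIRs between measured directions, but the signal path is the same.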
Authors:
Peksi, Santi; Hai, Nguyen Duy; Ranjan, Rishabh; Gupta, Rishabh; He, Jianjun; Gan, Woon Seng
Affiliations:
Nanyang Technological University, Singapore; Maxim Integrated, San Jose, CA, USA
AES Conference:
2019 AES International Conference on Headphone Technology (August 2019)
Paper Number:
16
Publication Date:
August 21, 2019