Over the past decade, audio for extended reality has become critical to delivering a truly immersive sound experience. With headphones a popular playback medium, binaural audio is one of the most convenient formats for delivering accurate spatial audio. Personalized Head-Related Transfer Functions (HRTFs) are an integral component of binaural audio and determine the quality of the spatial audio experience. In this paper, we present a pilot study that predicts personalized HRTFs from 2D images or a video capture. We explore the components of this process, including 3D reconstruction of the ear from 2D images or video, followed by HRTF estimation using neural networks.
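The final stage the abstract describes, mapping ear shape to an HRTF with a neural network, can be sketched as a simple regression from geometric features to frequency-domain magnitudes. The sketch below is purely illustrative and is not the paper's model: the feature count, layer sizes, frequency-bin count, and the function name `predict_hrtf` are all assumptions, and the random weights stand in for a trained network.

```python
import numpy as np

# Hypothetical sketch: a small MLP mapping ear-geometry descriptors
# (e.g. extracted from a 3D ear reconstruction) to HRTF magnitude bins
# for one source direction. Dimensions are illustrative assumptions.

rng = np.random.default_rng(0)

N_FEATURES = 16    # assumed number of ear-shape descriptors
N_HIDDEN = 32      # assumed hidden-layer width
N_FREQ_BINS = 64   # assumed number of HRTF magnitude samples

# Randomly initialised weights stand in for a trained network.
W1 = rng.normal(0.0, 0.1, (N_FEATURES, N_HIDDEN))
b1 = np.zeros(N_HIDDEN)
W2 = rng.normal(0.0, 0.1, (N_HIDDEN, N_FREQ_BINS))
b2 = np.zeros(N_FREQ_BINS)

def predict_hrtf(ear_features: np.ndarray) -> np.ndarray:
    """Forward pass: ear features -> HRTF magnitude response (dB)."""
    h = np.maximum(0.0, ear_features @ W1 + b1)  # ReLU hidden layer
    return h @ W2 + b2                            # linear output layer

features = rng.normal(size=N_FEATURES)
hrtf_mag = predict_hrtf(features)
print(hrtf_mag.shape)  # one magnitude value per frequency bin
```

In practice such a network would be trained on a measured HRTF dataset, and a full system would predict responses for many directions (and interaural time differences) rather than a single magnitude curve.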
Authors:
Javeri, Nikhil; Dutta, Prabal Bijoy; Sunder, Kaushik; Jain, Kapil
Affiliation:
Embody, San Mateo, CA, USA
AES Conference:
2022 AES International Conference on Audio for Virtual and Augmented Reality (August 2022)
Paper Number:
26
Publication Date:
August 15, 2022
Subject:
Paper