With the increasing demand for AR/VR technologies, obtaining individualized Head-Related Transfer Functions (HRTFs) for accurate binaural spatial audio reproduction has become a high-priority research subject. Meanwhile, recent developments in generative AI have achieved substantial success across several domains, including audio, language, and images. In this work, we propose a framework that first uses a 3D Convolutional Neural Network (CNN) based Vector-Quantized Variational AutoEncoder (VQ-VAE) to learn a regularized latent representation of HRTFs, leveraging both spatial and spectral correlations between neighboring magnitude HRTFs. We then use the Transformer architecture to map latent sequences derived from spatially sparse HRTF measurements to the latent sequences that define high-spatial-resolution HRTFs. This allows us to predict HRTFs at 1440 locations given sparse HRTF measurements from 25 locations, while also allowing freedom in the choice of sparse sampling locations. Evaluated over 10 validation subjects, we achieve a mean Log-Spectral Distortion (LSD) error of 4.5 dB, and we additionally demonstrate a contrived but informative case that achieves a mean LSD of 3 dB.
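As a rough illustration of the two-stage pipeline the abstract describes, the following PyTorch sketch wires together a 3D-CNN VQ-VAE over (elevation, azimuth, frequency) HRTF magnitude volumes and a Transformer stage that maps sparse-measurement latent codes to dense latent codes. All module names, shapes, and hyperparameters are illustrative assumptions and do not reflect the authors' actual implementation; training losses (reconstruction, codebook, commitment) and positional encodings are omitted for brevity.

    import torch
    import torch.nn as nn

    class VectorQuantizer(nn.Module):
        """Nearest-neighbor codebook lookup with a straight-through estimator."""
        def __init__(self, num_codes=512, code_dim=64):
            super().__init__()
            self.codebook = nn.Embedding(num_codes, code_dim)

        def forward(self, z):                                # z: (B, D, E, A, F)
            z_perm = z.permute(0, 2, 3, 4, 1).contiguous()   # channels last
            flat = z_perm.view(-1, z_perm.shape[-1])
            idx = torch.cdist(flat, self.codebook.weight).argmin(dim=1)
            quant = self.codebook(idx).view_as(z_perm).permute(0, 4, 1, 2, 3)
            quant = z + (quant - z).detach()                 # straight-through gradients
            return quant, idx.view(z.shape[0], -1)           # codes as a token sequence

    class HRTFVQVAE(nn.Module):
        """3D-CNN encoder/decoder over (elevation, azimuth, frequency) volumes."""
        def __init__(self, code_dim=64):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv3d(1, 32, 4, stride=2, padding=1), nn.ReLU(),
                nn.Conv3d(32, code_dim, 4, stride=2, padding=1))
            self.vq = VectorQuantizer(code_dim=code_dim)
            self.decoder = nn.Sequential(
                nn.ConvTranspose3d(code_dim, 32, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose3d(32, 1, 4, stride=2, padding=1))

        def forward(self, x):                                # x: (B, 1, E, A, F) magnitudes
            quant, codes = self.vq(self.encoder(x))
            return self.decoder(quant), codes

    class SparseToDenseMapper(nn.Module):
        """Transformer mapping sparse-measurement codes to dense latent codes."""
        def __init__(self, num_codes=512, d_model=128):
            super().__init__()
            self.embed = nn.Embedding(num_codes, d_model)
            self.transformer = nn.Transformer(d_model=d_model, nhead=8,
                                              batch_first=True)
            self.head = nn.Linear(d_model, num_codes)        # logits over the codebook

        def forward(self, sparse_codes, dense_codes):
            out = self.transformer(self.embed(sparse_codes),
                                   self.embed(dense_codes))
            return self.head(out)

    # Toy usage: an 8x16 spatial grid with 32 frequency bins per HRTF.
    x = torch.randn(2, 1, 8, 16, 32)
    vqvae = HRTFVQVAE()
    recon, codes = vqvae(x)                                  # recon: (2, 1, 8, 16, 32)
    mapper = SparseToDenseMapper()
    logits = mapper(codes[:, :16], codes)                    # predict the dense code sequence

In this sketch, predicting discrete codebook indices (rather than raw spectra) is what lets a standard sequence-to-sequence Transformer handle the sparse-to-dense HRTF mapping, with the frozen VQ-VAE decoder reconstructing magnitude HRTFs from the predicted codes.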
Authors:
Zurale, Devansh; Dubnov, Shlomo
Affiliations:
UC San Diego; UC San Diego
AES Conference:
AES 2023 International Conference on Spatial and Immersive Audio (August 2023)
Paper Number:
16
Publication Date:
August 23, 2023
Subject:
Paper