Personalization of HRTFs is essential for spatial sound rendering; one possible approach is based on one or more anthropometric measures of the subject. Measuring these anthropometrics seamlessly, accurately, and reliably remains a challenge. In this paper, we propose a system for obtaining anthropometric measurements, suitable for HRTF personalization, directly from a high-end headphone. The proposed system is multimodal and leverages existing sensors to extract features related to the listener's head dimensions. We propose three signal processing methodologies, one per sensor modality, and a fusion algorithm that aggregates the extracted features for robust anthropometry estimation. To verify the design, we use a dataset collected from 35 subjects. The proposed algorithm achieves a low error (RMSE) of 0.58 to 1.21 cm for human anthropometry estimation.
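The abstract describes fusing features from three sensor modalities and reports RMSE as the evaluation metric, but does not spell out the fusion algorithm. As a minimal illustrative sketch only, the following shows one plausible fusion scheme (inverse-variance weighting of hypothetical per-modality estimates) together with the RMSE metric used in the abstract; the function names and the weighting approach are assumptions, not the paper's actual method.

```python
import numpy as np

def fuse_estimates(estimates, variances):
    """Combine per-modality head-dimension estimates (cm) into one value.

    Hypothetical inverse-variance weighting: modalities with lower
    estimation variance get proportionally more weight. This is an
    illustrative stand-in, not the paper's published algorithm.
    """
    w = 1.0 / np.asarray(variances, dtype=float)
    est = np.asarray(estimates, dtype=float)
    return float(np.sum(w * est) / np.sum(w))

def rmse(predicted, ground_truth):
    """Root-mean-square error, the metric reported in the abstract."""
    p = np.asarray(predicted, dtype=float)
    t = np.asarray(ground_truth, dtype=float)
    return float(np.sqrt(np.mean((p - t) ** 2)))
```

For example, two equally reliable modality estimates of 14.0 cm and 15.0 cm fuse to 14.5 cm under equal weights.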
Authors:
Islam, Md Tamzeed; Tashev, Ivan
Affiliations:
University of North Carolina at Chapel Hill; Microsoft Research (see document for exact affiliation information)
AES Conference:
2020 AES International Conference on Audio for Virtual and Augmented Reality (August 2020)
Paper Number:
1-7
Publication Date:
August 13, 2020