With the evolution of smart headphones, hearables, and hearing aids, there is a need for technologies that improve situational awareness. The device must constantly monitor real-world events and cue the listener to stay aware of the outside world. In this paper we develop a technique to identify the exact location of the dominant sound source using the unique spectral and temporal features of the listener’s head-related transfer functions (HRTFs). Unlike most state-of-the-art beamforming technologies, this method localizes the sound source using just two microphones, thereby reducing the cost and complexity of the technology. An experimental framework is set up at the EmbodyVR anechoic chamber, and hearing aid recordings are carried out for several different trajectories, SNRs, and turn-rates. Results indicate that the source localization algorithms perform well for dynamic moving sources across different SNR levels.
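To illustrate the general idea of HRTF-based localization with only two microphones, the sketch below matches the measured interaural level-difference (ILD) spectrum of a binaural recording against ILD templates derived from a set of candidate HRTFs. This is a minimal, hypothetical simplification for illustration only (the function name, the synthetic HRIR database, and the ILD-only matching criterion are assumptions), not the authors' actual algorithm, which also exploits temporal HRTF cues.

```python
import numpy as np

def localize_hrtf_match(left, right, hrtf_db, nfft=512):
    """Estimate source azimuth by matching the measured interaural
    level-difference (ILD) spectrum against ILD templates computed
    from candidate HRTF pairs (simplified, illustrative pipeline)."""
    # Measured ILD spectrum (dB) from the two microphone signals.
    L = np.abs(np.fft.rfft(left, nfft)) + 1e-12
    R = np.abs(np.fft.rfft(right, nfft)) + 1e-12
    ild = 20 * np.log10(L / R)
    # Compare against each candidate direction's template ILD.
    best_az, best_err = None, np.inf
    for az, (hL, hR) in hrtf_db.items():
        HL = np.abs(np.fft.rfft(hL, nfft)) + 1e-12
        HR = np.abs(np.fft.rfft(hR, nfft)) + 1e-12
        err = np.mean((ild - 20 * np.log10(HL / HR)) ** 2)
        if err < best_err:
            best_az, best_err = az, err
    return best_az

# Synthetic demo: toy HRIRs whose left/right gains vary with azimuth.
rng = np.random.default_rng(0)
imp = rng.standard_normal(256)
hrtf_db = {az: (np.cos(np.radians(az) / 2 + 0.3) * imp,
                np.cos(np.radians(az) / 2 - 0.3) * imp)
           for az in range(-90, 91, 30)}
src = rng.standard_normal(4096)
hL, hR = hrtf_db[30]
left = np.convolve(src, hL)[:4096]
right = np.convolve(src, hR)[:4096]
print(localize_hrtf_match(left, right, hrtf_db))  # → 30
```

In practice the template database would come from the listener's individually measured HRTFs, and robust systems would fuse ILD with interaural time-difference and spectral cues before deciding on a direction.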
Authors:
Sunder, Kaushik; Wang, Yuxiang
Affiliations:
Embody VR, Mountain View, CA, USA; University of Rochester, Rochester, NY, USA
AES Convention:
147 (October 2019)
Paper Number:
10289
Publication Date:
October 8, 2019
Subject:
Spatial Audio, Part 2