Mobile robotic platforms are equipped with multimodal, human-like sensing, e.g., haptics, vision, and audition, in order to collect data from the environment. Recently, robotic binaural hearing approaches based on Head-Related Transfer Functions (HRTFs) have become a promising technique for localizing sounds in a three-dimensional environment with only two microphones. Usually, HRTF-based sound localization approaches are restricted to a single sound source. To overcome this limitation, Blind Source Separation (BSS) algorithms have been used to separate the sound sources before applying HRTF localization. However, these approaches are usually computationally expensive and, in the underdetermined case, restricted to sparse and statistically independent signals. In this paper we present an underdetermined sound localization approach that utilizes a superpositioned HRTF database. Our algorithm is capable of localizing sparse as well as broadband signals, even when the signals are not statistically independent.
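The abstract does not detail the superpositioned-HRTF method itself. As a point of reference only, the sketch below illustrates the basic building block that HRTF-based binaural localization relies on: matching the interaural transfer function of a two-microphone recording against entries of an HRTF database. Function and variable names, and the database layout, are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch (not the authors' algorithm): single-source direction
# estimation by matching the measured interaural transfer function
# against an HRTF database. Database layout and names are assumed.
import numpy as np

def localize_single_source(left, right, hrtf_db, nfft=1024):
    """Return the index of the HRTF database entry whose interaural
    spectral ratio best matches the binaural recording.

    left, right : 1-D arrays, signals from the two microphones
    hrtf_db     : list of (H_left, H_right) frequency responses,
                  one pair per candidate direction, each of length nfft//2 + 1
    """
    L = np.fft.rfft(left, nfft)
    R = np.fft.rfft(right, nfft)
    eps = 1e-12                                # avoid division by zero
    measured_ratio = R / (L + eps)             # interaural transfer function

    best_idx, best_err = -1, np.inf
    for idx, (H_l, H_r) in enumerate(hrtf_db):
        db_ratio = H_r / (H_l + eps)
        err = np.sum(np.abs(measured_ratio - db_ratio) ** 2)
        if err < best_err:
            best_idx, best_err = idx, err
    return best_idx
```

The paper extends this idea to the underdetermined multi-source case by searching over superpositions of HRTF database entries rather than single entries, which the sketch above does not attempt to reproduce.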
Authors:
Diepold, Klaus; Kronmüller, David; Rothbucher, Martin; Shen, Hao
Affiliation:
Technische Universität München, Munich, Germany
AES Convention:
129 (November 2010)
Paper Number:
8185
Publication Date:
November 4, 2010
Subject:
Emerging Applications