The ability to localize sound sources in 3-D space was tested in humans. Subjects listened to noise stimuli filtered with subject-specific head-related transfer functions (HRTFs). In an experiment with naïve subjects, the conditions comprised the type of visual environment (darkness or a structured virtual world), presented via a head-mounted display, and the pointing method (head pointing or manual pointing). The results show that errors in the horizontal dimension were smaller with head pointing, whereas manual pointing yielded smaller errors in the vertical dimension. Overall, the effect of pointing method was significant but small. The presence of a structured virtual visual environment significantly improved localization accuracy in all conditions, supporting the benefit of a virtual visual environment in acoustic tasks such as sound localization.
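Filtering a stimulus with subject-specific HRTFs, as described in the abstract, amounts to convolving the source signal with the measured head-related impulse responses (HRIRs) for each ear. The sketch below is illustrative only, not the authors' implementation: the sample rate, HRIR length, and random placeholder HRIRs are assumptions, and a real experiment would use HRIRs measured on the individual listener.

```python
import numpy as np

# Assumed parameters; a real setup would use the measurement system's values.
fs = 44100                         # sample rate in Hz (assumption)
noise = np.random.randn(fs // 2)   # 500 ms Gaussian noise burst

# Placeholder HRIRs for one source direction. In practice these are the
# measured, subject-specific impulse responses for the left and right ears.
hrir_left = np.random.randn(256) * 0.1
hrir_right = np.random.randn(256) * 0.1

# Convolving the source with each ear's HRIR yields the binaural signal
# that is presented over headphones.
left = np.convolve(noise, hrir_left)
right = np.convolve(noise, hrir_right)
binaural = np.stack([left, right], axis=0)  # shape: (2, len(noise) + 255)
```

Playing `binaural` over headphones (one row per ear) reproduces the direction-dependent spectral and timing cues that the HRIRs encode, which is what allows localization of virtual sources in 3-D space.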
Authors:
Goupell, Matthew; Laback, Bernhard; Majdak, Piotr; Mihocic, Michael
Affiliation:
Acoustics Research Institute, Austrian Academy of Sciences
AES Convention:
124 (May 2008)
Paper Number:
7407
Publication Date:
May 1, 2008
Subject:
Psychoacoustics, Perception, and Listening Tests