In an augmented reality environment, real-world and virtual audio signals are presented to a listener simultaneously, and virtual sound content should not interfere with real sound sources. To examine whether this coexistence is possible, we investigated spatial auditory masking between maskers and maskees, where the maskers were real sound signals emitted from loudspeakers and the maskees were virtual sound images generated with head-related transfer functions (HRTFs) and presented over headphones. The experiment used open-ear headphones, which allow the listener to hear the environment while listening to the audio content. The results are similar to those of a previous experiment in which both the masker and the maskee were real signals emitted from loudspeakers: for a given masker location, the masking threshold level as a function of maskee location is symmetric about the subject's frontal plane. However, the masking threshold levels were lower than in the previous experiment, possibly because of the limited localization accuracy of sound images rendered with HRTFs. The results indicate that spatial auditory masking in human hearing occurs with virtually localized sound images just as it does with real sound signals.
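The virtual sound images described above are produced by binaural rendering: a mono source signal is convolved with the left- and right-ear head-related impulse responses (the time-domain counterparts of HRTFs) for the desired direction. The sketch below illustrates only this general principle, not the paper's actual setup; the HRIR coefficients are toy placeholders, not measured data.

```python
import math

def convolve(x, h):
    """Direct-form FIR convolution: y[n] = sum_k h[k] * x[n - k]."""
    y = [0.0] * (len(x) + len(h) - 1)
    for n, xn in enumerate(x):
        for k, hk in enumerate(h):
            y[n + k] += xn * hk
    return y

def render_virtual_source(mono, hrir_left, hrir_right):
    """Binaural rendering: convolve a mono signal with the left- and
    right-ear head-related impulse responses for one direction."""
    return convolve(mono, hrir_left), convolve(mono, hrir_right)

# Toy HRIRs (illustrative placeholders): the right-ear response is
# louder and slightly delayed, crudely mimicking a source located to
# the listener's right.
hrir_l = [0.3, 0.1]
hrir_r = [0.0, 0.9, 0.2]

# 10 ms of a 440 Hz tone at a 48 kHz sampling rate.
tone = [math.sin(2 * math.pi * 440 * n / 48000) for n in range(480)]
left, right = render_virtual_source(tone, hrir_l, hrir_r)
```

In practice the HRIRs would come from a measured HRTF data set for the target direction, and the two output channels would be played over the left and right headphone drivers; the interaural level and time differences encoded in the HRIRs are what let the listener localize the virtual image.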
Authors:
Nishiguchi, Masayuki; Ishihara, Soma; Watanabe, Kanji; Abe, Koji; Takane, Shouichi
Affiliation:
Akita Prefectural University, Akita, Japan
AES Convention:
151 (October 2021)
Paper Number:
10524
Publication Date:
October 13, 2021
Subject:
Evaluation of spatial audio