Auralisations with HRTFs are an innovative tool for the reproduction of acoustic space. Their broad applicability depends on the use of non-individualised models, but little is known about how humans adapt to these sounds. Previous findings have shown that mere exposure to non-individualised virtual sounds does not produce quick adaptation, but that training with feedback boosts this process. Here, we were interested in analysing the long-term effect of such training-based adaptation. In two separate experiments, we trained listeners in azimuth and elevation discrimination and retested them immediately, one hour, one day, one week, and one month afterwards. Results revealed that, with active learning and feedback, all participants lowered their localisation errors. This benefit was still present one month after training. Interestingly, participants who had previously trained with elevations were better at azimuth localisation, and vice versa. Our findings suggest that humans adapt easily to new anatomically shaped spectral cues and are able to transfer that adaptation to non-trained sounds.
Mendonça, Catarina; Santos, Jorge A.; Campos, Guilherme; Dias, Paulo; Vieira, José
Affiliations: University of Minho, Minho, Portugal; University of Aveiro, Aveiro, Portugal
AES Conference: 45th International Conference: Applications of Time-Frequency Processing in Audio (March 2012)
Paper Number: 4-2
Publication Date: March 1, 2012
Subject: Spatial Sound