Many studies of 3D sound localization via headphones have used static sounds and individualized HRTFs. When dynamic cues and nonindividualized HRTFs are used, the 3D sound presentation time required for accurate localization becomes a question of interest, because turning the head toward the direction of the sound takes time. This paper presents an experimental study of subjects' reaction time for localizing 3D sound presented via headphones. There were 31 volunteers (16 males and 15 females). The experiment was conducted in a noise-isolated chamber. A Huron Lake CP4 system was used to generate the 3D sound, and a Flock of Birds motion-tracking system was used to introduce the dynamic cue for localization and to register head movement. The sound of coin drops, with a bandwidth of 0 Hz to 11,500 Hz, was used as the test sound. The sound was presented at any possible position on the horizontal plane at a constant distance from the center of the head. The subjects were instructed to localize the sound as quickly as possible by pointing the midline of the head toward the direction of the sound within ±10° of azimuth. The results showed that most subjects took less than 11 seconds to localize the sound. The shortest average reaction time was 5.8 s for female and 5.3 s for male subjects; the longest reaction time was 32 s for female and 42 s for male subjects. Large individual differences were found, which may be due to the generic HRTFs used in the experiment. Localization adaptation was also observed; it occurred after longer exposure to the same sound stimulation from the same location. The present study could not identify the exposure time needed for localization adaptation to develop.
Author:
Chen, Fang
Affiliation:
Swedish Center for Human Factors in Aviation, Linköping University, Linköping, Sweden
AES Conference:
22nd International Conference: Virtual, Synthetic, and Entertainment Audio (June 2002)
Paper Number:
000257
Publication Date:
June 1, 2002
Subject:
Virtual, Synthetic and Entertainment Audio