One of the distinct features of a 3D image is that the depth perceived by the viewer is controlled so that objects appear to project toward the viewer. However, it has been difficult to move auditory images close to listeners using conventional loudspeakers and panning algorithms. In this study, we propose a new system for controlling auditory depth that incorporates two loudspeakers: one that radiates sound from in front of a listener and another that radiates plane waves from above the listener. With additional equalization that removes the spectral cues associated with elevation, the proposed system generates an auditory image "near a listener" and controls the depth perceived by the listener, thereby enhancing the listener's perception of 3D sound.
Authors:
Kim, Sungyoung; Okumura, Hiraku; Sakanashi, Hideki; Sone, Takurou
Affiliation:
Yamaha Corporation, Hamamatsu, Japan
AES Convention:
131 (October 2011)
Paper Number:
8484
Publication Date:
October 19, 2011
Subject:
Soundfield Analysis and Reproduction