In multichannel sound reproduction, virtual sources or sound images in different directions can be recreated via the principle of summing localization over multiple loudspeakers or real sound sources. On the basis of Wallach's hypothesis that the variations in the interaural time difference caused by head turning provide dynamic cues for front-back and vertical localization, the present study develops a framework for analyzing the vertical summing localization of multichannel sound reproduction with amplitude panning. The previously derived localization equations, which were based on the simplified shadowless head model, are reviewed and psychoacoustic explanations are provided. An HRTF-based method for analyzing vertical summing localization more rigorously is described. Based on the proposed framework and method, vertical summing localization for pair-wise amplitude panning in the median plane and for Ambisonics is analyzed. The results confirm previous observations that, for some appropriate loudspeaker configurations, pair-wise amplitude panning is able to recreate a virtual source between two loudspeakers in the median plane.
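The abstract does not reproduce the panning law itself, but pair-wise amplitude panning between two loudspeakers is commonly formulated with the tangent law, where the gain ratio determines the direction of the summed virtual source. The sketch below is a minimal illustration of that general idea applied to a pair of median-plane loudspeakers at different elevations; the function name, angle convention, and constant-power normalization are assumptions for illustration, not the paper's analysis (which is HRTF-based and uses dynamic interaural cues).

```python
import math

def panning_gains(target_deg, elev_lo_deg, elev_hi_deg):
    """Tangent-law amplitude panning gains for a loudspeaker pair.

    Angles are elevations in degrees, measured relative to the midpoint
    between the two loudspeakers. The gains satisfy
        tan(phi) / tan(phi0) = (g_hi - g_lo) / (g_hi + g_lo),
    where phi is the target angle from the midpoint and +/- phi0 are the
    loudspeaker angles from the midpoint. The result is normalized to
    unit power (g_lo**2 + g_hi**2 == 1). Valid for targets between the
    two loudspeakers.
    """
    mid = 0.5 * (elev_lo_deg + elev_hi_deg)
    phi0 = math.radians(0.5 * (elev_hi_deg - elev_lo_deg))
    phi = math.radians(target_deg - mid)
    r = math.tan(phi) / math.tan(phi0)  # (g_hi - g_lo) / (g_hi + g_lo)
    # Solve with g_lo + g_hi = 1, then renormalize to constant power.
    g_lo, g_hi = (1.0 - r) / 2.0, (1.0 + r) / 2.0
    norm = math.hypot(g_lo, g_hi)
    return g_lo / norm, g_hi / norm
```

For a pair at 0 and 30 degrees elevation, a target at 0 degrees yields gains (1, 0), and a target at the 15-degree midpoint yields equal gains of 1/sqrt(2) each.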
Authors:
Xie, Bosun; Mai, Haiming; Rao, Dan; Zhong, Xiaoli
Affiliations:
State Key Laboratory of Subtropical Building Science, South China University of Technology, Guangzhou, China; Acoustic Lab., School of Physics and Optoelectronics, South China University of Technology, Guangzhou, China
JAES Volume 67 Issue 6 pp. 382-399; June 2019
Publication Date:
June 9, 2019