This paper introduces an adaptive binaural rendering algorithm that places a sound image at a desired location for user-interactive headphone listening. The proposed algorithm maintains stable sound localization during the listener's head movement by minimizing both the localization error and the timbral degradation caused by HRTF filtering. This is achieved by separating the input channel signals into direct and ambient components and applying the corresponding HRTF filtering, with the desired reverberation, according to the listener's head position. A set of experiments shows that the proposed algorithm provides precise localization.
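The processing chain the abstract describes can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names are hypothetical, the direct-ambient split shown here is a crude mid/residual decomposition (the eBrief does not specify the separation method), and the HRIRs are assumed to be supplied externally for the listener's current head position.

```python
import numpy as np
from scipy.signal import fftconvolve

def separate_direct_ambient(left, right):
    """Crude direct/ambient split: the inter-channel correlated (mid)
    part is treated as direct sound, the residuals as ambience.
    Illustrative only; the paper's separation method is not specified."""
    direct = 0.5 * (left + right)
    ambient_l = left - direct
    ambient_r = right - direct
    return direct, ambient_l, ambient_r

def render_binaural(left, right, hrir_l, hrir_r, ambient_gain=1.0):
    """Convolve the direct component with HRIRs chosen for the current
    head position, and mix the ambience back in without HRTF filtering
    to limit timbral coloration (one plausible reading of the abstract)."""
    direct, amb_l, amb_r = separate_direct_ambient(left, right)
    out_l = fftconvolve(direct, hrir_l)[: len(left)] + ambient_gain * amb_l
    out_r = fftconvolve(direct, hrir_r)[: len(right)] + ambient_gain * amb_r
    return out_l, out_r
```

In a head-tracked renderer, `hrir_l` and `hrir_r` would be re-selected (or interpolated) from an HRTF database each time the tracker reports a new head orientation, so the direct component stays anchored at the desired source position.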
Authors:
Jo, Hyun; Park, Jaeha; Son, Sangmo; Kim, Sunmin
Affiliation:
DMC R&D Center, Samsung Electronics Co., Suwon, Gyeonggi-do, Korea
AES Convention:
139 (October 2015)
eBrief: 223
Publication Date:
October 23, 2015