In two-channel (stereo) applications, such as televisions, automotive infotainment, and hi-fi systems, the loudspeakers are typically placed close to each other. The sound field generated by such a setup is perceived as nearly monophonic, lacking sufficient spatial "presence." To overcome this limitation, a stereo expansion technique may be used to widen the soundstage, giving listeners the perception that sound originates from a wider angle (e.g., +/- 30 degrees relative to the median plane) using head-related transfer functions (HRTFs). In this paper, we propose extensions to the head model (viz., the ipsilateral and contralateral head-shadow functions) based on an analysis of the diffraction of sound around cephalometric features of the head, such as the nose, whose dimensions are of the right order to cause variations in the head-shadow responses in the high-frequency region. Modeling these variations is important for accurate rendering of a spatialized sound field in 3-D audio applications. Specifically, this paper presents refinements to existing spherical head models for spatial audio applications.
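The spherical head model the paper refines is commonly approximated, following Brown and Duda, by a single-pole/single-zero head-shadow filter whose high-frequency gain varies with the angle of incidence relative to the ear. The sketch below is not the authors' proposed extension; it is a minimal illustration of that classical baseline, with the standard illustrative parameters (head radius ~8.75 cm, alpha_min = 0.1, theta_min = 150 degrees) assumed rather than taken from this paper.

```python
import math

def headshadow_gain(f, theta_deg, a=0.0875, c=343.0):
    """Magnitude of the Brown-Duda one-pole/one-zero spherical
    head-shadow approximation.

    f         : frequency in Hz
    theta_deg : angle of incidence from the ear axis (0 = ipsilateral)
    a         : head radius in meters (illustrative value)
    c         : speed of sound in m/s
    """
    w = 2.0 * math.pi * f
    w0 = c / a  # characteristic frequency of the sphere (rad/s)
    # Angle-dependent zero placement: boosts highs on the near side,
    # shadows (attenuates) them on the far side of the head.
    alpha_min, theta_min = 0.1, 150.0
    alpha = (1.0 + alpha_min / 2.0) + (1.0 - alpha_min / 2.0) * \
        math.cos(math.radians(theta_deg / theta_min * 180.0))
    num = complex(1.0, alpha * w / (2.0 * w0))
    den = complex(1.0, w / (2.0 * w0))
    return abs(num / den)
```

At low frequencies the sphere is acoustically transparent (gain near unity at every angle), while at high frequencies the ipsilateral response (theta = 0) is boosted by up to about 6 dB and the contralateral response is shadowed; it is exactly this high-frequency region where the paper's cephalometric refinements (e.g., nose diffraction) add detail the bare sphere misses.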
Authors:
Bharitkar, Sunil; Gislason, Pall
Affiliations:
Audyssey Labs.; University of Southern California
AES Convention:
123 (October 2007)
Paper Number:
7280
Publication Date:
October 1, 2007
Subject:
Signal Processing for 3-D Audio