AES Convention Papers Forum

Automatic Localization of a Virtual Sound Image Generated by a Stereophonic Configuration

Sound localization systems aim to estimate the position of a particular sound source as perceived by the human auditory system. The main cues used for localization are the interaural level difference (ILD), the interaural time difference (ITD), and spectral representations of the binaural signals. When two sound sources are simultaneously active, a virtual source is created. This paper presents a novel approach to estimating the human perception of a sound image created by two loudspeakers. The solution is based on both frequency-dependent binaural and monaural cues, so as to account for the human auditory system's sensitivity to spatial sound localization. Experimental results demonstrate the effectiveness of the proposed approach in correctly estimating the horizontal and vertical position of the virtual source.
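As background to the virtual-source idea described in the abstract, the classical stereophonic "law of sines" relates the amplitude panning gains of two loudspeakers to the perceived azimuth of the phantom image. The sketch below is not the paper's frequency-dependent method; it is a minimal illustration of the standard panning law, assuming a symmetric loudspeaker pair at ±30° and positive azimuth toward the left loudspeaker.

```python
import math

def virtual_source_azimuth(g_left, g_right, speaker_half_angle_deg=30.0):
    """Estimate the perceived azimuth of a stereo virtual source with the
    classical stereophonic law of sines (illustrative only, not the
    paper's method):

        sin(theta_v) = (gL - gR) / (gL + gR) * sin(theta_0)

    where theta_0 is the half-angle subtended by each loudspeaker and
    positive azimuth points toward the left loudspeaker (an assumed
    sign convention)."""
    theta0 = math.radians(speaker_half_angle_deg)
    ratio = (g_left - g_right) / (g_left + g_right)
    return math.degrees(math.asin(ratio * math.sin(theta0)))

# Equal gains place the image at the center of the loudspeaker base:
print(round(virtual_source_azimuth(1.0, 1.0), 1))  # 0.0
# Only the left loudspeaker active: image at the left loudspeaker:
print(round(virtual_source_azimuth(1.0, 0.0), 1))  # 30.0
```

Note that this simple law is frequency-independent and purely horizontal; the paper's contribution, per the abstract, is precisely to go beyond such models by using frequency-dependent binaural and monaural cues to recover both the horizontal and vertical position of the virtual source.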

AES Convention: Paper Number:
Publication Date:

