Live sounds at a concert have spatial relationships to each other and to their environment. The microphone technique used to record the sounds, the placement and directional properties of the playback loudspeakers, and the room's response together determine the signals at the listener's ears and thus the rendering of the concert recording. For the frequency range in which interaural time differences (ITD) dominate directional hearing, a free-field transmission-line model is used to predict the placement of phantom sources between two loudspeakers. Level panning and time panning of monaural sources are investigated, and the effectiveness and limitations of different microphone pairs are shown. Recording techniques can be improved by recognizing the fundamental requirements for spatial rendering. Observations from a novel 4-loudspeaker setup that provides enhanced spatial rendering of 2-channel sound are presented.
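The paper's own transmission-line model is not reproduced in this abstract, but the kind of prediction it describes can be illustrated with the classic stereophonic "sine law" for level panning, which estimates the perceived phantom-source azimuth from the two channel gains. This is a minimal sketch under that assumed simplification, not the author's model; the function name and 30° half-base angle (a standard stereo layout) are illustrative choices.

```python
import math

def phantom_angle_deg(gain_left, gain_right, half_base_deg=30.0):
    """Estimate phantom-source azimuth (degrees, positive toward the
    left loudspeaker) from channel gains via the stereophonic sine law:
    sin(theta) = (gL - gR) / (gL + gR) * sin(theta0),
    where theta0 is half the loudspeaker base angle.
    Illustrative only; the paper uses a free-field transmission-line model.
    """
    ratio = (gain_left - gain_right) / (gain_left + gain_right)
    return math.degrees(math.asin(ratio * math.sin(math.radians(half_base_deg))))

# Equal gains place the phantom source at center (0 degrees);
# sending signal to one loudspeaker only places it at that loudspeaker.
print(phantom_angle_deg(1.0, 1.0))  # 0.0
print(phantom_angle_deg(1.0, 0.0))  # ~30.0
```

As the abstract notes, such free-field predictions hold only in the ITD-dominated low-frequency range and are further modified by loudspeaker directivity and the room's response.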
Author:
Linkwitz, Siegfried
Affiliation:
Linkwitz Lab, Corte Madera, CA, USA
AES Convention:
133 (October 2012)
Paper Number:
8713
Publication Date:
October 25, 2012
Subject:
Spatial Audio Over Loudspeakers