Directional Audio Coding (DirAC) is a perceptually motivated microphone technique that models the sound field as a combination of a plane wave and a surrounding diffuse field, with a time–frequency resolution that approximates that of human spatial hearing. In this paper a recently proposed covariance-domain spatial-sound rendering method was applied to optimize DirAC reproduction by minimizing the amount of decorrelated sound energy. When several semi-independent microphone signals were available, this procedure was shown to improve the overall perceived sound quality, especially with audio content that has an impulsive fine structure, such as applause and speech. In all tests, the covariance rendering method performed similarly to or better than the legacy rendering method, making it the preferred choice for performing DirAC synthesis.
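The core idea behind covariance-domain rendering can be illustrated with a minimal sketch: find a mixing matrix M that maps the input signals to outputs whose covariance matrix equals a target covariance, so that less (ideally no) decorrelated energy needs to be injected. The paper's method additionally keeps M close to a prototype mix and handles rank-deficient cases; the Cholesky-based construction below is a simplified illustration of the covariance-matching step only, not the authors' optimized solution.

```python
import numpy as np

def match_covariance(x, target_cov):
    """Mix a multichannel signal x (channels x samples) with a matrix M
    so that the output covariance equals target_cov.

    Simplified covariance-matching sketch: with Cx = Kx Kx^H and
    Cy = Ky Ky^H (Cholesky factors), M = Ky Kx^{-1} satisfies
    M Cx M^H = Cy, i.e. the output attains the target covariance
    without any decorrelated sound.
    """
    cx = x @ x.conj().T / x.shape[1]      # measured input covariance
    kx = np.linalg.cholesky(cx)           # cx = kx @ kx^H
    ky = np.linalg.cholesky(target_cov)   # target_cov = ky @ ky^H
    m = ky @ np.linalg.inv(kx)            # mixing matrix: m cx m^H = target_cov
    return m @ x

# Usage: render 2-channel noise to a prescribed channel covariance.
rng = np.random.default_rng(0)
x = rng.standard_normal((2, 4096))
target = np.array([[1.0, 0.3],
                   [0.3, 2.0]])
y = match_covariance(x, target)
cy = y @ y.T / y.shape[1]                 # equals target up to float error
```

The design choice this highlights is the paper's central one: whenever the input signals span enough independent energy, a pure mixing matrix can realize the target spatial covariance, and decorrelators (which degrade impulsive content such as applause) are only needed to cover the residual.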
Authors:
Vilkamo, Juha; Pulkki, Ville
Affiliation:
Aalto University, Espoo, Finland
JAES Volume 61 Issue 9 pp. 637-646; September 2013
Publication Date:
October 1, 2013
This paper is Open Access, which means you can download it for free.