Wave Field Synthesis (WFS) enables the reproduction of complex auditory scenes and moving sound sources. Moving sound sources induce time-variant delays of the source signals. To avoid severe distortions, sophisticated delay interpolation techniques must be applied. The typically large numbers of both virtual sources and loudspeakers in a WFS system result in a very high number of simultaneous delay operations, making delay processing one of the most performance-critical aspects of a WFS rendering system. In this article, we investigate delay interpolation algorithms suitable for WFS. To overcome the prohibitive computational cost of high-quality algorithms, we propose a computational structure that achieves a significant complexity reduction through a novel algorithm partitioning and efficient data reuse.
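As background to the problem the abstract describes, a fractional (time-variant) delay can be realized by reading from a delay line at a non-integer offset and interpolating between neighboring samples. The sketch below uses simple linear interpolation; it is an illustrative example only, not the algorithm or partitioning proposed in the paper, and the function and parameter names are the author's own.

```python
import numpy as np

def fractional_delay_read(buffer: np.ndarray, delay: float) -> float:
    """Read a sample `delay` samples behind the newest sample in `buffer`,
    using linear interpolation between the two nearest stored samples.

    Illustrative sketch only; higher-quality interpolators (e.g. Lagrange
    or windowed-sinc) reduce the distortion that motivates this paper.
    """
    n = int(np.floor(delay))      # integer part of the delay
    frac = delay - n              # fractional part in [0, 1)
    # buffer[-1] is the most recent sample; negative indexing walks back in time.
    x0 = buffer[-1 - n]           # sample just after the read point
    x1 = buffer[-2 - n]           # sample just before the read point
    return (1.0 - frac) * x0 + frac * x1

# Example: with samples [0, 1, 2, 3] (3 newest), a delay of 0.5 samples
# interpolates halfway between 3 and 2, giving 2.5.
```

In a WFS renderer, one such read would be performed per virtual-source/loudspeaker pair per sample, which is why the paper targets the cost of exactly this operation.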
Authors:
Franck, Andreas; Brandenburg, Karlheinz; Richter, Ulf
Affiliations:
Fraunhofer IDMT; HTWK Leipzig
AES Convention:
125 (October 2008)
Paper Number:
7613
Publication Date:
October 1, 2008
Subject:
Spatial Audio Processing