Object-based sound reproduction, on the one hand, allows sound engineers to interact with sound objects not only during production but also in the reproduction venue. On the other hand, object-based systems are quite complex: multicore audio processors are used to render sound scenes consisting of hundreds of audio objects for reproduction over a large number of loudspeaker channels. This creates the need for applications that are optimally adapted to the user and for working tasks that can be parallelized. This paper outlines a software architecture that helps to incorporate the multitude of audio processing components of an object-based spatial audio environment into a unified system. The architecture allows multiple sound engineers to collaboratively access, monitor, control, and/or change the parameters of these system components using wireless mobile devices.
Authors:
Gatzsche, Gabriel; Sladeczek, Christoph
Affiliation:
Fraunhofer Institute for Digital Media Technology IDMT, Ilmenau, Germany
AES Convention:
136 (April 2014)
Paper Number:
9082
Publication Date:
April 25, 2014
Subject:
Spatial Audio