This work is licensed under a Creative Commons Attribution 4.0 International License.
Object-based audio (OBA) is an approach to sound storage, transmission, and reproduction in which individual audio objects are delivered with associated metadata and rendered at the client side of the broadcast chain. For example, metadata may indicate an object's position or the level or language of a dialogue track. An experiment was conducted to investigate how content creators perceive changes in perceptual attributes when the same content is rendered to different systems, and how they would change the mix if they had control of it. The main aims of this experiment were to identify a small number of the most common mix processes used by sound designers when mixing object-based content to loudspeaker systems with different numbers of channels, and to understand how the perceptual attributes of OBA content change when it is rendered to different systems. The goal is to minimize perceived changes relative to standard Vector Base Amplitude Panning (VBAP) and matrix-based downmixes. Text mining and clustering of the content creators' responses revealed six general mix processes: the spatial spread of individual objects, EQ and processing, reverberation, position, bass, and level. Logistic regression models show the relationships between the mix processes, perceived changes in perceptual attributes, and the rendering method/loudspeaker layout. The relative frequency of different mix processes was found to differ among categories of audio object, suggesting that any downmix rules should be object-category specific. These results give insight into how OBA can be used to improve listener experience.
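As background for the two rendering methods named in the abstract, the following is a minimal sketch, not the paper's implementation: 2-D VBAP gains computed from Pulkki's vector-base formulation (source direction expressed as a linear combination of the two adjacent loudspeaker unit vectors), and a static 5.1-to-stereo matrix downmix using the common ITU-R BS.775-style -3 dB coefficients. The function names and channel labels are illustrative assumptions.

```python
import numpy as np

def vbap_2d_gains(source_deg, spk1_deg, spk2_deg):
    """2-D VBAP gains for a source panned between two loudspeakers.

    Solves g @ L = p, where the rows of L are the loudspeaker unit
    vectors and p is the source direction unit vector, then
    power-normalizes so that sum(g^2) = 1.
    """
    def unit(deg):
        rad = np.radians(deg)
        return np.array([np.cos(rad), np.sin(rad)])

    L = np.vstack([unit(spk1_deg), unit(spk2_deg)])  # 2x2 vector base
    p = unit(source_deg)
    g = p @ np.linalg.inv(L)       # unnormalized gain factors
    return g / np.linalg.norm(g)   # constant-power normalization

def downmix_51_to_stereo(ch):
    """Static matrix downmix, 5.1 -> stereo.

    ch: dict of sample arrays keyed "L", "R", "C", "LFE", "Ls", "Rs".
    Centre and surrounds are folded in at -3 dB (sqrt(0.5)); the LFE
    channel is discarded, as is common in such downmix matrices.
    """
    a = np.sqrt(0.5)  # -3 dB coefficient
    left = ch["L"] + a * ch["C"] + a * ch["Ls"]
    right = ch["R"] + a * ch["C"] + a * ch["Rs"]
    return left, right

# A source straight ahead (0 deg) between speakers at +/-30 deg
# receives equal gains of sqrt(2)/2 in each loudspeaker.
g = vbap_2d_gains(0.0, 30.0, -30.0)
```

A source placed exactly at one loudspeaker's azimuth yields a gain of 1 for that speaker and 0 for the other, which is what makes VBAP a natural baseline renderer against which perceived mix changes can be judged.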
Woodcock, James; Davies, William J.; Melchior, Frank; Cox, Trevor J.
Affiliations: University of Salford, Salford, United Kingdom; BBC R&D, Dock House, MediaCityUK, Salford, United Kingdom
JAES Volume 66 Issue 1/2 pp. 44-59; January 2018
Publication Date: February 14, 2018