Object-based audio can be used to customize, personalize, and optimize audio reproduction depending on the specific listening scenario. To investigate and exploit the benefits of object-based audio, a framework for intelligent metadata adaptation was developed. The framework uses detailed semantic metadata that describes the audio objects, the loudspeakers, and the room. It features an extensible software tool for real-time metadata adaptation that can incorporate knowledge derived from perceptual tests and/or feedback from perceptual meters to drive adaptation and facilitate optimal rendering. One use case for the system is demonstrated through a rule-set (derived from perceptual tests with experienced mix engineers) for automatic adaptation of object levels and positions when rendering 3D content to two- and five-channel systems.
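To make the idea of rule-based metadata adaptation concrete, the sketch below shows what such a rule-set might look like in code. This is a minimal illustration, not the paper's actual rule-set or software tool: the metadata schema (`AudioObject`), the function name, and the specific rules (flattening elevation, narrowing azimuth for stereo, and a fixed level compensation) are all assumptions made for the example.

```python
from dataclasses import dataclass, replace

@dataclass
class AudioObject:
    """Semantic metadata for one audio object (hypothetical schema)."""
    name: str
    azimuth: float    # degrees, -180..180, 0 = front
    elevation: float  # degrees, 0 = horizontal plane
    level_db: float   # object gain in dB

def adapt_for_layout(obj: AudioObject, num_channels: int) -> AudioObject:
    """Apply a simple rule-set when rendering 3D content to 2.0 or 5.0.

    Illustrative rules only (not those derived in the paper):
    - Flatten elevation, since 2.0 and 5.0 layouts have no height channels,
      and attenuate formerly elevated objects slightly to compensate.
    - For stereo, compress azimuth into the +/-30 degree loudspeaker arc.
    """
    adapted = replace(obj)  # copy; leave the source metadata untouched
    if obj.elevation != 0.0:
        adapted.elevation = 0.0
        adapted.level_db -= 1.5  # assumed compensation value
    if num_channels == 2:
        adapted.azimuth = max(-30.0, min(30.0, obj.azimuth * 30.0 / 180.0))
    return adapted

# Example: a height object rendered to stereo
vox = AudioObject("vocal", azimuth=90.0, elevation=30.0, level_db=0.0)
adapted = adapt_for_layout(vox, num_channels=2)
```

In a real system of the kind the paper describes, such rules would be driven by perceptual-test data or live perceptual-meter feedback rather than hard-coded constants.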
Woodcock, James; Francombe, Jon; Franck, Andreas; Coleman, Philip; Hughes, Richard; Kim, Hansung; Liu, Qingju; Menzies, Dylan; Simón Gálvez, Marcos F; Tang, Yan; Brookes, Tim; Davies, William J.; Fazenda, Bruno M.; Mason, Russell; Cox, Trevor J.; Fazi, Filippo Maria; Jackson, Philip J. B.; Pike, Chris; Hilton, Adrian
Affiliations: University of Salford, Salford, UK; BBC Research and Development, Salford, UK; University of Surrey, Guildford, UK; University of Southampton, Southampton, UK
AES Conference: 2018 AES International Conference on Spatial Reproduction - Aesthetics and Science (July 2018)
Paper Number: P11-3
Publication Date: July 30, 2018
Session Subject: object-based audio; intelligent rendering; producer intent