This paper presents a general framework for using appropriately structured information about audio recordings in music processing, and shows how this framework can be utilised in multitrack music production tools. Such information, often referred to as metadata, is commonly represented in highly domain- and application-specific formats, which prevents interoperability and ubiquitous use across applications. In this paper, we address this issue. The formalism we use is grounded in Semantic Web ontologies rooted in formal logic. A set of ontologies is used to provide structured representations of information such as tempo, instrument names, or onset times extracted from audio. This information is linked to audio tracks in music production environments, as well as to processing blocks such as audio effects. We also present specific case studies, for example, audio effects capable of processing and predicting metadata associated with the processed signals. We show how this increases the accuracy of description and reduces computational cost by omitting repeated application of feature extraction algorithms.
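To make the idea concrete, the following is a minimal sketch of the kind of structured description the abstract refers to: a feature-extraction result (here, onset times) expressed as RDF so that any compliant tool can consume it. The namespace URIs are those of the published Music Ontology (mo:) and Audio Features Ontology (af:); the specific properties used below (`af:Onset`, `af:time`) are illustrative assumptions, not necessarily the vocabulary used in the paper.

```python
def describe_onsets(track_uri, onset_times):
    """Render detected onset times for an audio track as a Turtle snippet.

    This is an illustrative sketch: the mo:/af: prefixes point at the real
    Music Ontology and Audio Features Ontology namespaces, but the property
    names chosen here are hypothetical.
    """
    lines = [
        "@prefix mo: <http://purl.org/ontology/mo/> .",
        "@prefix af: <http://purl.org/ontology/af/> .",
        "",
        f"<{track_uri}> a mo:Track .",
    ]
    for i, t in enumerate(onset_times):
        # One resource per detected onset, timestamped in seconds.
        lines.append(f"<{track_uri}#onset{i}> a af:Onset ;")
        lines.append(
            f'    af:time "{t:.3f}"'
            '^^<http://www.w3.org/2001/XMLSchema#float> .'
        )
    return "\n".join(lines)


print(describe_onsets("http://example.org/track1", [0.512, 1.024]))
```

Because the description is plain RDF rather than a tool-specific format, an audio effect could read these onsets from a previous analysis pass instead of re-running the feature extractor, which is the cost saving the abstract mentions.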
Authors:
Fazekas, Gyorgy; Wilmering, Thomas; Sandler, Mark
Affiliation:
Queen Mary University of London, London, UK
AES Conference:
42nd International Conference: Semantic Audio (July 2011)
Paper Number:
8-2
Publication Date:
July 22, 2011
Subject:
Intelligent Audio Effects