A music information retrieval system can extract information that arises from how sound sources are panned between channels during the recording and mixing process. The authors propose augmenting standard audio features, which characterize the source material itself, with one of two methods for extracting panning and contrast features. These additional features provide statistically significant improvements in nontrivial audio classification tasks, whereas traditional classification features capture only pitch, rhythm, and timbre. Other types of mixing parameters are proposed for future work.
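As an illustration of the kind of panning information such a system can extract, the sketch below computes a per-bin stereo similarity measure in the style of a panning index (a common formulation from the literature; this is an assumption for illustration, not necessarily the authors' exact method). Values near 1 indicate equal energy in both channels (center-panned), while values near 0 indicate a hard-panned source.

```python
import numpy as np

def panning_index(left, right, frame=1024, hop=512):
    """Per-bin stereo similarity for each STFT frame (illustrative sketch).

    Returns an array of shape (n_frames, frame // 2 + 1) with values in
    [0, 1]: 1 means equal energy in both channels at that bin (center),
    0 means the energy is entirely in one channel (hard-panned).
    """
    win = np.hanning(frame)
    n_frames = (len(left) - frame) // hop + 1
    psi = []
    for i in range(n_frames):
        s = i * hop
        L = np.fft.rfft(win * left[s:s + frame])
        R = np.fft.rfft(win * right[s:s + frame])
        num = 2.0 * np.abs(L * np.conj(R))
        den = np.abs(L) ** 2 + np.abs(R) ** 2 + 1e-12  # avoid divide-by-zero
        psi.append(num / den)
    return np.array(psi)

# Example: a 440 Hz tone panned hard left vs. the same tone panned center.
sr = 8000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)
hard_left = panning_index(tone, np.zeros_like(tone))
center = panning_index(0.5 * tone, 0.5 * tone)
```

Summary statistics of such a per-bin map over time (means, variances, energy in panning bands) are the sort of additional features that could then be fed to a classifier alongside standard pitch, rhythm, and timbre features.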
Authors:
Tzanetakis, George; Martins, Luis Gustavo; McNally, Kirk; Jones, Randy
Affiliations:
Department of Computer Science, University of Victoria, Victoria, Canada; Research Center for Science and Technology in the Arts, Portuguese Catholic University, Porto, Portugal; School of Music, University of Victoria, Victoria, Canada; Madrona Labs, Seattle, WA, USA
JAES Volume 58 Issue 5 pp. 409-417; May 2010
Publication Date:
June 8, 2010