Music genre classification is one of the most active tasks in Music Information Retrieval (MIR), and many successful approaches can be found in the literature. Most of them apply machine learning algorithms to audio features computed automatically for a specific database, but there is no computational model that explains how musical features are combined to yield genre decisions in humans. In this work we present a listening experiment in which audio was altered so as to preserve some properties of the music (rhythm, harmony, etc.) while degrading others. The results are compared with those of a series of state-of-the-art genre classifiers based on these musical properties, and we draw some lessons from that comparison.
Authors:
Guaus, Enric; Herrera, Perfecto
Affiliations:
Music Technology Group; School of Music of Catalonia (see document for exact affiliation information)
AES Convention:
121 (October 2006)
Paper Number:
6979
Publication Date:
October 1, 2006
Subject:
Psychoacoustics and Perception