The exponential growth of electronic music distribution creates a natural demand for fine-grained musical metadata. Since a primary motivation for listening to music is its emotional effect, the diversion it offers, and the memories it awakens, we propose a novel affective music taxonomy that combines a global music genre taxonomy (e.g., Classical, Jazz, Rock/Pop, and Rap) with emotion categories such as Joy, Sadness, Anger, and Pleasure in a complementary way. In this paper, we address all essential stages of an automatic genre/emotion recognition system, from systematic music data collection to the performance evaluation of various machine learning algorithms. In particular, we present a novel classification scheme, the consecutive dichotomous decomposition tree (CDDT), which is specifically parametrized for multi-class classification problems with a very large number of classes, e.g., the sixteen music categories in our case. An average recognition accuracy of 75% across the 16 music categories demonstrates the practical feasibility of the proposed affective music taxonomy.
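The abstract only names the CDDT scheme; the paper itself is behind the AES paywall, so the following is a minimal, hypothetical sketch of the general idea of dichotomous decomposition: the class set is recursively halved, and each internal node makes one binary (dichotomous) decision until a single category remains. The halving rule and the nearest-centroid decision used here are illustrative assumptions, not the authors' actual method or parametrization.

```python
# Hedged sketch of a dichotomous decomposition tree for multi-class
# classification. NOTE: the split rule (halving the class list) and the
# per-node decision (nearest group centroid on a 1-D feature) are
# assumptions for illustration; the paper's CDDT details are not public.
import statistics

def build_tree(classes, samples):
    """Recursively split the class set into two halves, storing a
    nearest-centroid binary decision at each internal node."""
    if len(classes) == 1:
        return classes[0]  # leaf: a single music category
    left, right = classes[:len(classes) // 2], classes[len(classes) // 2:]

    def centroid(group):
        return statistics.mean(x for c in group for x in samples[c])

    return {
        "left": build_tree(left, samples),
        "right": build_tree(right, samples),
        "c_left": centroid(left),
        "c_right": centroid(right),
    }

def predict(node, x):
    """Descend the tree, taking one dichotomous decision per level."""
    while isinstance(node, dict):
        nearer_left = abs(x - node["c_left"]) <= abs(x - node["c_right"])
        node = node["left"] if nearer_left else node["right"]
    return node

# Toy example: four categories with well-separated 1-D features.
samples = {"Joy": [0.0, 0.1], "Sadness": [1.0, 1.1],
           "Anger": [2.0, 2.1], "Pleasure": [3.0, 3.1]}
tree = build_tree(list(samples), samples)
print(predict(tree, 2.05))  # -> Anger
```

With 16 categories, such a tree needs only four consecutive binary decisions per sample, which is one plausible reason a dichotomous decomposition scales to the high class count the abstract mentions.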
Authors:
Kim, Jonghwa; Larsen, Lars
Affiliation:
University Augsburg, Augsburg, Germany
AES Convention:
128 (May 2010)
Paper Number:
8018
Publication Date:
May 1, 2010
Subject:
Music Analysis and Processing