Music emotion recognition typically attempts to map audio features extracted from music to a mood representation using machine learning techniques. In addition to a good dataset, the key to a successful system is choosing the right inputs and outputs. Often, the inputs are a set of audio features extracted from a single software library, which may not be the most suitable combination. This paper describes how 47 different types of audio features were evaluated using a five-dimensional support vector regressor, trained and tested on production music, in order to find the combination that produces the best performance. The results show the minimum number of features that yields optimum performance, and which combinations are strongest for mood prediction.
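As a rough illustration of the evaluation approach the abstract describes, the sketch below scores candidate combinations of audio-feature types by cross-validated support vector regression over multiple mood dimensions. It assumes scikit-learn and NumPy; the feature blocks, five mood targets, and data are synthetic placeholders, not the paper's dataset, feature extractors, or code.

    # Minimal sketch: for each candidate combination of feature types, fit one
    # SVR per mood dimension and score the combination by cross-validated R^2.
    # Feature names, targets, and data below are illustrative assumptions only.
    import numpy as np
    from itertools import combinations
    from sklearn.svm import SVR
    from sklearn.multioutput import MultiOutputRegressor
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)

    # Hypothetical feature blocks (the paper evaluates 47 feature types);
    # each block holds the columns produced by one feature extractor.
    feature_blocks = {
        "mfcc": rng.normal(size=(200, 13)),
        "chroma": rng.normal(size=(200, 12)),
        "spectral_centroid": rng.normal(size=(200, 1)),
    }
    # Five mood dimensions, matching the five-dimensional regressor; synthetic.
    y = rng.normal(size=(200, 5))

    def score_combination(block_names):
        """Cross-validated R^2 for one combination of feature blocks."""
        X = np.hstack([feature_blocks[name] for name in block_names])
        model = MultiOutputRegressor(SVR(kernel="rbf", C=1.0))  # one SVR per dimension
        return cross_val_score(model, X, y, cv=5, scoring="r2").mean()

    # Exhaustive search is feasible for a handful of blocks; 47 feature types
    # (2^47 subsets) would require a greedy or stepwise strategy instead.
    best = max(
        (combo for r in range(1, len(feature_blocks) + 1)
         for combo in combinations(feature_blocks, r)),
        key=score_combination,
    )
    print("best feature combination:", best)

The per-dimension wrapper reflects the fact that standard SVR is a single-output model, so a five-dimensional mood target is handled as five independent regressors; whether the paper trains jointly or independently per dimension is not stated in the abstract.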
Authors:
Baume, Chris; Fazekas, György; Barthet, Mathieu; Marston, David; Sandler, Mark
Affiliations:
BBC R&D, London, UK; Queen Mary University of London, London, UK
AES Conference:
53rd International Conference: Semantic Audio (January 2014)
Paper Number:
P1-3
Publication Date:
January 27, 2014
Subject:
Machine Learning Methods for Audio Content Analysis