Supervised Vocal-Based Emotion Recognition Using Multiclass Support Vector Machine, Random Forests, and Adaboost

Because people regularly use computers for listening, emotion classification is an important part of human-computer interaction, with applications across industrial and commercial sectors. This research investigates and compares vocal emotion recognition using three different classifiers: multiclass support vector machine, Adaboost, and random forests. The decisions of these classifiers are then combined using majority voting. The proposed method was applied to two emotional speech databases: the Surrey Audio-Visual Expressed Emotion (SAVEE) Database and the Polish Emotional Speech Database. A vector of 14 features was used to recognize seven basic emotions from the SAVEE database and six emotions from the Polish database. Features extracted from these databases include pitch, intensity, the first through fourth formants and their bandwidths, mean autocorrelation, mean noise-to-harmonic ratio, and standard deviation. Recognition rates ranged from 71% to 87%.
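The ensemble described in the abstract (majority voting over a multiclass SVM, random forests, and Adaboost, applied to a 14-dimensional acoustic feature vector) can be sketched roughly as below. This is not the authors' implementation: the scikit-learn estimators, the hyperparameters, and the placeholder random data are assumptions for illustration only, and the acoustic feature extraction itself (pitch, formants, noise-to-harmonic ratio, etc.) is assumed to have been done elsewhere.

```python
# Minimal sketch (not the authors' code): hard majority voting over the three
# classifier types named in the abstract, using scikit-learn.
import numpy as np
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier, VotingClassifier
from sklearn.model_selection import train_test_split

# Placeholder data standing in for extracted features:
# 200 utterances, 14 acoustic features each, 7 emotion classes (as for SAVEE).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 14))
y = rng.integers(0, 7, size=200)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("svm", SVC(kernel="rbf", decision_function_shape="ovr")),   # multiclass SVM (one-vs-rest)
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
        ("ada", AdaBoostClassifier(n_estimators=100, random_state=0)),
    ],
    voting="hard",  # majority vote over the predicted emotion labels
)
ensemble.fit(X_train, y_train)
print("accuracy on placeholder data:", ensemble.score(X_test, y_test))
```

With real features in place of the random placeholders, the same structure reproduces the paper's decision-fusion step: each classifier predicts an emotion label and the label chosen by at least two of the three is returned.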

JAES Volume 65 Issue 7/8 pp. 562-572; July 2017
