Since people regularly interact with computers by voice, emotion classification is an important part of human-computer interaction, with applications across industrial and commercial sectors. This research investigates and compares vocal emotion recognition using three different classifiers: multiclass support vector machine, AdaBoost, and random forests. The decisions of these classifiers are then combined by majority voting. The proposed method has been applied to two emotional databases: the Surrey Audio-Visual Expressed Emotion (SAVEE) Database and the Polish Emotional Speech Database. A vector of 14 features was used to recognize seven basic emotions from the SAVEE database and six emotions from the Polish database. The features extracted from these databases include pitch, intensity, the first through fourth formants and their bandwidths, mean autocorrelation, mean noise-to-harmonic ratio, and standard deviation. Recognition rates ranged from 71% to 87%.
Authors:
Noroozi, Fatemeh; Kaminska, Dorota; Sapinski, Tomasz; Anbarjafari, Gholamreza
Affiliations:
Institute of Technology, University of Tartu, Estonia; Institute of Mechatronics and Information Systems, Lodz University of Technology, Poland; iCV Research Group, Institute of Technology, University of Tartu, Estonia; Department of Electrical and Electronic Engineering, Hasan Kalyoncu University, Gaziantep, Turkey
JAES Volume 65 Issue 7/8 pp. 562-572; July 2017
Publication Date:
August 15, 2017
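The paper itself does not include code. As a rough illustration of the pipeline the abstract describes, here is a minimal sketch using scikit-learn: the three named classifiers combined by hard (majority) voting over 14-dimensional feature vectors. The synthetic data, hyperparameters, and train/test split below are placeholder assumptions, not the authors' setup; in practice the features (pitch, intensity, formants, etc.) would be extracted from speech, e.g. with a tool such as Praat.

```python
# Minimal sketch (not the authors' code): majority voting over the three
# classifiers named in the abstract, using scikit-learn. Random data stands
# in for the 14 acoustic features and the 7 SAVEE emotion labels.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier, VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(480, 14))    # placeholder for 14 acoustic features per utterance
y = rng.integers(0, 7, size=480)  # placeholder for 7 emotion classes (SAVEE)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

# Multiclass SVM (SVC handles multiclass internally), AdaBoost, and a random
# forest, combined by hard voting as the abstract describes.
ensemble = VotingClassifier(
    estimators=[
        ("svm", make_pipeline(StandardScaler(), SVC())),
        ("adaboost", AdaBoostClassifier(n_estimators=100, random_state=0)),
        ("forest", RandomForestClassifier(n_estimators=200, random_state=0)),
    ],
    voting="hard",
)
ensemble.fit(X_train, y_train)
print("accuracy on held-out data:", ensemble.score(X_test, y_test))
```

Hard voting matches the "majority voting" fusion named in the abstract; note that with three voters a three-way tie is possible, in which case scikit-learn falls back to the lowest class label.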