In this paper, a noise-robust speech emotion recognition method for music-noise conditions is proposed, based on a denoising autoencoder (DAE) and a support vector machine (SVM). The proposed method first trains a DAE on emotional speech signals corrupted by music noise. The output values of a middle layer of the DAE are then used as speech features, and an SVM is trained on these DAE features to classify emotions. The performance of the proposed method is compared with that of a conventional SVM classifier; under music-noise conditions, the proposed method achieves a relative improvement of 9.76% in the overall emotion recognition rate over the conventional method.
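The pipeline summarized above (train a DAE to reconstruct clean speech from noise-corrupted speech, take its middle-layer activations as features, then train a classifier on those features) can be sketched roughly as follows. Everything here beyond the DAE-feature idea is an assumption for illustration: the data is synthetic (random vectors standing in for speech features, additive Gaussian noise standing in for music noise), and a nearest-centroid rule stands in for the paper's SVM to keep the example dependency-free.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: two "emotion" classes of 16-dim clean feature
# vectors, plus additive noise playing the role of music-noise corruption.
n, d, h = 200, 16, 4
labels = rng.integers(0, 2, n)
clean = rng.normal(0.0, 1.0, (n, d)) + labels[:, None] * 2.0
noisy = clean + rng.normal(0.0, 0.5, (n, d))

# One-hidden-layer denoising autoencoder: map noisy input through a
# tanh bottleneck and train it to reconstruct the CLEAN signal.
W1 = rng.normal(0, 0.1, (d, h)); b1 = np.zeros(h)
W2 = rng.normal(0, 0.1, (h, d)); b2 = np.zeros(d)
lr, losses = 0.01, []
for _ in range(500):
    z = np.tanh(noisy @ W1 + b1)        # middle-layer (bottleneck) code
    out = z @ W2 + b2                   # reconstruction of the clean input
    err = out - clean
    losses.append(float((err ** 2).mean()))
    # Backprop of the mean-squared reconstruction error.
    gW2 = z.T @ err / n; gb2 = err.mean(0)
    dz = (err @ W2.T) * (1.0 - z ** 2)  # tanh derivative
    gW1 = noisy.T @ dz / n; gb1 = dz.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

# The middle-layer outputs are the features fed to the classifier.
features = np.tanh(noisy @ W1 + b1)

# Stand-in classifier (the paper uses an SVM): nearest class centroid.
c0 = features[labels == 0].mean(0)
c1 = features[labels == 1].mean(0)
pred = (np.linalg.norm(features - c1, axis=1)
        < np.linalg.norm(features - c0, axis=1)).astype(int)
acc = float((pred == labels).mean())
```

In the actual method, the classifier trained on `features` would be an SVM (e.g. a standard RBF- or linear-kernel SVM); the nearest-centroid step here only shows where that classifier plugs in.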
Authors: Ha, Hun Kyu; Kim, Nam Kyun; Seong, Woo Kyeong; Kim, Hong Kook
Affiliation: Gwangju Institute of Science and Technology (GIST), Gwangju, Korea
AES Convention: 140 (May 2016)
eBrief: 260
Publication Date: May 26, 2016
Subject: eBriefs: Lectures