AES Journal Forum

Speech Emotion Recognition for Performance Interaction

This research explores the relevance of machine-driven Speech Emotion Recognition (SER) as a way to augment theatrical performances and interactions, such as controlling stage color and lighting, stimulating active audience engagement, and supporting actors' interactive training. It is well known that the meaning of a speech utterance arises from more than the linguistic content; emotional affect can dramatically change meaning. As the basis for classification experiments, the authors developed the Acted Emotional Speech Dynamic Database (AESDD), which contains spoken utterances from 5 actors expressing 5 emotions. Several audio features and various classification techniques were implemented and evaluated using this database, and the results were compared with those obtained on the Surrey Audio-Visual Expressed Emotion (SAVEE) database. The trained classifier was integrated into a novel application that performed live SER, fitting the needs of actor training.
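As a rough illustration of the kind of pipeline the abstract describes (audio feature extraction followed by classifier training on an acted-emotion corpus), the sketch below trains an emotion classifier on utterance-level MFCC statistics. The feature set, the SVM classifier, the directory layout, and the exact emotion labels are assumptions made here for demonstration; they are not taken from the paper itself.

```python
# Minimal SER training sketch (assumed pipeline, not the authors' implementation):
# summarize each utterance with MFCC statistics and fit an SVM.
from pathlib import Path

import librosa
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Five emotion classes, matching the abstract; the exact label names are assumed.
EMOTIONS = ["anger", "disgust", "fear", "happiness", "sadness"]


def extract_features(wav_path, sr=16000, n_mfcc=13):
    """Represent an utterance by the mean and std of its MFCCs over time."""
    y, sr = librosa.load(wav_path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])


def load_dataset(root):
    """Assumes a hypothetical <root>/<emotion>/*.wav layout for the corpus."""
    X, labels = [], []
    for label, emotion in enumerate(EMOTIONS):
        for wav in sorted(Path(root, emotion).glob("*.wav")):
            X.append(extract_features(wav))
            labels.append(label)
    return np.array(X), np.array(labels)


if __name__ == "__main__":
    X, y = load_dataset("AESDD")  # path is a placeholder
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"5-fold accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```

A model trained this way could then be reused frame-by-frame or utterance-by-utterance on microphone input to drive live feedback, which is the kind of real-time use the abstract mentions for actor training.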

JAES Volume 66 Issue 6 pp. 457-467; June 2018

No AES members have commented on this paper yet.
