The Singing Tutor: Expression Categorization and Segmentation of the Singing Voice
Computer evaluation of singing interpretation has traditionally been based exclusively on tuning and tempo. This article presents a tool for the automatic evaluation of singing-voice performances that considers not only tuning and tempo but also the expression of the voice. To this end, the system performs analysis at the note and intra-note levels. Note-level analysis outputs the traditional note pitch, note onset, and note duration information, while intra-note-level analysis locates and categorizes the expression of each note's attacks, sustains, transitions, releases, and vibratos. Segmentation is performed by an algorithm based on untrained HMMs whose probabilistic models are built from a set of heuristic rules. A graphical tool for the evaluation and fine-tuning of the system is also presented; the interface gives feedback about analysis descriptors and rule probabilities.
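The intra-note segmentation described in the abstract can be pictured as a left-to-right HMM over expression states, decoded with Viterbi. The sketch below is only illustrative: the state set, the pitch-slope heuristic in `heuristic_emissions`, and the transition probabilities are assumptions for the example, not the paper's actual rule set or model.

```python
import numpy as np

# Illustrative intra-note states (vibrato omitted for brevity).
STATES = ["attack", "sustain", "transition", "release"]

# Left-to-right transitions: a note progresses forward through its states.
# These probabilities are made up for the sketch.
TRANS = np.array([
    [0.6, 0.4, 0.0, 0.0],   # attack -> sustain
    [0.0, 0.7, 0.2, 0.1],   # sustain -> transition or release
    [0.0, 0.0, 0.6, 0.4],   # transition -> release
    [0.0, 0.0, 0.0, 1.0],   # release is absorbing
])
INIT = np.array([0.97, 0.01, 0.01, 0.01])  # notes start in the attack

def heuristic_emissions(pitch_slopes):
    """Toy rule: steep pitch movement suggests attack/transition,
    stable pitch suggests sustain/release. Returns one probability
    row per frame (a stand-in for the paper's heuristic rules)."""
    rows = []
    for s in pitch_slopes:
        steep = min(1.0, max(0.05, abs(s)))
        flat = max(0.05, 1.0 - steep)
        row = np.array([steep, flat, steep, flat])
        rows.append(row / row.sum())
    return np.array(rows)

def viterbi(emissions, trans=TRANS, init=INIT):
    """Standard Viterbi decoding: most likely state path given
    per-frame emission probabilities."""
    n, k = emissions.shape
    log_t = np.log(trans + 1e-12)
    delta = np.log(init + 1e-12) + np.log(emissions[0] + 1e-12)
    back = np.zeros((n, k), dtype=int)
    for t in range(1, n):
        scores = delta[:, None] + log_t       # scores[i, j]: best path ending i -> j
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + np.log(emissions[t] + 1e-12)
    state = int(delta.argmax())
    path = [state]
    for t in range(n - 1, 0, -1):             # backtrack
        state = int(back[t, state])
        path.append(state)
    return [STATES[i] for i in reversed(path)]
```

For example, `viterbi(heuristic_emissions([0.9, 0.8, 0.1, 0.05, 0.6, 0.1]))` labels the steep opening frames as an attack and then progresses through the later states in order, mirroring how an untrained HMM with rule-based probabilities can segment a note without any learned parameters.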