Adaptive digital audio effects are sound transformations controlled by features extracted from the sound itself. Artificial reverberation is used by sound engineers in the mixing process for a variety of technical and artistic reasons, including giving the impression that the sound was captured in an enclosed space. We propose the design of an adaptive digital audio effect for artificial reverberation that can learn from the user in a supervised way. We perform feature selection and dimensionality reduction on features extracted from our training data set. A user then provides examples of reverberation parameters for the training data. Finally, we train a set of classifiers and evaluate them with 10-fold cross validation, comparing classification success ratios and mean squared errors. Tracks from the Open Multitrack Testbed are used to train and test our models.
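The abstract describes a pipeline of feature extraction, dimensionality reduction, supervised training on user-provided reverberation parameters, and 10-fold cross validation. The paper's actual features, classifiers, and parameter sets are not given here, so the following is only an illustrative sketch, assuming scikit-learn, random placeholder features, and a PCA-plus-k-NN model chosen for brevity:

```python
# Illustrative sketch of the pipeline described in the abstract.
# The real features, classifiers, and reverberation parameter classes
# from the paper are not reproduced here; all data below is synthetic.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Placeholder audio features extracted from training tracks
# (e.g. spectral descriptors); shape: (n_examples, n_features).
X = rng.normal(size=(100, 20))
# Placeholder user-labelled reverberation parameter classes.
y = rng.integers(0, 3, size=100)

# Dimensionality reduction followed by a classifier.
model = make_pipeline(PCA(n_components=5),
                      KNeighborsClassifier(n_neighbors=3))

# 10-fold cross validation yields per-fold classification success ratios.
scores = cross_val_score(model, X, y, cv=10)
print(f"mean success ratio: {scores.mean():.3f}")
```

With real features and labels, the per-fold scores would be the classification success ratios the abstract refers to; a regression variant of the same pipeline would yield the mean squared errors.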
Authors: Chourdakis, Emmanouil Theofanis; Reiss, Joshua D.
Affiliation: Queen Mary University of London, London, UK
AES Conference: 60th International Conference: DREAMS (Dereverberation and Reverberation of Audio, Music, and Speech) (January 2016)
Paper Number: 9-2
Publication Date: January 27, 2016
Subject: Paper Session 9