Digital audio effects, such as artificial reverberation, are transformations applied to an audio signal, where the transformation depends on a set of control parameters. Users adjust these parameters over time based on the resulting perceived sound. This research automates that process by using supervised learning to train classifiers that assign effect parameter sets to audio features. Training can be done a priori, for example by an expert user of the reverberation effect, or online by the user of such an effect. An automatic reverberator trained on a set of audio is expected to apply reverberation correctly to similar audio, where similarity is defined by properties such as timbre and tempo. For this reason, creating a reverberation effect that generalizes well requires a large and diverse set of training audio. In one investigation, the user provides monophonic examples of desired reverberation characteristics for individual tracks taken from the Open Multitrack Testbed, and this data is used to train a set of models that automatically apply reverberation to similar tracks. The best features were selected from a 31-dimensional feature space, and the models were evaluated using classifier f1-scores, mean squared errors, and multi-stimulus listening tests.
Authors:
Chourdakis, Emmanouil T.; Reiss, Joshua D.
Affiliation:
Queen Mary University of London, Mile End Road, London, UK
JAES Volume 65 Issue 1/2 pp. 56-65; January 2017
Publication Date:
February 16, 2017
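
As a rough illustration of the approach described in the abstract, the following Python sketch trains a classifier to map frame-level audio features to discrete reverberation parameter sets. This is not the authors' implementation: the feature extractor, the preset list, and the random-forest model are all placeholder assumptions, whereas the paper selects its features from a 31-dimensional space and evaluates several trained models against user-provided examples.

```python
# Hypothetical sketch: supervised mapping from audio features to
# reverberation parameter-set classes. Features, presets, and the
# classifier are illustrative, not the paper's exact setup.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

# Each class index selects one reverberation parameter set
# (decay time, pre-delay, wet/dry mix are assumed parameters).
REVERB_PRESETS = [
    {"decay_s": 0.4, "predelay_ms": 10, "wet": 0.15},
    {"decay_s": 1.2, "predelay_ms": 25, "wet": 0.30},
    {"decay_s": 2.5, "predelay_ms": 50, "wet": 0.45},
]

def extract_features(frame, sr):
    """Toy per-frame features; the paper draws on a richer 31-dim space."""
    spectrum = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), 1.0 / sr)
    energy = spectrum.sum() + 1e-12
    centroid = (freqs * spectrum).sum() / energy          # spectral centroid
    rms = np.sqrt(np.mean(frame ** 2))                    # loudness proxy
    zcr = np.mean(np.abs(np.diff(np.sign(frame)))) / 2.0  # zero-crossing rate
    return np.array([centroid, rms, zcr])

# X: feature vectors of labeled frames; y: preset index the user chose.
# Random frames and labels stand in for real annotated multitrack audio.
rng = np.random.default_rng(0)
sr = 44100
frames = rng.standard_normal((300, 2048))
X = np.array([extract_features(f, sr) for f in frames])
y = rng.integers(0, len(REVERB_PRESETS), size=len(frames))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# The predicted class indexes the parameter set applied to a new frame.
pred = clf.predict(X_test)
print("f1 (macro):", f1_score(y_test, pred, average="macro"))
print("preset for first test frame:", REVERB_PRESETS[pred[0]])
```

In the online training scenario the abstract mentions, the labels `y` would come from the parameter sets a user selects while listening, so the classifier gradually learns that user's preferences for similar material.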