Audio production tools such as equalisers and reverberators can be challenging for non-expert users because of the intricate parameters exposed in their interfaces. In this paper, we present an end-to-end neural audio effects model based on the temporal convolutional network (TCN) architecture that applies equalisation according to descriptive terms drawn from a crowdsourced vocabulary of word labels for audio effects. This allows users to express their audio production objectives in descriptive language (e.g., "bright," "muddy," "sharp") rather than in technical terminology that may not be intuitive to untrained users. We experimented with two word embedding methods to steer the TCN towards the desired output. Real-time performance is achieved by using TCNs with sparse convolutional kernels and rapidly growing dilations. Objective metrics demonstrate the efficacy of the proposed model in applying appropriately parameterised equalisation to audio tracks.
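The paper itself does not include code; the sketch below is only an illustration, under stated assumptions, of how a TCN with rapidly growing dilations might be steered by a word embedding using FiLM-style (scale-and-shift) conditioning, one common conditioning mechanism for such models. All class names, parameter values, and the choice of FiLM are hypothetical and not taken from the paper; the two word-embedding methods the authors compare would simply supply the `word_emb` vector.

```python
# Hypothetical sketch (not the authors' code): a TCN whose dilation grows
# rapidly (base 4 here) and whose channels are modulated by a word embedding.
import torch
import torch.nn as nn


class ConditionedTCNBlock(nn.Module):
    """One dilated 1-D conv block modulated by a conditioning vector (FiLM)."""

    def __init__(self, channels: int, kernel_size: int, dilation: int, cond_dim: int):
        super().__init__()
        # Centered padding keeps the output length equal to the input length.
        # This is non-causal for simplicity; a real-time variant would pad
        # only on the left (causal convolution).
        self.conv = nn.Conv1d(channels, channels, kernel_size,
                              dilation=dilation,
                              padding=(kernel_size - 1) * dilation // 2)
        # Project the word embedding to a per-channel scale and shift.
        self.film = nn.Linear(cond_dim, 2 * channels)
        self.act = nn.PReLU()

    def forward(self, x: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
        y = self.conv(x)
        scale, shift = self.film(cond).chunk(2, dim=-1)
        y = y * scale.unsqueeze(-1) + shift.unsqueeze(-1)
        return x + self.act(y)  # residual connection


class DescriptiveEQ(nn.Module):
    """Stack of TCN blocks with rapidly growing dilations (1, 4, 16, ...)."""

    def __init__(self, n_blocks: int = 5, channels: int = 16,
                 kernel_size: int = 5, cond_dim: int = 128, growth: int = 4):
        super().__init__()
        self.inp = nn.Conv1d(1, channels, 1)
        self.blocks = nn.ModuleList([
            ConditionedTCNBlock(channels, kernel_size, growth ** i, cond_dim)
            for i in range(n_blocks)])
        self.out = nn.Conv1d(channels, 1, 1)

    def forward(self, audio: torch.Tensor, word_emb: torch.Tensor) -> torch.Tensor:
        x = self.inp(audio)
        for block in self.blocks:
            x = block(x, word_emb)
        return self.out(x)


if __name__ == "__main__":
    model = DescriptiveEQ()
    audio = torch.randn(1, 1, 44100)     # 1 s of mono audio at 44.1 kHz
    word_emb = torch.randn(1, 128)       # embedding of a term such as "bright"
    print(model(audio, word_emb).shape)  # torch.Size([1, 1, 44100])
```

Rapid dilation growth is what lets a shallow stack cover a wide receptive field with few parameters, which is why such designs can run in real time; the exact sparse-kernel scheme used by the authors is described in the paper.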
Authors:
Balasubramaniam, Dharanipathi Rathna Kumar; Timoney, Joseph
Affiliations:
Maynooth University; Maynooth University
AES Convention:
155 (October 2023)
Paper Number:
10681
Publication Date:
October 25, 2023
Subject:
Signal Processing