We propose a differentiable WORLD synthesizer and demonstrate its use in end-to-end audio style transfer tasks such as (singing) voice conversion and the DDSP timbre transfer task. Our baseline differentiable synthesizer has no model parameters, yet it yields adequate synthesis quality. We extend the baseline synthesizer by appending lightweight black-box postnets that apply further processing to improve fidelity. An alternative differentiable approach extracts the source excitation spectrum directly, and results in improved naturalness, albeit for a narrower class of style transfer applications. The acoustic feature parameterization used by our approaches has the benefit that it naturally disentangles pitch and timbral information, so that they can be modeled separately. Moreover, because a robust means of estimating these acoustic features from monophonic audio sources exists, it enables new training configurations and allows parameter loss terms to be added to an end-to-end objective function, which can help convergence and/or further stabilize (adversarial) training.
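The WORLD parameterization the abstract builds on represents each audio frame as a fundamental frequency, a spectral envelope, and an aperiodicity map, from which audio is resynthesized as a harmonic component plus shaped noise. The following is a toy NumPy sketch of that harmonic-plus-noise idea, not the paper's differentiable implementation: the function name, the hold-interpolation of F0, and the crude broadband noise shaping are all assumptions made for illustration.

```python
import numpy as np

def world_like_synth(f0, env, ap, sr=16000, hop=160, n_harm=40):
    """Toy WORLD-style resynthesis (hypothetical sketch, not the paper's model).

    f0:  (T,) frame-wise fundamental frequency in Hz (0 = unvoiced)
    env: (T, K) spectral envelope on K linearly spaced bins up to sr/2
    ap:  (T, K) aperiodicity in [0, 1] (1 = fully noise-like)
    """
    T, K = env.shape
    n = T * hop
    freqs = np.linspace(0.0, sr / 2.0, K)        # bin centre frequencies
    f0_up = np.repeat(f0, hop)                   # sample-rate F0 (hold interp)
    phase = 2.0 * np.pi * np.cumsum(f0_up) / sr  # running fundamental phase
    out = np.zeros(n)
    for h in range(1, n_harm + 1):
        fh = f0 * h                              # per-frame harmonic frequency
        amp = np.zeros(T)
        for t in range(T):
            if f0[t] > 0 and fh[t] < sr / 2.0:
                e = np.interp(fh[t], freqs, env[t])
                a = np.interp(fh[t], freqs, ap[t])
                amp[t] = e * (1.0 - a)           # periodic share of the envelope
        out += np.repeat(amp, hop) * np.sin(h * phase)
    # Noise component weighted by envelope * aperiodicity (crude broadband shaping)
    noise_gain = np.repeat((env * ap).mean(axis=1), hop)
    out += noise_gain * np.random.default_rng(0).standard_normal(n)
    return out
```

Because every operation above (interpolation, cumulative sum, sinusoids) is differentiable in F0 and the envelope, the same structure could in principle be written in an autodiff framework, which is what makes pitch and timbre separately controllable losses possible as the abstract describes.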
Author: Nercessian, Shahan
Affiliation: iZotope, Inc., Cambridge, MA, USA
AES Convention: 154 (May 2023)
Paper Number: 10661
Publication Date: May 13, 2023
Subject: Music AI