Instrument sound synthesis with deep neural networks has advanced considerably in recent years. Among these advances, the Differentiable Digital Signal Processing (DDSP) framework has modernized the spectral modeling paradigm by embedding signal-based synthesizers and effects into fully differentiable architectures. The present work extends the applications of DDSP to polyphonic sound synthesis by proposing a differentiable piano synthesizer conditioned on MIDI inputs. The model architecture is motivated by high-level acoustic modeling knowledge of the instrument, which, together with the sound structure priors inherent to the DDSP components, yields a lightweight, interpretable, and realistic-sounding piano model. A subjective listening test revealed that the proposed approach achieves better sound quality than a state-of-the-art neural-based piano synthesizer, although physical-modeling-based synthesizers still attain the best quality. Leveraging the model's interpretability and modularity, a qualitative analysis of its behavior was also conducted: it highlights where additional modeling knowledge and optimization procedures could be inserted to improve synthesis quality and the manipulation of sound properties. Finally, the proposed differentiable synthesizer can be combined with other deep learning models for alternative musical tasks handling polyphonic audio and symbolic data.
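The paper itself provides no code here, but the core DDSP building block the abstract refers to, a differentiable signal-based synthesizer, is commonly realized as a harmonic additive oscillator driven by frame-wise fundamental frequency and per-harmonic amplitudes. The following NumPy sketch illustrates that idea only; the function name, argument shapes, and parameter values are hypothetical and not taken from the paper:

```python
import numpy as np

def harmonic_synth(f0_hz, amplitudes, sample_rate=16000):
    """Additive synthesis of a harmonic signal, in the spirit of DDSP-style
    synthesizers (illustrative sketch, not the paper's implementation).

    f0_hz:       (T,) per-sample fundamental frequency in Hz
    amplitudes:  (T, K) per-sample amplitude of each of K harmonics
    Returns a mono audio signal of T samples.
    """
    n_harmonics = amplitudes.shape[1]
    harmonic_numbers = np.arange(1, n_harmonics + 1)  # partials 1..K
    # Instantaneous phase of each harmonic: cumulative sum of angular frequency.
    omegas = 2 * np.pi * f0_hz[:, None] * harmonic_numbers[None, :] / sample_rate
    phases = np.cumsum(omegas, axis=0)
    # Silence harmonics above Nyquist to avoid aliasing.
    aliased = f0_hz[:, None] * harmonic_numbers[None, :] >= sample_rate / 2
    amps = np.where(aliased, 0.0, amplitudes)
    return np.sum(amps * np.sin(phases), axis=1)

# Example: one second of a 440 Hz tone with three decaying harmonics.
t = 16000
f0 = np.full(t, 440.0)
amps = np.tile(np.array([0.5, 0.3, 0.2]), (t, 1)) * np.exp(-np.linspace(0, 4, t))[:, None]
audio = harmonic_synth(f0, amps)
```

Because every operation above is differentiable with respect to `f0_hz` and `amplitudes`, the same computation expressed in an autodiff framework can be trained end to end, which is what makes such synthesizers usable inside neural architectures.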
Renault, Lenny; Mignot, Rémi; Roebel, Axel
Affiliations: STMS - UMR9912, IRCAM, Sorbonne Université, CNRS, Ministère de la Culture, Paris, France (all authors)
JAES Volume 71 Issue 9 pp. 552-565; September 2023
Publication Date: September 13, 2023