A real-time synthesis engine is presented that models and predicts the "timbre" of different acoustic instruments based on perceptual features. The paper describes the full modeling sequence: the analysis of natural sounds, the inference step that finds the mapping between control and output parameters, the timbre prediction step, and the sound synthesis. Demonstrations include the timbre synthesis of stringed instruments and the singing voice, as well as cross-synthesis and timbre morphing between these instruments.
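The abstract's modeling sequence (analysis, inference of a control-to-output mapping, prediction, synthesis) can be sketched in miniature. This is an illustrative toy only, not the paper's actual model: the feature names, the linear least-squares mapping, and the additive resynthesis step are all assumptions made for demonstration.

```python
# Toy sketch of the four-stage pipeline described in the abstract.
# Stage names follow the abstract; everything else is hypothetical.
import numpy as np

rng = np.random.default_rng(0)

# --- Analysis: suppose we extracted perceptual control features
# (pitch, loudness) and matching output parameters (here, amplitudes
# of the first three harmonics) from recordings of natural sounds.
controls = rng.uniform(0.0, 1.0, size=(200, 2))   # (pitch, loudness)
true_map = np.array([[0.8, 0.1, 0.05],
                     [0.2, 0.6, 0.30]])           # hidden "instrument"
outputs = controls @ true_map + 0.01 * rng.normal(size=(200, 3))

# --- Inference: fit a mapping from control to output parameters.
# (A plain least-squares fit stands in for the paper's model.)
W, *_ = np.linalg.lstsq(controls, outputs, rcond=None)

# --- Prediction: estimate harmonic amplitudes for a new control input.
new_control = np.array([0.5, 0.5])
amps = new_control @ W

# --- Synthesis: additive resynthesis from the predicted harmonics.
sr, f0, dur = 16000, 220.0, 0.25
t = np.arange(int(sr * dur)) / sr
signal = sum(a * np.sin(2 * np.pi * f0 * (k + 1) * t)
             for k, a in enumerate(amps))
```

Cross-synthesis and morphing would then amount to driving one instrument's fitted mapping with another instrument's control features, or interpolating between two fitted mappings.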
Authors:
Schoner, Bernd; Jehan, Tristan
Affiliation:
MIT Media Laboratory, Cambridge, MA
AES Convention:
110 (May 2001)
Paper Number:
5328
Publication Date:
May 1, 2001
Subject:
Analysis and Synthesis of Sound