A method for transferring the expressive musical nuances of real recordings to a MIDI-synthesized version was successfully demonstrated. Three features (dynamics, tempo, and articulation) were extracted from the recordings and then applied to the MIDI note list to reproduce the performer's style. Subjective results showed that the retargeted music sounded natural and similar to the original performance. Statistical tests confirmed that the output correlated more strongly with the original performance than with other sources, and that the method can successfully distinguish among different performance styles. A variety of applications can use this approach.
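The retargeting step can be illustrated with a minimal sketch (not the authors' implementation): assuming the extracted features are represented as per-note dynamics scalings, local tempo factors, and articulation ratios, they could be applied to a quantized MIDI note list roughly as follows. The Note structure and feature representations are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Note:
    onset: float     # seconds, in quantized (score) timing
    duration: float  # seconds
    pitch: int       # MIDI note number
    velocity: int    # 1-127

def retarget(notes, dynamics, tempo_factors, articulations):
    """Apply per-note expressive features to a quantized MIDI note list.

    dynamics[i]      -- velocity scaling for note i (e.g. 1.2 = louder)
    tempo_factors[i] -- local tempo ratio (>1 stretches time, i.e. slower)
    articulations[i] -- fraction of the inter-onset interval the note sounds
    (All three are assumed representations of the extracted features.)
    """
    out, clock = [], 0.0
    for i, n in enumerate(notes):
        # Inter-onset interval to the next note (fall back to the note's own duration).
        ioi = (notes[i + 1].onset - n.onset) if i + 1 < len(notes) else n.duration
        stretched_ioi = ioi * tempo_factors[i]                    # tempo feature
        dur = stretched_ioi * articulations[i]                    # articulation feature
        vel = max(1, min(127, round(n.velocity * dynamics[i])))   # dynamics feature
        out.append(Note(onset=clock, duration=dur, pitch=n.pitch, velocity=vel))
        clock += stretched_ioi
    return out
```

The retargeted note list could then be written back out as a standard MIDI file for synthesis.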
Authors:
Lui, Simon; Horner, Andrew; So, Clifford
Affiliations:
Department of Computer Science and Engineering, Hong Kong University of Science and Technology, Hong Kong; School of Continuing and Professional Studies, Chinese University of Hong Kong, Hong Kong
JAES Volume 58, Issue 12, pp. 1032-1044; December 2010
Publication Date: February 3, 2011