We propose an interactive algorithm that provides musical accompaniment to performers by matching expressive feature patterns to existing archive recordings. For each accompaniment segment, multiple realizations with different musical characteristics are performed by master musicians and recorded. Musical expressive features are extracted from each accompaniment segment, and a semantic analysis is obtained using a musical expressive language model. As the system user performs, we extract and analyze the musical expressive features in real time and play back the accompaniment track from the archive database that best matches the expressive feature pattern. By creating a sense of musical correspondence, the proposed system provides an engaging interactive musical communication experience and lends itself to versatile entertainment and pedagogical applications.
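The core matching step described in the abstract — selecting, from several pre-recorded realizations of a segment, the one whose expressive-feature pattern is closest to the live performance — can be sketched as a nearest-neighbor lookup. All names, feature dimensions, and the Euclidean distance metric below are illustrative assumptions, not details taken from the paper:

```python
from math import dist

# Hypothetical archive: each accompaniment segment has several recorded
# realizations, each summarized by an expressive-feature vector
# (dimensions here might stand for, e.g., tempo deviation, dynamics, articulation).
archive = {
    "segment_1": {
        "lyrical":   [0.2, 0.8, 0.5],
        "energetic": [0.9, 0.3, 0.7],
    },
}

def best_matching_realization(segment_id, live_features, archive):
    """Return the name of the realization whose feature vector is
    closest (Euclidean distance) to the live performer's features."""
    realizations = archive[segment_id]
    return min(realizations, key=lambda name: dist(realizations[name], live_features))

# Features extracted from the live performance (hypothetical values)
live = [0.85, 0.35, 0.65]
print(best_matching_realization("segment_1", live, archive))  # prints "energetic"
```

In a real-time system, this lookup would run repeatedly as each segment boundary approaches, so the played-back accompaniment track tracks the performer's evolving expressive state.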
Authors:
Bocko, Gregory; Bocko, Mark F.; Headlam, Dave; Lundberg, Justin; Ren, Gang
Affiliations:
Dept. of Electrical and Computer Engineering, University of Rochester, Rochester, NY, USA; Dept. of Music Theory, University of Rochester, Rochester, NY, USA
AES Convention:
129 (November 2010)
Paper Number:
8256
Publication Date:
November 4, 2010
Subject:
Signal Analysis and Synthesis