This paper describes a machine learning approach to the problem of identifying professional musicians from their playing style. We focus on the identification of jazz saxophonists by studying how they express and communicate their view of the musical and emotional content of musical pieces (performed from a musical score). In particular, we investigate expressive deviations of parameters such as pitch, timing, amplitude and timbre in monophonic audio recordings. We describe how we extract a symbolic description from the audio recordings and how we use this symbolic description to train a performance-based interpreter classifier.
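The pipeline outlined above — extracting per-note expressive deviations (pitch, timing, amplitude, timbre) from monophonic recordings and training a performer classifier on them — might be sketched roughly as follows. This is a minimal illustrative sketch only: the feature names and the nearest-centroid classifier are assumptions for demonstration, not the paper's actual feature set or learning algorithm.

```python
# Hypothetical sketch: identify a performer from per-note expressive deviations.
# Feature vectors (pitch dev in cents, onset dev in ms, amplitude dev in dB,
# timbre/brightness dev) and the nearest-centroid classifier are illustrative
# assumptions, not the method described in the paper.
from statistics import mean

def centroid(vectors):
    """Mean of each feature across one performer's training notes."""
    return [mean(col) for col in zip(*vectors)]

def train(examples):
    """examples: {performer: [feature_vector, ...]} -> per-performer centroids."""
    return {name: centroid(vecs) for name, vecs in examples.items()}

def classify(model, vec):
    """Return the performer whose centroid is closest in squared Euclidean distance."""
    def sq_dist(c):
        return sum((a - b) ** 2 for a, b in zip(vec, c))
    return min(model, key=lambda name: sq_dist(model[name]))

# Toy data: (pitch_dev_cents, onset_dev_ms, amplitude_dev_db, brightness_dev)
examples = {
    "sax_A": [[12.0, 30.0, 2.0, 0.10], [10.0, 28.0, 1.8, 0.12]],
    "sax_B": [[-5.0, -10.0, 0.5, -0.05], [-4.0, -12.0, 0.6, -0.04]],
}
model = train(examples)
print(classify(model, [11.0, 29.0, 1.9, 0.11]))  # → sax_A
```

In practice the symbolic description would come from an audio analysis front end (onset detection, fundamental-frequency and energy tracking), and a more expressive learner would replace the centroid rule; the sketch only shows the shape of the train/classify loop.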
Authors:
Maestre, Esteban; Pertusa, Antonio; Ramirez, Rafael
Affiliations:
Departamento de Lenguajes y Sistemas Informáticos, Alicante University; Music Technology Group, Pompeu Fabra University
AES Conference:
30th International Conference: Intelligent Audio Environments (March 2007)
Paper Number:
5
Publication Date:
March 1, 2007
Subject:
Intelligent Audio Environments