Is it easier to identify musicians by listening to their voices or their music? We show that, for a small set of pop and rock songs, automatically located singing segments form a more reliable basis for classification than the entire track, suggesting that a singer's voice is more stable than the instrumental background across different performances, compositions, and the transformations introduced by audio engineering. The accuracy of a system trained to distinguish among a set of 21 artists improves by about 15% (relative to the baseline) when classification is based on segments containing a strong vocal component, whereas it drops by about 35% (relative) when music-only segments are used. In another experiment on a smaller set, however, performance drops by about 35% (relative) when the training and test sets are drawn from different albums, suggesting that the system is learning album-specific properties, possibly related to audio production techniques, musical style, or instrumentation, even when attention is directed toward the supposedly more stable vocal regions.
Authors:
Berenzweig, Adam L.; Ellis, Daniel P. W.; Lawrence, Steve
Affiliations:
Department of Electrical Engineering, Columbia University, New York, NY; NEC Research Institute, Princeton, NJ
AES Conference:
22nd International Conference: Virtual, Synthetic, and Entertainment Audio (June 2002)
Paper Number:
000231
Publication Date:
June 1, 2002
Subject:
Virtual, Synthetic and Entertainment Audio
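
As a rough illustration of the pipeline the abstract describes, the following Python sketch restricts an artist classifier to vocal segments. It is an assumption-laden sketch, not the authors' implementation: it uses MFCC features and one Gaussian mixture model per artist (via librosa and scikit-learn), and it assumes vocal segment boundaries are supplied by some external singing detector; none of these specific choices are taken from the paper.

```python
# Illustrative sketch only; not the implementation from the paper.
# Assumes librosa and scikit-learn are installed, and that each track
# comes with (start, end) vocal-segment times from an external detector.
import numpy as np
import librosa
from sklearn.mixture import GaussianMixture

def mfcc_frames(y, sr, segments=None):
    """Return MFCC frames, optionally restricted to (start, end) segments in seconds."""
    if segments is None:
        return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).T
    frames = []
    for start, end in segments:
        clip = y[int(start * sr):int(end * sr)]
        if len(clip) > 0:
            frames.append(librosa.feature.mfcc(y=clip, sr=sr, n_mfcc=13).T)
    return np.vstack(frames)

def train_artist_models(tracks_by_artist, use_vocal_segments):
    """Fit one GMM per artist on MFCCs from full tracks or vocal segments only.

    tracks_by_artist maps artist name -> list of (audio_path, segments) pairs.
    """
    models = {}
    for artist, tracks in tracks_by_artist.items():
        feats = []
        for path, segments in tracks:
            y, sr = librosa.load(path, sr=None, mono=True)
            feats.append(mfcc_frames(y, sr, segments if use_vocal_segments else None))
        models[artist] = GaussianMixture(n_components=8).fit(np.vstack(feats))
    return models

def classify(path, models, segments=None):
    """Label a track with the artist whose model gives the highest mean log-likelihood."""
    y, sr = librosa.load(path, sr=None, mono=True)
    feats = mfcc_frames(y, sr, segments)
    return max(models, key=lambda artist: models[artist].score(feats))
```

Training once with use_vocal_segments=True and once with False mirrors the contrast the abstract measures; probing the album effect would additionally require the train/test split in tracks_by_artist to keep each artist's albums disjoint.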