With the growing amount of multimedia data available everywhere and the need for efficient methods to browse and index this plethora of audio content, automated musical similarity search and retrieval has gained considerable attention in recent years. This paper presents a system that combines a set of perceptual low-level features with appropriate classification strategies for the task of retrieving similar-sounding songs in a database. A method for analyzing the classification results is presented that avoids time-consuming subjective listening tests for optimum feature selection and combination. It is based on a calculated "similarity index" that reflects the similarity between specifically embedded similarity pairs. The system's performance, as well as the usefulness of the analysis method, is evaluated by a subjective listening test.
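The abstract does not specify how the "similarity index" is computed; the following is a minimal sketch of one plausible reading, assuming cosine-similarity retrieval over feature vectors and an index defined as the fraction of known-similar pairs whose partner appears in the top-k retrieval list. All function names and the exact metric are illustrative assumptions, not the authors' method.

```python
import numpy as np

def retrieve_top_k(query, database, k):
    """Return indices of the k database items most similar to the query,
    using cosine similarity over feature vectors (illustrative choice)."""
    sims = database @ query / (
        np.linalg.norm(database, axis=1) * np.linalg.norm(query)
    )
    return np.argsort(-sims)[:k]

def similarity_index(features, similar_pairs, k=5):
    """Hypothetical similarity index: fraction of embedded similarity
    pairs (a, b) for which b is retrieved among the top-k results
    when querying with a."""
    hits = 0
    for a, b in similar_pairs:
        if b in retrieve_top_k(features[a], features, k):
            hits += 1
    return hits / len(similar_pairs)
```

With well-separated feature clusters the index approaches 1.0; a feature set that scatters the embedded pairs yields a low index, flagging it as a poor candidate without a listening test.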
Authors:
Kastner, Thorsten; Herre, Juergen; Allamanche, Eric; Hellmuth, Oliver; Ertel, Christian; Schalek, Marion
Affiliation:
Fraunhofer Institute for Integrated Circuits, Erlangen, Germany
AES Conference:
25th International Conference: Metadata for Audio (June 2004)
Paper Number:
5-3
Publication Date:
June 1, 2004
Subject:
Metadata for Audio