Methods for automatic sound and music classification are of great value for organising the large amounts of unstructured, user-contributed audio content uploaded to online sharing platforms. Most current methods are based on the audio signal alone, leaving the exploitation of users' annotations and other contextual data largely unexplored. In this paper, we describe a method for the automatic classification of audio clips based solely on user-supplied tags. As a novelty, the method includes a tag expansion step that increases classification accuracy when audio clips are scarcely tagged. Our results suggest that very high accuracies can be achieved in tag-based audio classification (even for poorly annotated clips), and that the proposed tag expansion step can, in some cases, significantly improve classification performance. We are interested in using the described classification method as a first step towards tailoring assistive tagging systems to the particularities of different audio categories, and as a way to improve the overall quality of online user annotations.
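The full paper details the actual pipeline; the sketch below is only an illustrative reconstruction of the general idea described in the abstract, assuming a naive co-occurrence-based tag expansion and a linear classifier over tf-idf tag vectors (scikit-learn). All data, function names, and parameter choices here (expand, min_tags, top_k) are hypothetical and not taken from the paper. The point it captures: when a clip carries too few tags, expansion borrows tags that frequently co-occur with its existing ones before classification.

# Illustrative sketch only: tag-based audio clip classification with a
# simple co-occurrence tag expansion step. The expansion heuristic and
# classifier choice are assumptions for illustration, not the paper's method.
from collections import Counter
from itertools import combinations

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy training data: each clip is a list of user-supplied tags plus a category.
train_tags = [
    ["kick", "drum", "percussion", "loop"],
    ["guitar", "strum", "acoustic"],
    ["rain", "thunder", "field-recording"],
    ["snare", "drum", "hit"],
    ["wind", "field-recording", "nature"],
    ["chord", "guitar", "electric"],
]
train_labels = ["music", "music", "fx", "music", "fx", "music"]

# Build a symmetric tag co-occurrence table from the training set.
cooc = Counter()
for tags in train_tags:
    for a, b in combinations(sorted(set(tags)), 2):
        cooc[(a, b)] += 1
        cooc[(b, a)] += 1

def expand(tags, min_tags=4, top_k=2):
    """If a clip is scarcely tagged, append the tags that most often
    co-occur with its existing tags (a naive stand-in for the paper's
    tag expansion step)."""
    tags = list(dict.fromkeys(tags))  # de-duplicate, keep order
    if len(tags) >= min_tags:
        return tags
    scores = Counter()
    for t in tags:
        for (a, b), n in cooc.items():
            if a == t and b not in tags:
                scores[b] += n
    return tags + [t for t, _ in scores.most_common(top_k)]

# Represent each clip as a space-joined "document" of tags and train a
# linear classifier over tf-idf weighted tag vectors.
clf = make_pipeline(
    TfidfVectorizer(tokenizer=str.split, token_pattern=None),
    LinearSVC(),
)
clf.fit([" ".join(expand(t)) for t in train_tags], train_labels)

# Classify a scarcely tagged clip; expansion fills in likely related
# tags before the tag vector is built.
query = ["drum"]
print(expand(query))                          # original tag plus expanded tags
print(clf.predict([" ".join(expand(query))])) # predicted category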
Authors:
Font, Frederic; Serrà, Joan; Serra, Xavier
Affiliations:
Artificial Intelligence Research Institute (IIIA-CSIC), Barcelona, Spain; Music Technology Group, Universitat Pompeu Fabra, Barcelona, Spain
AES Conference:
53rd International Conference: Semantic Audio (January 2014)
Paper Number:
2-3
Publication Date:
January 27, 2014
Subject:
Semantic Audio Description and Ontologies