Ontologies have been established for knowledge sharing and are widely used to structure domains of interest conceptually. With the growing amount of data on the internet, the manual annotation and development of ontologies is becoming increasingly impractical. We propose a hybrid system that develops ontologies from audio signals automatically, in order to assist ontology engineers. The method is examined using various musical instruments from the wind and string families, classified using timbre features extracted from audio. To obtain models of the analysed instrument recordings, we use K-means clustering to determine an optimised codebook of Line Spectral Frequencies (LSFs) or Mel-Frequency Cepstral Coefficients (MFCCs). The system was tested using two classification techniques: a Multi-Layer Perceptron (MLP) neural network and Support Vector Machines (SVMs). We then apply Formal Concept Analysis (FCA) to derive a lattice of concepts, which is transformed into an ontology using the Web Ontology Language (OWL). The system was evaluated using Multivariate Analysis of Variance (MANOVA), with the feature and classifier attributes as independent variables and the lexical and taxonomic evaluation metrics as dependent variables.
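The codebook step described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes per-frame feature vectors (standing in for MFCCs or LSFs) are already extracted, learns a K-means codebook over them with plain NumPy, and summarises a recording as a normalised histogram of codeword assignments. All function names and the toy data are hypothetical.

```python
import numpy as np

def kmeans_codebook(frames, k, iters=20, seed=0):
    """Learn a k-entry codebook from feature frames of shape (n, d) via K-means."""
    rng = np.random.default_rng(seed)
    # initialise centres from randomly chosen frames
    centers = frames[rng.choice(len(frames), size=k, replace=False)]
    for _ in range(iters):
        # assign each frame to its nearest codeword (Euclidean distance)
        dists = np.linalg.norm(frames[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # move each centre to the mean of its assigned frames
        for j in range(k):
            pts = frames[labels == j]
            if len(pts):
                centers[j] = pts.mean(axis=0)
    return centers

def encode(frames, centers):
    """Represent a recording as a normalised histogram of codeword assignments."""
    dists = np.linalg.norm(frames[:, None, :] - centers[None, :, :], axis=2)
    hist = np.bincount(dists.argmin(axis=1), minlength=len(centers))
    return hist / hist.sum()

# Toy stand-in for 13-dimensional per-frame features of one recording.
rng = np.random.default_rng(1)
frames = np.vstack([rng.normal(0.0, 1.0, (50, 13)),
                    rng.normal(5.0, 1.0, (50, 13))])
codebook = kmeans_codebook(frames, k=2)
vec = encode(frames, codebook)  # fixed-length vector, suitable for an MLP or SVM
```

The fixed-length `vec` is the kind of representation that could then be fed to the MLP or SVM classifiers mentioned in the abstract.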
Authors: Kolozali, Sefki; Fazekas, György; Barthet, Mathieu; Sandler, Mark
Affiliation: Queen Mary University of London, London, UK
AES Conference: 53rd International Conference: Semantic Audio (January 2014)
Paper Number: P1-7
Publication Date: January 27, 2014
Subject: Semantic Audio Description and Ontologies