Acoustic event classification is of interest for various audio applications. The aim of this paper is to investigate the use of a number of speech- and audio-based features for the task of acoustic event classification. Several features originating from audio signal analysis are compared with features typically used in speech processing, such as mel-frequency cepstral coefficients (MFCCs). In addition, approaches to fusing the information obtained from multichannel recordings of an acoustic event are investigated. Experiments are performed using a Gaussian mixture model (GMM) classifier and audio signals recorded with several scattered microphones.
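The abstract names a GMM classifier over acoustic features with fusion across microphone channels. The paper's exact features, classes, and fusion rule are not given here, so the sketch below is a generic illustration: one GMM is fit per event class on synthetic "MFCC-like" vectors, and per-channel log-likelihoods are combined by a simple late-fusion average. All class names, dimensions, and the fusion rule are assumptions, not the paper's method.

```python
# Sketch of per-class GMM classification with late fusion across
# channels. Features, class labels, and the averaging fusion rule are
# illustrative assumptions; the paper's own setup may differ.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Synthetic "MFCC-like" training features for two hypothetical classes.
train = {
    "door_slam": rng.normal(loc=0.0, scale=1.0, size=(200, 13)),
    "speech":    rng.normal(loc=3.0, scale=1.0, size=(200, 13)),
}

# Fit one GMM per class (a standard generative-classifier setup).
models = {
    label: GaussianMixture(n_components=4, random_state=0).fit(feats)
    for label, feats in train.items()
}

def classify_multichannel(channels):
    """Average per-channel log-likelihoods per class (late fusion),
    then pick the class with the highest fused score."""
    fused = {
        label: np.mean([gmm.score(frames) for frames in channels])
        for label, gmm in models.items()
    }
    return max(fused, key=fused.get)

# Two synthetic channels observing the same "speech"-like event.
event_channels = [
    rng.normal(loc=3.0, scale=1.0, size=(50, 13)),
    rng.normal(loc=3.0, scale=1.2, size=(50, 13)),
]
print(classify_multichannel(event_channels))  # expected: speech
```

Averaging log-likelihoods is only one possible fusion strategy; feature-level concatenation or decision-level voting across channels are common alternatives the paper may also consider.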
Authors: Krause, Daniel; Kowalczyk, Konrad
Affiliation: AGH University of Science and Technology, Kraków, Poland
AES Convention: 145 (October 2018)
Paper Number: 10103
Publication Date: October 7, 2018
Subject: Semantic Audio