Due to the variability in characteristics of audio scenes, some scenes can naturally be recognized earlier than others. In this work, rather than using equal-length snippets for all scene categories, as is common in the literature, we study the temporal extent to which an audio scene can be reliably recognized given state-of-the-art models. Moreover, as model fusion with deep network ensembles is prevalent in audio scene classification, we further study whether, and if so, when model fusion is necessary for this task. To achieve these goals, we employ two single-network systems relying on a convolutional neural network and a recurrent neural network for classification, as well as early fusion and late fusion of these networks. Experimental results on the LITIS-Rouen dataset show that some scenes can be reliably recognized within a few seconds, while other scenes require significantly longer durations. In addition, model fusion is shown to be most beneficial when the signal length is short.
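For concreteness, the late-fusion variant mentioned in the abstract can be sketched as combining the class posteriors of the two networks and aggregating them over time. The snippet below is a minimal illustration, not the authors' implementation: the function names, the equal-weight average, and the mean pooling over snippets are assumptions made for this example.

```python
import numpy as np

def late_fusion(cnn_probs: np.ndarray, rnn_probs: np.ndarray,
                weight: float = 0.5) -> np.ndarray:
    """Combine class posteriors of two classifiers by weighted averaging.

    cnn_probs, rnn_probs: arrays of shape (num_snippets, num_classes)
    holding per-snippet class probabilities from each network.
    weight: contribution of the CNN posteriors (1 - weight for the RNN);
    the equal weighting here is an assumption, not the paper's setting.
    """
    return weight * cnn_probs + (1.0 - weight) * rnn_probs

def classify_scene(fused_probs: np.ndarray) -> int:
    """Aggregate snippet-level posteriors over time (here: mean pooling)
    and return the index of the class with the highest probability."""
    return int(np.argmax(fused_probs.mean(axis=0)))

# Toy usage: 3 snippets of one recording, 19 scene classes (LITIS-Rouen).
rng = np.random.default_rng(0)
cnn = rng.dirichlet(np.ones(19), size=3)   # placeholder CNN posteriors
rnn = rng.dirichlet(np.ones(19), size=3)   # placeholder RNN posteriors
print(classify_scene(late_fusion(cnn, rnn)))
```

Early fusion, by contrast, would combine the networks' intermediate features before a shared classification stage rather than averaging their output probabilities.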
Authors:
Phan, Huy; Chén, Oliver Y.; Koch, Philipp; Pham, Lam; McLoughlin, Ian; Mertins, Alfred; De Vos, Maarten
Affiliations:
University of Kent, UK; University of Oxford, UK; University of Lübeck, Germany; University of Kent, UK; University of Kent, UK; University of Lübeck, Germany; University of Oxford, UK
AES Conference:
2019 AES International Conference on Audio Forensics (June 2019)
Paper Number:
16
Publication Date:
June 8, 2019