The aim of this paper was to create an automatic framework for classifying sound sources in recordings captured with a microphone array, and to evaluate the impact of a sound source separation algorithm on the classification results. The preprocessing for this evaluation consisted of convolving the dataset samples with impulse responses captured with a microphone array, as well as mixing the samples together to simulate their co-presence in a virtual recording scene. This made it possible to evaluate the impact of the separation algorithm on the classification results. Furthermore, such an approach saved many hours of labour that would otherwise have been spent on the recording process itself. Finally, the classification results delivered by different models were evaluated and compared.
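The convolve-and-mix preprocessing described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name `simulate_scene`, the per-source gains, and the toy signals are all hypothetical, and in practice each source would be convolved with a multichannel impulse response per array capsule.

```python
import numpy as np

def simulate_scene(sources, impulse_responses, gains=None):
    """Convolve each dry source with an impulse response captured by the
    microphone array, then sum the results to simulate the sources'
    co-presence in one virtual recording scene.
    (Hypothetical helper illustrating the preprocessing described above.)"""
    if gains is None:
        gains = [1.0] * len(sources)
    # spatialize each source by convolution with its impulse response
    convolved = [g * np.convolve(s, ir)
                 for s, ir, g in zip(sources, impulse_responses, gains)]
    # zero-pad to a common length and mix by summation
    length = max(len(c) for c in convolved)
    mix = np.zeros(length)
    for c in convolved:
        mix[: len(c)] += c
    return mix

# toy example: two short "sources" with unit impulse responses,
# so the mix is just the padded sum of the dry signals
s1 = np.array([1.0, 0.5])
s2 = np.array([0.25, 0.25, 0.25])
irs = [np.array([1.0]), np.array([1.0])]
scene = simulate_scene([s1, s2], irs)
```

A real pipeline would repeat this per array channel and per scene configuration, producing labelled mixtures for the separation and classification experiments without any studio recording time.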
Authors:
Chrul, Michal; Ruminski, Andrzej; Zernicki, Tomasz; Lukasik, Ewa
Affiliations:
Zylia sp. z o. o., Poznan, Poland; Gdansk University of Technology, Gdansk, Poland
AES Conference:
2020 AES International Conference on Audio for Virtual and Augmented Reality (August 2020)
Paper Number:
10481
Publication Date:
August 13, 2020
Subject:
Music Analysis