Knowing the number of sources present in a mixture is useful for many computer audition problems such as polyphonic music transcription, source separation, and speech enhancement. Most existing algorithms for these applications require the user to provide this number, limiting the possibility of full automation. In this paper we explore several probabilistic and machine learning approaches to autonomous source-number estimation. We then propose an implementation of a multi-class classification method using convolutional neural networks for musical polyphony estimation. In addition, we use these results to improve the performance of an instrument classifier trained on the same dataset. The final classification results for both networks show that this method is a promising starting point for further advances in unsupervised source counting and separation algorithms for music and speech.
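The abstract frames polyphony estimation as multi-class classification, where each class is a candidate source count. As a minimal sketch of that idea, the toy forward pass below maps a magnitude-spectrogram patch to a probability over counts via convolution, pooling, and softmax. The architecture, shapes, class range (1-5 sources), and random weights are illustrative assumptions, not the paper's actual network or parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

N_CLASSES = 5            # assumed: count 1..5 simultaneous sources
FREQ, TIME = 64, 32      # assumed spectrogram patch size

def relu(x):
    return np.maximum(x, 0.0)

def conv2d_valid(x, k):
    """Naive 'valid' 2-D cross-correlation of patch x with kernel k."""
    fh, fw = k.shape
    oh, ow = x.shape[0] - fh + 1, x.shape[1] - fw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(x[i:i + fh, j:j + fw] * k)
    return out

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def predict_polyphony(spec, kernels, w_out, b_out):
    """Forward pass: conv -> ReLU -> global average pool -> linear -> softmax."""
    feats = np.array([relu(conv2d_valid(spec, k)).mean() for k in kernels])
    probs = softmax(feats @ w_out + b_out)
    return probs, int(np.argmax(probs)) + 1   # classes map to counts 1..N_CLASSES

# Random (untrained) parameters -- for shape and data-flow illustration only.
kernels = rng.standard_normal((8, 5, 5)) * 0.1
w_out = rng.standard_normal((8, N_CLASSES)) * 0.1
b_out = np.zeros(N_CLASSES)

spec = np.abs(rng.standard_normal((FREQ, TIME)))   # stand-in magnitude spectrogram
probs, count = predict_polyphony(spec, kernels, w_out, b_out)
```

In a real system the network would be trained on labeled mixtures, and the same learned features could feed the instrument classifier the abstract mentions; here the point is only the classification-over-counts formulation.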
Authors:
Kareer, Saarish; Basu, Sattwik
Affiliation:
University of Rochester, Rochester, NY, USA
AES Convention:
144 (May 2018)
eBrief:434
Publication Date:
May 14, 2018
Subject:
e-Brief Posters—2