This research proposes an approach for computing the time offsets between audio sequences of musical sounds from different instruments, recorded in a distributed fashion, whose raw features are too weak to serve as alignment points. Transformations must therefore be applied to obtain a set of distinctive features from which the offset values can be computed reliably. The main difficulty with such a system is nonlinearity, which prevents the delay from being predicted by a linear function. To solve this problem, the authors propose a neural network model built from long short-term memory (LSTM) layers that learns these feature transformations in a supervised manner, trained with a gradient-descent optimizer. A recurrence matrix is then used to extract timing information from the transformed features at the network's output. With this approach, the algorithm can classify up to 60% of a specific combination from the MedleyDB data set and reduce the search space to five possibilities with accuracy up to 90%, while keeping a precision of 10 ms. This performance is equal to or better than that of state-of-the-art methods.
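The recurrence-matrix step of such a pipeline can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the function `estimate_offset`, the cosine-similarity scoring, and the diagonal-averaging heuristic are choices made for the sketch, standing in for whatever transformed features the trained LSTM would emit.

```python
import numpy as np

def estimate_offset(feat_a, feat_b):
    """Estimate the frame offset between two feature sequences by scoring
    the diagonals of their recurrence (cross-similarity) matrix.
    feat_a, feat_b: (n_frames, n_features) arrays of transformed features."""
    # Normalize each frame so the dot product becomes cosine similarity.
    a = feat_a / (np.linalg.norm(feat_a, axis=1, keepdims=True) + 1e-12)
    b = feat_b / (np.linalg.norm(feat_b, axis=1, keepdims=True) + 1e-12)
    # Recurrence matrix: sim[i, j] = similarity of feat_a frame i to feat_b frame j.
    sim = a @ b.T
    # Diagonal k of sim compares feat_a frame i with feat_b frame i + k,
    # so each diagonal corresponds to one candidate lag of k frames.
    # Average (rather than sum) so short diagonals are not penalized.
    lags = list(range(-(a.shape[0] - 1), b.shape[0]))
    scores = [np.diagonal(sim, offset=k).mean() for k in lags]
    # Best lag: feat_a frame i aligns with feat_b frame i + best.
    return lags[int(np.argmax(scores))]

# Toy check: two views of the same random feature stream, shifted by 5 frames.
rng = np.random.default_rng(0)
x = rng.standard_normal((100, 12))
offset = estimate_offset(x[:-5], x[5:])  # feat_a frame i matches feat_b frame i - 5
```

In the toy check, `feat_a` frame `i` is `x[i]` while `feat_b` frame `j` is `x[j + 5]`, so the matching diagonal is `k = -5`. Converting a frame lag to milliseconds then only requires the hop size of the feature frames.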
Authors:
Pereira, Igor; Distante, Cosimo; Silveira, Luiz F.; Gonçalves, Luiz
Affiliations:
Institute of Applied Sciences and Intelligent Systems, Lecce, Italy; Federal University of Rio Grande do Norte, Natal, Brazil
JAES Volume 68 Issue 3 pp. 157-167; March 2020
Publication Date:
March 15, 2020