Chord recognition systems use temporal models to post-process frame-wise chord predictions from acoustic models. Traditionally, first-order models such as Hidden Markov Models were used for this task, with recent work suggesting the application of Recurrent Neural Networks instead. In this paper, we argue that learning complex temporal models at the level of audio frames is futile in principle, and that non-Markovian models do not perform better than their first-order counterparts. We support our argument through experiments on the McGill Billboard dataset. We show that when learning complex temporal models at the frame level, improvements in chord sequence modelling are marginal, and that these improvements do not translate into better performance when the models are applied within a full chord recognition system.
Authors:
Korzeniowski, Filip; Widmer, Gerhard
Affiliation:
Johannes Kepler University, Linz, Austria
AES Conference:
2017 AES International Conference on Semantic Audio (June 2017)
Paper Number:
P2-6
Publication Date:
June 13, 2017
Subject:
Semantic Audio