AES Convention Papers Forum

A Hierarchical Sonification Framework Based on Convolutional Neural Network Modeling of Musical Genre


Convolutional neural networks achieve strong discriminative performance on various music-related tasks. However, these models operate as "black boxes," so their internal representations are opaque to manual interaction. In this paper, a hierarchical sonification framework comprising a musical genre modeling module and a sample-level sonification module is implemented for aural interaction. The modeling module trains a convolutional neural network on musical signal segments with genre labels. The sonification module then performs sample-level modification according to each convolutional layer, where lower sonification levels produce auralized pulses and higher sonification levels produce audio signals similar to the input musical signal. The use of the proposed framework is demonstrated with a musical style morphing example.
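To make the two-module structure described in the abstract concrete, below is a minimal, hypothetical sketch in PyTorch. It is not the authors' implementation: the network shape (three 1-D convolutional blocks over raw audio), the layer sizes, and the sonification mapping (averaging each layer's channels and resampling the result back to the audio rate) are all illustrative assumptions, since the abstract does not specify how per-layer activations are rendered as sound.

```python
# Hypothetical sketch, not the paper's code: a small 1-D CNN genre
# classifier plus a naive per-layer "sonification" of its activations.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GenreCNN(nn.Module):
    def __init__(self, n_genres: int = 10):
        super().__init__()
        # Three conv blocks over raw audio; kernel/stride sizes are assumptions.
        self.blocks = nn.ModuleList([
            nn.Sequential(nn.Conv1d(1, 16, kernel_size=64, stride=4), nn.ReLU()),
            nn.Sequential(nn.Conv1d(16, 32, kernel_size=32, stride=4), nn.ReLU()),
            nn.Sequential(nn.Conv1d(32, 64, kernel_size=16, stride=4), nn.ReLU()),
        ])
        self.head = nn.Linear(64, n_genres)

    def forward(self, x):
        acts = []                       # per-layer activations for sonification
        for block in self.blocks:
            x = block(x)
            acts.append(x)
        logits = self.head(x.mean(dim=-1))   # global average pooling over time
        return logits, acts

def sonify_layer(act: torch.Tensor, n_samples: int) -> torch.Tensor:
    """Collapse a (batch, channels, frames) activation map to mono audio by
    averaging channels and resampling to the input length. In this crude
    stand-in, early layers retain fine, pulse-like temporal detail while
    deeper layers yield smoother envelopes."""
    mono = act.mean(dim=1, keepdim=True)                 # (batch, 1, frames)
    audio = F.interpolate(mono, size=n_samples, mode="linear",
                          align_corners=False)
    return (audio / (audio.abs().max() + 1e-8)).squeeze(1)  # normalize

if __name__ == "__main__":
    model = GenreCNN()
    segment = torch.randn(1, 1, 22050)   # one-second segment at 22.05 kHz
    logits, acts = model(segment)
    renders = [sonify_layer(a, segment.shape[-1]) for a in acts]
    print(logits.shape, [r.shape for r in renders])
```

Each element of `renders` is one "sonification level": listening to them in order gives a rough aural trace of how the input is transformed layer by layer, which is the kind of interaction the framework targets.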

