We present the engineering underlying a consumer application that helps music industry professionals find audio clips and samples of personal interest within their large audio libraries, which typically consist of heterogeneously labeled clips supplied by various vendors. We enable users to train an indexing system using their own custom tags (e.g., instruments, genres, moods) by means of convolutional neural networks operating on spectrograms. Since the intended users are not data scientists and may not possess the required computational resources (i.e., Graphics Processing Units, GPUs), our primary contributions consist of (i) designing an intuitive user experience for a local client application that helps users create representative spectrogram datasets, and (ii) "seamless" integration with a cloud-based GPU server for efficient neural network training.
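The front end of such a system converts each audio clip into a spectrogram before it is fed to the network. As a minimal sketch of that step (the eBrief does not specify its exact parameters, so the FFT size, hop length, and windowing below are illustrative assumptions), a magnitude spectrogram can be computed with a short-time Fourier transform:

```python
import numpy as np

def spectrogram(signal, n_fft=512, hop=256):
    """Magnitude spectrogram via a Hann-windowed STFT.
    n_fft and hop are hypothetical values, not taken from the paper."""
    window = np.hanning(n_fft)
    frames = []
    for start in range(0, len(signal) - n_fft + 1, hop):
        frame = signal[start:start + n_fft] * window
        frames.append(np.abs(np.fft.rfft(frame)))
    # Shape: (frequency bins, time frames), the 2-D "image" a CNN consumes
    return np.array(frames).T

# Toy example: one second of a 440 Hz sine at a 16 kHz sample rate
sr = 16000
t = np.arange(sr) / sr
sig = np.sin(2 * np.pi * 440 * t)
spec = spectrogram(sig)
print(spec.shape)  # (n_fft // 2 + 1, number of frames)
```

Stacks of such arrays, grouped by the user's custom tags, would form the training set; the convolutional network then treats each spectrogram as a single-channel image for classification.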
Authors: Hawley, Scott; Bagley, Jason; Porter, Brett; Traynham, Daisey
Affiliations: Belmont University, Nashville, TN, USA; Art+Logic, Pasadena, CA, USA; Art+Logic, Fanwood, NJ, USA
AES Convention: 147 (October 2019) eBrief: 562
Publication Date: October 8, 2019
Subject: Applications in Audio