In this paper, we propose a pixel-based attention (PBA) module for acoustic scene classification (ASC). By compressing the input spectrogram along the spatial dimension, PBA obtains global information about the spectrogram. PBA then applies an attention weight to each pixel of each channel through two convolutional layers that combine this global information with the input. The attention-weighted spectrogram is scaled by a gamma coefficient and added to the original spectrogram, yielding more effective spectrogram features for training the network model. Furthermore, this paper implements a convolutional neural network (CNN) based on PBA (PB-CNN) and compares its classification performance on task 1 of the Detection and Classification of Acoustic Scenes and Events (DCASE) 2016 Challenge with a CNN based on time attention (TB-CNN), a CNN based on frequency attention (FB-CNN), and a pure CNN. The experimental results show that the proposed PB-CNN achieves the highest accuracy of the four CNNs at 89.2%: 1.9% higher than TB-CNN (87.3%), 2.6% higher than FB-CNN (86.6%), and 3% higher than the pure CNN (86.2%). Compared with the DCASE 2016 baseline system, PB-CNN improved accuracy by 12%, and its 89.2% accuracy was the highest among all submitted single models.
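The abstract's description of PBA — spatial compression for global information, two convolutional layers producing a per-pixel, per-channel attention map, and a gamma-scaled residual connection — can be sketched as a PyTorch module. This is an illustrative reading of the abstract only, not the paper's exact architecture: the kernel sizes, the channel-reduction ratio, the sigmoid gating, the way global context is combined with the input, and the zero initialization of gamma are all assumptions.

```python
import torch
import torch.nn as nn


class PixelBasedAttention(nn.Module):
    """Hedged sketch of a pixel-based attention (PBA) block.

    Assumptions not specified by the abstract: 1x1 kernels, a
    channel-reduction ratio of 4, sigmoid gating, additive fusion
    of global context, and gamma initialized to zero.
    """

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        # Two convolutional layers map the input (fused with pooled
        # global information) to one attention weight per pixel per channel.
        self.conv1 = nn.Conv2d(channels, channels // reduction, kernel_size=1)
        self.conv2 = nn.Conv2d(channels // reduction, channels, kernel_size=1)
        self.relu = nn.ReLU(inplace=True)
        self.sigmoid = nn.Sigmoid()
        # Learnable gamma coefficient; starting at zero means the block
        # initially passes the original spectrogram through unchanged.
        self.gamma = nn.Parameter(torch.zeros(1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Global information: compress over the spatial (time-frequency) axes.
        g = x.mean(dim=(2, 3), keepdim=True)  # shape (B, C, 1, 1)
        # Combine global context with every pixel, then produce the
        # per-pixel, per-channel attention map in [0, 1].
        w = self.sigmoid(self.conv2(self.relu(self.conv1(x + g))))
        # Gamma-scaled attended features superimposed on the original input.
        return self.gamma * (w * x) + x
```

Applied to a batch of spectrograms, e.g. `PixelBasedAttention(16)(torch.randn(2, 16, 40, 128))`, the block preserves the input shape, so it can be dropped between convolutional stages of a CNN without changing the surrounding layer sizes.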
Authors:
Wang, Xingmei; Xu, Yichao; Shi, Jiahao; Teng, Xuyang
Affiliations:
College of Computer Science and Technology, Harbin Engineering University, Harbin, 150001, People’s Republic of China; College of Communication Engineering, Hangzhou Dianzi University, Hangzhou, 310018, People’s Republic of China
JAES Volume 68 Issue 11 pp. 843-855; November 2020
Publication Date:
December 21, 2020
This paper is Open Access and may be downloaded free of charge.