AES Conference Papers Forum

Comparison of Performance in Binaural Sound Source Localisation using Convolutional Neural Networks for differing Feature Representations

Binaural sound source localisation is increasingly being achieved by means of Convolutional Neural Networks (CNNs). These networks take a time-frequency representation of audio as input and use it to estimate the direction of arrival of a sound. Previous works have used a variety of time-frequency representations, but never solely magnitude spectra, leaving the importance of magnitude-only features in full azimuthal binaural sound source localisation poorly understood. This work addresses that gap by testing the performance of a CNN trained and tested on four different time-frequency representations: the Mel-Spectrogram, Gammatonegram, Mel-Frequency Cepstrum, and Gammatone-Frequency Cepstrum. From these tests, it was found that spectrograms are suitable for the task of full azimuthal binaural sound source localisation.

Express Paper 59; AES Convention 154; May 2023
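
As an illustration of the kind of input features the abstract describes, the sketch below shows how a two-channel (left/right ear) Mel-Spectrogram and Mel-Frequency Cepstrum input could be assembled for a CNN. This is a minimal sketch only, assuming Python with librosa and a hypothetical binaural recording "binaural_clip.wav"; the paper does not specify its tooling, and the Gammatonegram and Gammatone-Frequency Cepstrum features would additionally require a gammatone filterbank implementation.

import numpy as np
import librosa

# Hypothetical two-channel (binaural) recording; the filename is illustrative only.
y, sr = librosa.load("binaural_clip.wav", sr=None, mono=False)
left, right = y[0], y[1]

def mel_spectrogram_db(x, sr, n_mels=64):
    # Log-magnitude Mel-Spectrogram of one ear signal.
    S = librosa.feature.melspectrogram(y=x, sr=sr, n_mels=n_mels)
    return librosa.power_to_db(S, ref=np.max)

def mel_cepstrum(x, sr, n_mfcc=20):
    # Mel-Frequency Cepstral Coefficients of one ear signal.
    return librosa.feature.mfcc(y=x, sr=sr, n_mfcc=n_mfcc)

# Stack the left- and right-ear features along a leading channel axis,
# giving CNN inputs of shape (2, n_bins, n_frames).
mel_input = np.stack([mel_spectrogram_db(left, sr), mel_spectrogram_db(right, sr)])
mfcc_input = np.stack([mel_cepstrum(left, sr), mel_cepstrum(right, sr)])

print(mel_input.shape, mfcc_input.shape)

Presenting both ear channels to the network means it can only exploit interaural level and spectral cues: magnitude-based features of this kind discard phase, and with it interaural time difference information, which is precisely why the suitability of magnitude-only spectra for full azimuthal localisation is the question the paper examines.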
