In this paper, we explore a machine learning approach to evaluating audio quality in high sound pressure level (SPL) smartphone recordings. Our study is based on perceptual evaluations conducted by technical experts on eight sub-attributes of audio quality (tonal balance, treble, midrange, bass, dynamics, temporal artifacts, spectral artifacts, and other artifacts) for 121 smartphones released between 2019 and 2021. To address this task, we propose a Convolutional Neural Network (CNN) model, which proves to be a simple yet effective choice. We employ a pre-augmentation technique to enlarge the training set, creating a comprehensive dataset of recording spectrograms and corresponding perceptual evaluation scores. Our findings indicate that, while the CNN model has certain limitations, it shows promising capability in predicting evaluation scores, particularly for tonal balance, bass, and spectral artifacts.
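As a rough illustration of the approach described in the abstract, the sketch below shows a small CNN that maps a recording spectrogram to eight sub-attribute scores. The architecture, input dimensions, and framework (PyTorch) are assumptions for illustration; the paper's actual model, preprocessing, and augmentation details are not reproduced here.

```python
# Minimal sketch: CNN regression from a spectrogram to eight perceptual scores.
# Hypothetical architecture and input shape; not the authors' exact model.
import torch
import torch.nn as nn

class AudioQualityCNN(nn.Module):
    def __init__(self, n_attributes: int = 8):
        super().__init__()
        # Three convolutional blocks reduce the (1, n_mels, n_frames) input
        # to a compact feature map; global pooling handles variable clip lengths.
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # One regression output per sub-attribute (tonal balance, treble, ..., other artifacts).
        self.head = nn.Linear(64, n_attributes)

    def forward(self, spectrogram: torch.Tensor) -> torch.Tensor:
        # spectrogram: (batch, 1, n_mels, n_frames) -> scores: (batch, 8)
        x = self.features(spectrogram).flatten(1)
        return self.head(x)

# Example usage with a dummy batch of 128-bin spectrograms, 500 frames long.
model = AudioQualityCNN()
scores = model(torch.randn(4, 1, 128, 500))  # -> shape (4, 8)
```

Such a model would typically be trained with a regression loss (e.g., MSE) against the expert evaluation scores, with the pre-augmentation step applied to the recordings before spectrogram extraction to increase the effective size of the training set.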
Authors:
Guelen, Philippe; Zhao, Dan; Terra Pizutti Dos Santos, Pietro;
Drouadene, Arthur; Bacle, Justin
Affiliations:
DXOMARK
Express Paper 130; AES Convention 155; October 2023
Publication Date:
October 25, 2023
Subject:
Perception