In this paper we study how sound quality is evaluated by groups of assessors with different levels of hearing loss. Formal listening tests using the Basic Audio Quality scale were designed around 22 headphones spanning a wide range of qualities and sound quality characteristics. The tests were performed with two formally selected listening panels: one with normal hearing (NH) and one with hearing impairment (HI), comprising mild (N2) or moderate (N3) hearing loss characteristics. It is shown not only that the two panels evaluate sound quality consistently within each panel, but also that there are systematic changes in the manner in which hearing loss impacts the evaluation and ranking of the devices under study. Using this data we successfully train machine learning algorithms to predict the sound quality for the two assessor panels. The prediction performance for each panel is NH: RMSE = 7.1 ± 3.0, PCC = 0.91 ± 0.13; HI: RMSE = 8.7 ± 2.4, PCC = 0.91 ± 0.12. Whilst it may not be practical to run listening tests with multiple panels of assessors, we demonstrate here that machine learning-based models can be practically and cost-effectively employed to predict the perception of multiple assessor groups rapidly and simultaneously.
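The two performance metrics reported above, root-mean-square error (RMSE) and Pearson correlation coefficient (PCC), are standard measures of how closely model predictions track listening-test scores. A minimal sketch of both, using hypothetical predicted and observed Basic Audio Quality scores rather than the paper's data:

```python
import math

# Hypothetical predicted vs. observed Basic Audio Quality scores for five
# devices (illustrative values only; not taken from the paper).
predicted = [72.0, 55.0, 81.0, 40.0, 63.0]
observed = [70.0, 60.0, 78.0, 45.0, 61.0]

def rmse(y_pred, y_true):
    """Root-mean-square error between predictions and listening-test means."""
    return math.sqrt(
        sum((p - t) ** 2 for p, t in zip(y_pred, y_true)) / len(y_true)
    )

def pcc(y_pred, y_true):
    """Pearson correlation coefficient between predictions and observations."""
    n = len(y_true)
    mean_p = sum(y_pred) / n
    mean_t = sum(y_true) / n
    cov = sum((p - mean_p) * (t - mean_t) for p, t in zip(y_pred, y_true))
    std_p = math.sqrt(sum((p - mean_p) ** 2 for p in y_pred))
    std_t = math.sqrt(sum((t - mean_t) ** 2 for t in y_true))
    return cov / (std_p * std_t)

print(f"RMSE = {rmse(predicted, observed):.2f}")
print(f"PCC  = {pcc(predicted, observed):.2f}")
```

A low RMSE with a high PCC, as reported for both panels, indicates the models both rank the devices correctly and land close to the absolute scores on the Basic Audio Quality scale.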
Authors:
Volk, Christer P.; Nordby, Jon; Stegenborg-Andersen, Tore; Zacharov, Nick
Affiliations:
FORCE Technology, SenseLab, Hørsholm, Denmark; Soundsensing, Oslo, Norway
AES Convention:
150 (May 2021)
Paper Number:
10494
Publication Date:
May 24, 2021
Subject:
Psychoacoustics
This paper is Open Access, which means it can be downloaded for free.