The results of experiments in which subjects rated the perceived quality of speech and music that had been subjected to various forms of both linear and nonlinear distortion are reported. Experiment 1 made use of artificial distortions (such as ripples in frequency response combined with peak clipping). Experiment 2 included both artificial distortions and real distortions introduced by transducers. The results were compared with the predictions of a new model based on a weighted sum of predictions for linear distortion alone and for nonlinear distortion alone. There was a very good correspondence between the obtained and predicted ratings. Correlations were greater than 0.85 for speech stimuli and 0.90 for music stimuli. It is concluded that the new model can predict accurately the perceived quality of speech and music subjected to combined linear and nonlinear distortion.
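The abstract describes the model only at a high level: an overall quality prediction formed as a weighted sum of a prediction for linear distortion alone and one for nonlinear distortion alone. A minimal sketch of that combination step, assuming hypothetical single-distortion quality scores and an illustrative weight (the function name and default weight are assumptions, not the authors' actual parameters):

```python
def combined_quality(q_linear: float, q_nonlinear: float, w: float = 0.5) -> float:
    """Combine two single-distortion quality predictions into one score.

    q_linear:    predicted quality under linear distortion alone (hypothetical scale)
    q_nonlinear: predicted quality under nonlinear distortion alone (same scale)
    w:           weight on the linear-distortion term; illustrative value only.
    """
    if not 0.0 <= w <= 1.0:
        raise ValueError("weight must lie in [0, 1]")
    # Weighted sum, as described in the abstract; the paper itself
    # specifies how the weight and the two component predictions are derived.
    return w * q_linear + (1.0 - w) * q_nonlinear
```

For example, with equal weighting, component predictions of 8.0 and 4.0 combine to an overall score of 6.0.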
Authors:
Moore, Brian C. J.; Tan, Chin-Tuan; Zacharov, Nick; Mattila, Ville-Veikko
Affiliations:
University of Cambridge, Cambridge, UK; Nokia Research Center, Tampere, Finland
JAES Volume 52 Issue 12 pp. 1228-1244; December 2004
Publication Date:
December 15, 2004