Loudness normalization based on clean dialogue loudness yields a more consistent dialogue level than normalization based on the loudness of the full program measured during speech or signal activity. Existing loudness metering methods cannot estimate the clean dialogue loudness from mixture signals comprising speech and background sounds, e.g. music, sound effects, or environmental sounds. This paper proposes training deep neural networks, with input signals and target values obtained from isolated speech and background recordings, to estimate the clean dialogue loudness. In addition, the proposed method outputs loudness estimates for the background and mixture signals as well as voice activity detection. The presented evaluation reports a mean absolute error of 1.5 LU for the momentary, 0.5 LU for the short-term, and 0.27 LU for the long-term loudness of the clean dialogue given the mixture signal.
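The training setup described in the abstract can be illustrated with a minimal sketch: isolated speech and background signals are mixed to form the network input, while the loudness of the clean speech (available only because the stems are isolated) serves as the target. All function names here are hypothetical, and the loudness measure is a simplified mean-square power proxy rather than the full ITU-R BS.1770 K-weighted gating used in practice.

```python
import numpy as np

def loudness_proxy_db(x, eps=1e-12):
    # Simplified loudness proxy: mean-square power in dB.
    # Stands in for ITU-R BS.1770 loudness (no K-weighting, no gating).
    return 10.0 * np.log10(np.mean(x ** 2) + eps)

def make_training_pair(speech, background, vad_threshold=1e-6):
    """Build one (input, targets) pair from isolated stems.

    The mixture is the network input; the targets are the loudness of
    the clean dialogue, the background, and the mixture, plus a VAD flag,
    mirroring the multiple outputs described in the paper.
    """
    mixture = speech + background
    targets = {
        "dialogue_loudness": loudness_proxy_db(speech),
        "background_loudness": loudness_proxy_db(background),
        "mixture_loudness": loudness_proxy_db(mixture),
        "vad": float(np.mean(speech ** 2) > vad_threshold),
    }
    return mixture, targets

# Example: one 400 ms "momentary" block at 48 kHz, with synthetic
# noise standing in for speech and background stems.
rng = np.random.default_rng(0)
n = int(0.4 * 48000)
speech = 0.1 * rng.standard_normal(n)       # louder stem
background = 0.05 * rng.standard_normal(n)  # quieter stem
mix, targets = make_training_pair(speech, background)
```

A real pipeline would slice the stems into the momentary (400 ms), short-term (3 s), and long-term windows evaluated in the paper and feed spectral features of the mixture to the network; the key point is that the clean-dialogue target is computable only at training time, from the isolated stems.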
Authors:
Uhle, Christian; Kratschmer, Michael; Travaglini, Alessandro; Neugebauer, Bernhard
Affiliations:
Fraunhofer Institute for Integrated Circuits IIS, Erlangen, Germany; International Audio Laboratories Erlangen, Germany; DSP Solutions, Regensburg, Germany (see document for exact affiliation information)
AES Conference:
2020 AES International Conference on Audio for Virtual and Augmented Reality (August 2020)
Paper Number:
10479
Publication Date:
August 13, 2020
Subject:
Acoustic Measurement