Effective speech dereverberation is a prerequisite in applications such as hands-free telephony, voice-based human-machine interfaces, and hearing aids. Blind multichannel speech dereverberation methods based on multichannel linear prediction (MCLP) can estimate the dereverberated speech component without any knowledge of the room acoustics. This is achieved by estimating the undesired reverberant component and subtracting it from the reference microphone signal. This report presents a general framework for MCLP-based speech dereverberation that exploits sparsity of the speech signal in the time–frequency domain. The framework combines a wideband or a narrowband signal model with either an analysis or a synthesis sparsity prior, and generalizes state-of-the-art MCLP-based speech dereverberation methods.
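The "estimate and subtract" idea behind MCLP can be illustrated with a minimal sketch. The code below is not the authors' method; it is a simplified, WPE-style narrowband variant for a single frequency bin, in which a multichannel prediction filter is estimated by iteratively reweighted least squares (the weights act as a crude sparsity prior on the desired speech), and the predicted late-reverberant tail is subtracted from the reference channel. The function name and all parameter choices (`delay`, `order`, `iters`) are illustrative assumptions, not values from the paper.

```python
import numpy as np

def mclp_dereverb(X, delay=3, order=10, iters=3, eps=1e-8):
    """Sketch of narrowband MCLP dereverberation for one frequency bin.

    X     : (M, N) complex STFT coefficients, M microphones, N frames.
    delay : prediction delay (frames) protecting the direct path.
    order : prediction filter length per channel (frames).
    Returns the dereverberated reference-channel signal, shape (N,).
    All names and defaults are illustrative, not the paper's settings.
    """
    M, N = X.shape
    x_ref = X[0]                      # reference microphone
    d = x_ref.copy()                  # initial desired-signal estimate
    # Multichannel prediction matrix: row n stacks the delayed frames
    # X[:, n - delay], ..., X[:, n - delay - order + 1] (zeros before start).
    Phi = np.zeros((N, M * order), dtype=complex)
    for n in range(N):
        for k in range(order):
            t = n - delay - k
            if t >= 0:
                Phi[n, k * M:(k + 1) * M] = X[:, t]
    for _ in range(iters):
        # Reweighting: inverse of the current desired-signal power.
        # Small powers get large weights, promoting a sparse residual.
        w = 1.0 / np.maximum(np.abs(d) ** 2, eps)
        A = (Phi.conj().T * w) @ Phi
        b = (Phi.conj().T * w) @ x_ref
        g = np.linalg.solve(A + eps * np.eye(M * order), b)
        d = x_ref - Phi @ g           # subtract predicted reverberant part
    return d
```

In practice such a filter would be estimated independently in every frequency bin of an STFT, and the delay keeps the direct sound and early reflections out of the prediction so that only the late reverberation is cancelled.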
Authors:
Jukic, Ante; van Waterschoot, Toon; Gerkmann, Timo; Doclo, Simon
Affiliations:
University of Oldenburg, Department of Medical Physics and Acoustics, and Cluster of Excellence Hearing4All, Oldenburg, Germany; KU Leuven, Department of Electrical Engineering (ESAT-STADIUS / ETC), Leuven, Belgium; University of Hamburg, Department of Informatics, Hamburg, Germany
JAES Volume 65 Issue 1/2 pp. 17-30; January 2017
Publication Date:
February 16, 2017