AES Journal Forum

A General Framework for Incorporating Time–Frequency Domain Sparsity in Multichannel Speech Dereverberation

Effective speech dereverberation is a prerequisite in applications such as hands-free telephony, voice-based human-machine interfaces, and hearing aids. Blind multichannel speech dereverberation methods based on multichannel linear prediction (MCLP) can estimate the dereverberated speech component without any knowledge of the room acoustics. This is achieved by estimating the undesired reverberant component and subtracting it from the reference microphone signal. This report presents a general framework that exploits sparsity in the time–frequency domain for MCLP-based speech dereverberation. The framework combines a wideband or a narrowband signal model with either an analysis or a synthesis sparsity prior, and generalizes state-of-the-art MCLP-based speech dereverberation methods.
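For context, the sketch below illustrates the narrowband MCLP idea the abstract describes: per STFT frequency bin, the late reverberant component is predicted from delayed past frames of all channels and subtracted from the reference channel. It is a generic WPE-style weighted-least-squares variant with simple inverse-power weights standing in for a time-frequency sparsity prior; the function name mclp_dereverb_bin and the parameters delay, order, and iters are illustrative assumptions, not the paper's estimator.

```python
import numpy as np

def mclp_dereverb_bin(X, ref=0, delay=3, order=10, iters=3, eps=1e-8):
    """Dereverberate one STFT frequency bin with narrowband MCLP (sketch).

    X   : complex array, shape (num_mics, num_frames) -- STFT coefficients
          of all microphone signals at a single frequency bin.
    ref : index of the reference microphone.
    Returns the dereverberated reference-channel coefficients.
    """
    M, N = X.shape
    # Stack delayed past frames of all channels; the prediction delay keeps
    # the direct sound and early reflections out of the predictor.
    Y = np.zeros((M * order, N), dtype=complex)
    for tau in range(order):
        shift = delay + tau
        Y[tau * M:(tau + 1) * M, shift:] = X[:, :N - shift]

    d = X[ref].copy()
    for _ in range(iters):
        # Inverse-power weights: a simple stand-in for a time-frequency
        # sparsity prior on the desired (dereverberated) signal.
        w = 1.0 / np.maximum(np.abs(d) ** 2, eps)
        # Weighted least-squares estimate of the prediction coefficients g.
        R = (Y * w) @ Y.conj().T
        p = (Y * w) @ X[ref].conj()
        g = np.linalg.solve(R + eps * np.eye(M * order), p)
        # Subtract the predicted (late reverberant) component from the reference.
        d = X[ref] - g.conj() @ Y
    return d
```

Applying this routine independently to each frequency bin of a multichannel STFT yields a dereverberated reference signal; the framework in the paper generalizes this scheme by combining wideband or narrowband signal models with analysis or synthesis sparsity priors in place of the simple weights used here.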

JAES Volume 65 Issue 1/2 pp. 17-30; January 2017

No AES members have commented on this paper yet.
