AES Journal Forum

Extracting Room Reverberation Time from Speech Using Artificial Neural Networks

A novel method for extracting the reverberation time from reverberated speech utterances is presented. The speech material is restricted to spoken digits; unconstrained discourse is not considered. The reverberation times considered are wideband values covering the frequency range of the speech. A multilayer feedforward neural network is trained on speech examples with known reverberation times generated by a room simulator, the speech signals being preprocessed into short-term RMS values. A second, decision-based neural network is added to improve the reliability of the predictions. In the retrieval phase, the trained networks extract room reverberation times from speech signals picked up in the rooms to an accuracy of 0.1 s. This provides an alternative to traditional measurement methods and facilitates measurement of reverberation times in occupied rooms.
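For readers who want to experiment with the general idea, the pipeline summarized above (short-term RMS envelopes of reverberated speech feeding a multilayer feedforward network trained on simulated rooms with known reverberation times) can be sketched roughly as below. This is a minimal illustration only: the sample rate, frame sizes, noise-burst stand-ins for digit utterances and for the room simulator, and the use of scikit-learn's MLPRegressor are all assumptions of the sketch, not details taken from the paper, and the second decision-based network is omitted.

    # Hypothetical sketch only: parameter choices, the noise-burst "utterances",
    # the exponential-decay "room simulator", and MLPRegressor are assumptions
    # made for illustration; they are not taken from the paper.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    FS = 8000        # sample rate (assumed)
    FRAME = 256      # samples per short-term RMS frame (assumed)
    N_FRAMES = 40    # fixed-length envelope fed to the network (assumed)

    def synth_rir(rt60, length_s=1.0, fs=FS):
        """Exponentially decaying noise as a crude stand-in for a simulated room."""
        t = np.arange(int(length_s * fs)) / fs
        return np.random.randn(t.size) * np.exp(-6.91 * t / rt60)  # -60 dB at t = rt60

    def rms_envelope(x):
        """Short-term RMS values, log-compressed and peak-normalised."""
        n = FRAME * N_FRAMES
        x = np.pad(x[:n], (0, max(0, n - x.size)))
        frames = x.reshape(N_FRAMES, FRAME)
        rms = np.sqrt(np.mean(frames ** 2, axis=1) + 1e-12)
        return np.log10(rms / rms.max())

    def make_example(rt60):
        """Reverberate a noise burst standing in for a spoken digit."""
        dry = np.random.randn(FS // 2) * np.hanning(FS // 2)
        wet = np.convolve(dry, synth_rir(rt60))
        return rms_envelope(wet)

    # Training phase: envelopes of simulated reverberated utterances with known RT60.
    rng = np.random.default_rng(0)
    rt_train = rng.uniform(0.3, 2.0, size=300)
    X = np.stack([make_example(rt) for rt in rt_train])
    net = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=3000).fit(X, rt_train)

    # Retrieval phase: estimate the RT60 of an unseen reverberated utterance.
    print("estimated RT60 (s):", net.predict(make_example(0.8)[None, :])[0])

Feeding the network a log RMS envelope rather than raw samples keeps the input low-dimensional and emphasizes the decay behaviour that the reverberation time governs, which is presumably the motivation for the RMS preprocessing described in the abstract.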

JAES Volume 49, Issue 4, pp. 219-230; April 2001

No AES members have commented on this paper yet.
