
AES Convention Papers Forum

Extraction of Speech Transmission Index from Speech Signals Using Artificial Neural Networks


This paper presents a novel method for extracting the Speech Transmission Index (STI) from reverberated speech utterances using an artificial neural network. The network is trained on convolutions of anechoic speech signals with simulated impulse responses of rooms of various kinds. A time-to-frequency-domain transformation algorithm is proposed as the pre-processor, and a multi-layer feed-forward neural network trained by back-propagation is adopted. Once trained, the neural network can accurately estimate the STI from speech signals received by a microphone in a room. Because this approach utilises a naturalistic sound source, speech, it has the potential to facilitate measurement in occupied rooms.
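The abstract outlines a pipeline: reverberant speech is synthesised by convolving anechoic speech with simulated room impulse responses, reduced to frequency-domain features by a pre-processor, and mapped to STI by a feed-forward network trained with back-propagation. The sketch below is a minimal illustration of that kind of pipeline, not the paper's actual design; the band log-energy features, the network size, and the placeholder regression targets are assumptions made only for the example.

# Minimal sketch of an STI-from-speech pipeline of the kind described above.
# Assumptions (not from the paper): 8-band log-energy features as the
# time-to-frequency pre-processor, a one-hidden-layer network, and synthetic
# placeholder targets instead of true STI values.
import numpy as np
from scipy.signal import fftconvolve, stft

def reverberate(anechoic, rir):
    # Synthesise reverberant speech by convolving anechoic speech with a
    # room impulse response.
    return fftconvolve(anechoic, rir, mode="full")

def band_features(signal, fs, n_bands=8):
    # Assumed pre-processor: mean log energy in n_bands frequency bands of
    # the short-time Fourier transform (illustrative stand-in only).
    _, _, Z = stft(signal, fs=fs, nperseg=512)
    power = np.abs(Z) ** 2                      # (freq_bins, frames)
    bands = np.array_split(power, n_bands, axis=0)
    return np.array([np.log(b.mean() + 1e-12) for b in bands])

class FeedForwardNet:
    # One-hidden-layer feed-forward network trained by back-propagation
    # with a mean-squared-error loss, regressing a single STI value.
    def __init__(self, n_in, n_hidden=16, lr=0.01, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0, 0.1, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0, 0.1, (n_hidden, 1))
        self.b2 = np.zeros(1)
        self.lr = lr

    def forward(self, X):
        self.h = np.tanh(X @ self.W1 + self.b1)
        return self.h @ self.W2 + self.b2

    def train_step(self, X, y):
        y_hat = self.forward(X)
        err = y_hat - y.reshape(-1, 1)          # gradient of 0.5*MSE w.r.t. output
        dW2 = self.h.T @ err / len(X)
        db2 = err.mean(axis=0)
        dh = err @ self.W2.T * (1 - self.h ** 2)
        dW1 = X.T @ dh / len(X)
        db1 = dh.mean(axis=0)
        for p, g in ((self.W1, dW1), (self.b1, db1),
                     (self.W2, dW2), (self.b2, db2)):
            p -= self.lr * g
        return float((err ** 2).mean())

# Illustrative training loop on synthetic data; in practice the targets would
# be the STIs computed from the simulated impulse responses themselves, and
# the input would be recorded or anechoic speech rather than noise.
fs = 16000
rng = np.random.default_rng(1)
X, y = [], []
for _ in range(32):
    speech = rng.normal(size=fs)                       # stand-in for anechoic speech
    rt = rng.uniform(0.2, 2.0)                         # reverberation time in seconds
    t = np.arange(int(rt * fs)) / fs
    rir = rng.normal(size=t.size) * np.exp(-6.9 * t / rt)
    X.append(band_features(reverberate(speech, rir), fs))
    y.append(1.0 / (1.0 + rt))                         # placeholder target, not a real STI
X, y = np.array(X), np.array(y)

net = FeedForwardNet(n_in=X.shape[1])
for epoch in range(200):
    loss = net.train_step(X, y)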

Authors:
Affiliation:
AES Convention:
Paper Number:
Publication Date:
Subject:

