AES Convention Papers Forum

Combining Visual and Acoustic Modalities to Ease Speech Recognition by Hearing Impaired People

The research presented aims to demonstrate a system that facilitates speech training for hearing-impaired people. The engineered system combines visual and acoustic speech data acquisition and analysis modules. The Active Shape Model method is used to extract visual speech features from the shape and movement of the lips, while acoustic feature extraction involves mel-cepstral analysis. Artificial Neural Networks serve as the classifier; the extracted feature vectors combine both modalities of human speech. Additional experiments with degraded acoustic and/or visual information are carried out to test the system's robustness against various distortions affecting the signals.
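
As a rough illustration of the feature-level fusion the abstract describes, the sketch below (not the authors' implementation) uses mel-frequency cepstral coefficients to stand in for the mel-cepstral analysis, a placeholder vector to stand in for the Active Shape Model lip parameters, and scikit-learn's MLPClassifier in place of the paper's Artificial Neural Network. The synthetic data, feature dimensions, fusion scheme, and network topology are all assumptions.

```python
# Minimal audio-visual fusion sketch. Assumptions: MFCCs approximate the
# paper's mel-cepstral features, random vectors approximate the Active
# Shape Model lip parameters, and an MLP approximates the paper's ANN.
import numpy as np
import librosa
from sklearn.neural_network import MLPClassifier

def acoustic_features(signal, sr):
    # Mel-cepstral analysis stand-in: mean MFCC vector over the utterance.
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=13)
    return mfcc.mean(axis=1)

def visual_features(lip_shape_params):
    # Placeholder for ASM parameters describing lip shape and movement
    # (hypothetical 8-dimensional vector).
    return np.asarray(lip_shape_params)

def fuse(acoustic, visual):
    # Early (feature-level) fusion: concatenate both modalities.
    return np.concatenate([acoustic, visual])

# Toy data: two classes of synthetic "utterances" at different pitches.
sr = 16000
rng = np.random.default_rng(0)
X, y = [], []
for label, freq in enumerate([200.0, 400.0]):
    for _ in range(10):
        t = np.linspace(0, 1, sr, endpoint=False)
        signal = np.sin(2 * np.pi * freq * t) + 0.1 * rng.standard_normal(sr)
        a = acoustic_features(signal.astype(np.float32), sr)
        v = visual_features(rng.standard_normal(8))
        X.append(fuse(a, v))
        y.append(label)

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
clf.fit(np.array(X), y)
print("training accuracy:", clf.score(np.array(X), y))
```

Concatenating the modalities before classification, as sketched here, also makes the robustness experiments straightforward: degrading either modality amounts to perturbing the corresponding slice of the fused feature vector.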

Authors:
Affiliation:
AES Convention:
Paper Number:
Publication Date:
Subject:
