AES Journal Forum

A Robust and Computationally Efficient Speech/Music Discriminator

A new method for discriminating between speech and music signals is introduced. The strategy is based on the extraction of four features, whose values are combined linearly into a single parameter. This parameter is used to distinguish between the two kinds of signal. The method has achieved an accuracy above 99%, even for severely degraded and noisy signals. Moreover, the low dimensionality of the feature space, together with a very simple information-merging technique, results in remarkable robustness to new situations. The low computational complexity of the method makes it suitable for applications that demand real-time operation. Finally, excellent resolution in the segmentation of audio streams is achieved by manipulating the analyzed data appropriately.
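As a purely illustrative aid, the sketch below shows the general shape of the linear feature-combination step described in the abstract: several feature values are weighted, summed into one decision parameter, and compared against a threshold. The feature values, weights, threshold, and decision direction are all placeholders; the abstract does not name the paper's four features or its coefficients.

```python
import numpy as np

def classify_window(feature_values, weights, threshold):
    """Combine feature values linearly into a single decision parameter.

    The weights and threshold are hypothetical placeholders; the abstract
    above does not specify the values used in the paper.
    """
    parameter = float(np.dot(weights, feature_values))
    return "speech" if parameter > threshold else "music"

# Hypothetical example: four feature values from one analysis window.
features = np.array([0.42, 0.10, 0.73, 0.05])   # placeholder feature values
weights = np.array([0.40, 0.20, 0.30, 0.10])    # illustrative weights only
print(classify_window(features, weights, threshold=0.35))
```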

Authors:
Affiliation:
JAES Volume 54 Issue 7/8 pp. 571-588; July 2006
Publication Date: July 2006

No AES members have commented on this paper yet.

