For social robots to be truly successful, they must be able to communicate orally with humans, providing feedback and accepting commands. Social robots therefore need automatic speech recognition (ASR) tools that function across different users, languages, voice pitches, pronunciations, and speech rates, over a wide range of sound and noise levels. This paper describes different methodologies for voice activity detection and noise elimination used with ASR-based oral interaction on an affordable robot. Acoustically quasi-stationary environments are assumed, which, combined with the high background noise of the robot's microphones, makes ASR challenging. This work was performed in the context of the RAPP project, which aims to deliver a cloud repository of applications and services that can be utilized by heterogeneous robots to assist people with a range of disabilities. Results show that noise estimation and elimination techniques are necessary for successful ASR in environments with quasi-stationary noise.
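For readers unfamiliar with the class of techniques the abstract names, the following is a minimal illustrative sketch of a noise-estimation/elimination front end for quasi-stationary noise (basic spectral subtraction) combined with an energy-based voice activity detector. It is not the authors' implementation, and every function name, frame size, and threshold below is a hypothetical choice made for the example.

```python
# Illustrative sketch only: spectral subtraction under a quasi-stationarity
# assumption, plus an energy-based VAD. Not the method from the paper;
# all names and parameter values here are assumptions.
import numpy as np

def frame_signal(x, frame_len=512, hop=256):
    """Split a 1-D signal into overlapping frames (one frame per row)."""
    n_frames = 1 + max(0, (len(x) - frame_len) // hop)
    return np.stack([x[i * hop : i * hop + frame_len] for i in range(n_frames)])

def estimate_noise_spectrum(frames, n_noise_frames=10):
    """Assume the first few frames are speech-free and average their
    magnitude spectra as the quasi-stationary noise estimate."""
    spectra = np.abs(np.fft.rfft(frames[:n_noise_frames], axis=1))
    return spectra.mean(axis=0)

def spectral_subtract(frames, noise_mag, floor=0.02):
    """Subtract the noise magnitude estimate from each frame's spectrum,
    keep the noisy phase, and resynthesize the time-domain frames."""
    spec = np.fft.rfft(frames, axis=1)
    mag, phase = np.abs(spec), np.angle(spec)
    clean_mag = np.maximum(mag - noise_mag, floor * noise_mag)  # spectral floor
    return np.fft.irfft(clean_mag * np.exp(1j * phase), n=frames.shape[1], axis=1)

def energy_vad(frames, n_noise_frames=10, threshold_db=6.0):
    """Flag frames whose energy exceeds the noise floor by a fixed margin."""
    energy = 10 * np.log10(np.mean(frames ** 2, axis=1) + 1e-12)
    noise_floor = energy[:n_noise_frames].mean()
    return energy > noise_floor + threshold_db

if __name__ == "__main__":
    # Synthetic test signal: stationary noise plus a tone burst standing in
    # for speech between 0.4 s and 0.7 s.
    rng = np.random.default_rng(0)
    sr = 16000
    t = np.arange(sr) / sr
    noise = 0.05 * rng.standard_normal(sr)
    speech = np.where((t > 0.4) & (t < 0.7), 0.3 * np.sin(2 * np.pi * 220 * t), 0.0)
    frames = frame_signal(speech + noise)
    denoised = spectral_subtract(frames, estimate_noise_spectrum(frames))
    active = energy_vad(denoised)
    print(f"{active.sum()} of {len(active)} frames flagged as speech")
```

The quasi-stationarity assumption matters here: the noise spectrum is estimated once from speech-free frames and reused for the whole signal, which is only valid when the background noise changes slowly relative to the utterance.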
Authors:
Tsardoulias, Emmanouil; Thallas, Aristeidis G.; Symeonidis, Andreas L.; Mitkas, Pericles A.
Affiliations:
Centre of Research & Technology, Thermi, Thessaloniki, Greece; Department of Electrical and Computer Engineering, Aristotle University of Thessaloniki, Thessaloniki, Greece
JAES Volume 64 Issue 7/8 pp. 514-524; July 2016
Publication Date:
August 11, 2016