Many common consumer devices use short sound indications to signal various modes of their functionality, such as the start and the end of their operation. This can result in an intuitive auditory human-machine interaction, imputing semantic content to the sounds used. In this paper we investigate sound patterns mapped to "Start" and "End" of operation, and we explore whether the perception of these semantics relies on users' prior auditory training or on sound patterns that naturally convey the appropriate information. To this end, listening and machine learning tests were conducted. The obtained results indicate a strong relation between acoustic cues and semantics, with no prior knowledge needed for message conveyance.
Authors:
Drossos, Konstantinos; Kotsakis, Rigas; Pappas, Panos; Kalliris, George M.; Floros, Andreas
Affiliations:
Ionian University, Corfu, Greece; Aristotle University of Thessaloniki, Thessaloniki, Greece; Technological Educational Institute of Ionian Islands, Lixouri, Greece
AES Convention:
134 (May 2013)
Paper Number:
8812
Publication Date:
May 4, 2013
Subject:
Education and Semantic Audio