Because the human brain is well suited to detecting subtle patterns, this paper explores a novel transformation that maps numerical data into sound. In this research, data taken from head-related transfer functions were used to create physical objects (stainless-steel bells) whose acoustics were then presented to listeners. The technique is called acoustic sonification. Listeners could hear differences in the pitch and timbre of bells constructed from different datasets, while bells constructed from similar datasets sounded similar. Modulating the shape of a bell with a dataset can influence its acoustic spectrum in a way that produces audible differences even when there is no apparent visual difference. Acoustic sonification can thus take advantage of auditory pattern recognition.
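The core idea of modulating a bell's shape with a dataset can be sketched as follows. This is a minimal illustration of data-driven shape modulation, not the authors' actual fabrication pipeline; the function name, the tapering wall model, and all parameters (`base_radius`, `depth`, `n_points`) are assumptions introduced here for clarity.

```python
def modulated_bell_profile(data, base_radius=5.0, depth=0.2, n_points=100):
    """Perturb a bell's radial wall profile with normalized data values.

    Each data value nudges the local radius in or out, so different
    datasets yield differently shaped (and differently sounding) bells,
    while similar datasets yield similar shapes. Illustrative only.
    """
    lo, hi = min(data), max(data)
    span = (hi - lo) or 1.0
    normalized = [(d - lo) / span for d in data]  # rescale to [0, 1]
    profile = []
    for i in range(n_points):
        t = i / (n_points - 1)                 # 0 at the rim, 1 at the crown
        base = base_radius * (1.0 - 0.6 * t)   # simple tapering bell wall
        # sample the dataset along the height of the bell
        d = normalized[int(t * (len(data) - 1))]
        profile.append(base * (1.0 + depth * (d - 0.5)))
    return profile
```

Two different datasets produce two different profiles, which is the geometric change that, in the paper's physical bells, leads to audible spectral differences.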
Author: Stephen Barrass
Affiliation: University of Canberra, Canberra, Australia
JAES Volume 60 Issue 9 pp. 709-715; September 2012
Publication Date: October 9, 2012