Sonification is the systematic representation of data using sound, as in text-to-speech systems, color readers, Geiger counters, acoustic radars, and MIDI synthesizers. This paper surveys existing sonification systems and proposes a taxonomy of algorithms and devices. The sonification process requires an artificial mapping between two sensory modalities, using a model based on either psychoacoustics or artificial heuristics. In the former, the paradigm exploits the listener's natural discrimination of source spatial parameters (for instance, distance, azimuth, and elevation). In the latter, the paradigm creates an artificial match between graphical and auditory cues. Artificial sonification uses nonspatial characteristics of the sound, such as frequency, brightness or timbre, formants, saturation, and time intervals, which are not related to the physical characteristics or parameters of objects or their surroundings.
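The artificial-mapping paradigm described above can be sketched with a minimal pitch-mapping example: a scalar data value is mapped onto frequency, one of the nonspatial sound characteristics the abstract lists. The function name, frequency range, and logarithmic interpolation are illustrative choices for this sketch, not details taken from the paper.

```python
import math

def map_value_to_frequency(value, vmin, vmax, fmin=220.0, fmax=880.0):
    """Artificial sonification sketch: map a data value onto pitch.

    The two-octave range 220-880 Hz (A3 to A5) is an arbitrary
    illustrative choice, not a recommendation from the paper.
    """
    # Clamp the value to its valid range, then normalize to [0, 1].
    t = (min(max(value, vmin), vmax) - vmin) / (vmax - vmin)
    # Interpolate on a logarithmic scale so equal data steps are
    # perceived as equal pitch intervals.
    return fmin * (fmax / fmin) ** t

# Example: sonify a small data series as a sequence of pitches.
data = [0.0, 2.5, 5.0, 7.5, 10.0]
freqs = [map_value_to_frequency(v, 0.0, 10.0) for v in data]
```

A real system would then synthesize each frequency as a tone; this sketch stops at the mapping itself, which is the step the abstract characterizes as the core of artificial sonification.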
Authors:
Sanz, Pablo Revuelta; Mezcua, Belén Ruiz; Pena, José M. Sánchez; Walker, Bruce N.
Affiliations:
Carlos III University of Madrid, Leganés, Spain; Georgia Tech, Atlanta, GA, USA (see document for exact affiliation information)
JAES Volume 62 Issue 3 pp. 161-171; March 2014
Publication Date:
March 20, 2014