To aid blind individuals in navigation and orientation, a new travel aid uses 3D scene sonification to present information about the environment through nonverbal audio. The scene model comprises two classes of objects: obstacles and planes. The algorithm combines scene image segmentation, personalized spatial audio, musical tones, and sonar-like sound patterns. Individually measured head-related transfer functions were used to give users the impression that sounds originate from the locations of the sonified scene elements. The segmented, parametric scene description overcomes the sensory mismatch between visual and auditory perception. In a pilot study with both blind and sighted volunteers, participants were able to use the prototype for spatial orientation and obstacle avoidance after a few minutes of training, attaining 90% accuracy in estimating the direction and depth of obstacles.
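The sketch below is a minimal, hypothetical illustration of the general idea described in the abstract: a segmented scene element is rendered as a short, spatialized, sonar-like "ping" whose direction and distance are encoded in the sound. It is not the authors' algorithm; in particular, a crude interaural time/level difference (ITD/ILD) approximation stands in for the individually measured HRTFs used in the paper, and all function names, parameters, and obstacle coordinates are invented for illustration.

```python
# Hypothetical sketch: sonar-like, spatialized sonification of one scene element.
# The paper uses measured HRTFs and a full segmented 3D scene model; here a simple
# ITD/ILD panning model is used instead, purely to illustrate the concept.

import numpy as np

FS = 44100  # sample rate in Hz


def spatialized_ping(azimuth_deg, depth_m, freq_hz=880.0, ping_len=0.08):
    """Return a stereo 'ping' whose timing and level hint at direction and depth.

    azimuth_deg: obstacle direction; negative = left, positive = right (assumed).
    depth_m:     obstacle distance; farther objects sound quieter and arrive later,
                 mimicking a sonar-like scanning pattern.
    """
    t = np.arange(int(FS * ping_len)) / FS
    tone = np.sin(2 * np.pi * freq_hz * t) * np.exp(-t / 0.02)  # decaying musical tone

    # Crude ILD: pan by azimuth (a real system would convolve with an HRTF pair).
    pan = np.clip(azimuth_deg / 90.0, -1.0, 1.0)
    left = tone * (1.0 - max(pan, 0.0))
    right = tone * (1.0 + min(pan, 0.0))

    # Crude ITD: delay the ear farther from the source by up to ~0.6 ms.
    itd_samples = int(abs(pan) * 0.0006 * FS)
    if pan > 0:    # source on the right -> left ear hears it later
        left = np.concatenate([np.zeros(itd_samples), left])[: len(tone)]
    elif pan < 0:  # source on the left -> right ear hears it later
        right = np.concatenate([np.zeros(itd_samples), right])[: len(tone)]

    # Depth cue: attenuate with distance and add a sonar-like round-trip delay
    # (100 ms of silence per metre, an arbitrary choice for this sketch).
    gain = 1.0 / max(depth_m, 0.5)
    silence = np.zeros((int(FS * 0.1 * depth_m), 2))
    stereo = np.stack([left, right], axis=1) * gain
    return np.concatenate([silence, stereo])


# Example: an obstacle 30 degrees to the right at a depth of 2 m.
signal = spatialized_ping(azimuth_deg=30.0, depth_m=2.0)
```

In a complete system along the lines of the paper, one such ping would be generated per segmented obstacle or plane, with the panning stage replaced by convolution with the user's measured HRTF for the element's direction.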
Authors:
Bujacz, Michal; Skulimowski, Piotr; Strumillo, Pawel
Affiliation:
Lodz University of Technology, Lodz, Poland
JAES Volume 60 Issue 9 pp. 696-708; September 2012
Publication Date:
October 9, 2012