In our daily lives, we usually perceive an event through more than one sensory modality (e.g., vision, hearing, touch). Multimodal integration and interaction therefore play an important role both when we use objects and when we recognize events in our environment. A virtual environment (VE) is a computer simulation of a realistic-looking, interactive world. VEs should take the multisensory nature of humans into account and communicate with the user not only through vision but also through other modalities; besides vision, hearing and touch are the most commonly used communication channels. Recently, a variety of products with additional tactile input and output capabilities have been developed (e.g., the Apple iPhone and other touch-screen devices, the Nintendo Wii). Some of these devices open up new possibilities for interacting with a computer, including through the auditory modality, and binaural synthesis and rendering are becoming key technologies for multimedia products. Virtual environments are no longer limited to academic research; they have commercial applications, particularly in the medical, gaming, and entertainment industries. The quality of VEs is therefore becoming increasingly important, and user interaction with a VE is a key factor in how that quality is perceived. Several studies have examined the quality of displays, input and output devices (for different modalities), and software and hardware issues; however, multimodal user interaction should also be examined. This paper focuses on the parameters that influence the quality of audio-tactile VEs.
Author: Altinsoy, M. Ercan
Affiliation: Dresden University of Technology, Chair of Communication Acoustics, Dresden, Germany
JAES Volume 60 Issue 1/2 pp. 38-46; January 2012
Publication Date: March 20, 2012