A software system has been developed for producing virtual audio-visual performances. An animated model plays MIDI music, with its fingers synchronized to each note and the sound synthesized by physical modeling. The room's impulse response is calculated with ray-tracing and image-source methods and combined with the listener's head-related transfer function (HRTF) to produce an auralized, three-dimensional sound sensation in the visualized environment.
Authors:
Takala, T.; Hänninen, R.; Välimäki, V.; Savioja, L.; Huopaniemi, J.; Huotilainen, T.
Affiliation:
Helsinki University of Technology, Espoo, Finland
AES Convention:
100 (May 1996)
Paper Number:
4229
Publication Date:
May 1, 1996
Subject:
Auralization and Virtual Reality