This paper describes an interactive multichannel audio system linked to the Virtual Reality Modeling Language (VRML). In this system, dry sources, source positions, and simulated room impulse responses are transmitted together with a room shape composed in VRML. The multichannel stereo sound is resynthesized at the receiver, and the graphics of the room are displayed on the screen. The reproduced sound changes automatically in real time as the listener moves the viewpoint in the VRML graphics. The required number of channels and the method of resynthesis are discussed in detail. Synchronization of picture and multichannel sound effectively increases the sensation of reality.
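The full paper is not reproduced here, but the resynthesis step the abstract describes (convolving a dry source with one simulated room impulse response per loudspeaker channel) can be sketched roughly as follows. All names and the toy signals are illustrative, not taken from the paper:

```python
import numpy as np

def resynthesize(dry, channel_irs):
    """Convolve a dry source signal with one simulated room impulse
    response per output channel, producing a multichannel signal.
    In the system described, the IRs would be reselected as the
    listener's viewpoint moves in the VRML scene."""
    # Common output length: full convolution with the longest IR.
    n = len(dry) + max(len(ir) for ir in channel_irs) - 1
    out = np.zeros((len(channel_irs), n))
    for c, ir in enumerate(channel_irs):
        y = np.convolve(dry, ir)   # linear convolution for channel c
        out[c, :len(y)] = y        # zero-pad shorter channels
    return out

# Hypothetical two-channel example: a short click through two toy IRs.
dry = np.array([1.0, 0.5])
irs = [np.array([1.0, 0.0, 0.3]), np.array([0.0, 1.0])]
out = resynthesize(dry, irs)
print(out.shape)  # (2, 4)
```

In a real-time implementation this direct convolution would typically be replaced by block-based FFT convolution, with the per-channel impulse responses cross-faded as the viewpoint changes.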
Authors:
Komiyama, Setsu; Okubo, Hiroyuki; Ono, Kazuho; Hiyama, Koichiro; Asayama, Hiroshi
Affiliations:
NHK Science and Technical Research Laboratories, Setagaya, Tokyo, Japan; Timeware Corporation, Shinagawa, Tokyo, Japan
AES Convention:
109 (September 2000)
Paper Number:
5245
Publication Date:
September 1, 2000
Subject:
Multichannel Sound