Recent advances in the research and development of virtual acoustic environments allow the user to interact with the virtual space, usually by controlling their position within the virtual environment via a joystick or other movement controls. More recently, virtual acoustic environments have been developed that allow the user to physically move about the virtual space and hear the resulting changes in their own sound (hand claps, speech, singing) according to the geometry and room acoustics of the acoustically rendered environment. This paper reports on the design and implementation of the Virtual Singing Studio: a real-time interactive loudspeaker-based room acoustics simulation for musical performance. The process of designing and implementing such a ‘vocally interactive’ virtual acoustic environment is examined and compared to ‘off-line’ auralization techniques. Whereas others have used synthetic reverberation techniques, the Virtual Singing Studio is based on real-time convolution of Ambisonic B-format room impulse responses measured in an existing performance venue. Furthermore, particular challenges arise at all stages of the process because the eventual user, the singer, is at once sound source and sound receiver. These challenges are outlined and potential solutions explored. The paper also reports on objective testing of the simulation and initial subjective evaluations by singers.
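The core operation the abstract describes, convolving a dry (anechoic) input signal with a measured four-channel B-format room impulse response, can be sketched as follows. This is an illustrative offline sketch only, not the paper's real-time implementation (which would use low-latency partitioned convolution); the function name and array layout are assumptions.

```python
import numpy as np

def auralize_bformat(dry, bformat_rir):
    """Convolve a mono dry signal with a first-order Ambisonic
    B-format room impulse response (4 channels: W, X, Y, Z).

    dry         : 1-D array, the anechoic input signal
    bformat_rir : (4, M) array, one measured RIR per B-format channel

    Returns a (4, len(dry) + M - 1) array of B-format output,
    ready for decoding to a loudspeaker array.
    """
    dry = np.asarray(dry, dtype=float)
    # Convolve the same dry signal with each channel's RIR so the
    # spatial information captured in the measurement is preserved.
    return np.stack([np.convolve(dry, bformat_rir[ch]) for ch in range(4)])
```

A real-time version would process the microphone signal block by block, splitting each long RIR into uniform partitions and overlap-adding frequency-domain products, so that the singer hears the simulated room response with minimal latency.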
Brereton, Jude; Murphy, Damian; Howard, David
Affiliation: AudioLab, Department of Electronics, University of York, UK
AES Conference: UK 25th Conference: Spatial Audio in Today’s 3D World (March 2012)
Paper Number: 09
Publication Date: March 25, 2012
Subject: Synthesis and Simulation