AES Journal Forum
Synthesis of Spatially Extended Virtual Source with Time-Frequency Decomposition of Mono Signals
Auditory displays, driven by nonauditory data, are often used to present a sound scene to a listener. Typically, the sound field places sound objects at different locations, but the scene becomes aurally richer if the perceived sonic objects have a spatial extent (size); this is called volumetric virtual coding. Previous research in virtual-world Directional Audio Coding has shown that spatial extent can be synthesized from monophonic sources by applying a time-frequency-space decomposition, i.e., randomly distributing the time-frequency bins of the source signal across space. That technique does not guarantee a stable perceived size, however, and the timbre can degrade. This study explores how to optimize volumetric coding in terms of timbral and spatial perception. The suggested approach for most types of audio uses an STFT window size of 1024 samples and then distributes the frequency bands, from lowest to highest, using the Halton sequence. The results from two formal listening experiments are presented.
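As a rough illustration of the band-distribution idea described in the abstract, the sketch below maps STFT frequency bins, ordered lowest to highest, to spatial positions using a base-2 Halton (van der Corput) sequence. This is a minimal assumption-laden sketch, not the paper's actual implementation: the function names, the one-dimensional pan range [-1, 1], and the choice of base are illustrative only.

```python
def halton(index, base=2):
    """Radical-inverse (Halton) value in [0, 1) for a 1-based index."""
    f, r = 1.0, 0.0
    while index > 0:
        f /= base
        r += f * (index % base)
        index //= base
    return r

def bin_positions(n_bins, base=2):
    """Assign each STFT frequency bin a deterministic, low-discrepancy
    spatial position in [-1, 1] (here a simple pan axis, an assumption)."""
    return [2.0 * halton(i + 1, base) - 1.0 for i in range(n_bins)]

# A 1024-sample STFT window yields 513 one-sided frequency bins.
positions = bin_positions(513)
```

Because the Halton sequence is low-discrepancy rather than random, successive bands land at well-spread positions that stay fixed from frame to frame, which is one plausible reason such a distribution could stabilize the perceived source size compared to random bin scattering.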