AES Journal Forum

Synthesis of Spatially Extended Virtual Source with Time-Frequency Decomposition of Mono Signals

Auditory displays driven by non-auditory data are often used to present a sound scene to a listener. Typically, the sound field places sound objects at distinct locations, but the scene becomes aurally richer if the perceived sonic objects also have a spatial extent (size); synthesizing this extent is called volumetric virtual coding. Previous research on virtual-world Directional Audio Coding has shown that spatial extent can be synthesized from a monophonic source by applying a time-frequency-space decomposition, i.e., distributing the time-frequency bins of the source signal to random spatial positions. This technique does not guarantee a stable perceived size, and the timbre can degrade. This study explores how to optimize volumetric coding in terms of timbral and spatial perception. For most types of audio, the suggested approach uses an STFT window of 1024 samples and distributes the frequency bands, from lowest to highest, across space using a Halton sequence. Results from two formal listening experiments are presented.
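The abstract outlines the core mechanism: decompose the mono signal with an STFT (1024-sample window) and route each frequency band to a spatial position chosen by a Halton sequence. The sketch below illustrates that idea in Python; it is not the authors' implementation, and the function names (`spread_mono_source`, `halton`), the eight-loudspeaker layout, and the per-bin routing rule are assumptions made only for demonstration. The paper's actual DirAC-based rendering and band grouping may differ.

```python
# Illustrative sketch (not the paper's code): spread the STFT bins of a mono
# signal across a ring of virtual loudspeakers, choosing each bin's channel
# from a 1-D Halton (van der Corput, base-2) sequence.
import numpy as np
from scipy.signal import stft, istft

def halton(n, base=2):
    """First n values of the 1-D Halton (van der Corput) sequence in [0, 1)."""
    seq = np.zeros(n)
    for i in range(n):
        f, r, k = 1.0, 0.0, i + 1
        while k > 0:
            f /= base
            r += f * (k % base)
            k //= base
        seq[i] = r
    return seq

def spread_mono_source(x, fs, n_speakers=8, nperseg=1024):
    """Distribute the frequency bands of a mono signal x over n_speakers channels.

    Each STFT bin (lowest to highest) is routed to the loudspeaker whose index is
    drawn from the Halton sequence, so bins are spread quasi-uniformly in space.
    """
    f, t, X = stft(x, fs=fs, nperseg=nperseg)        # time-frequency decomposition
    idx = (halton(len(f)) * n_speakers).astype(int)  # one direction per frequency bin
    out = []
    for ch in range(n_speakers):
        Xc = np.where(idx[:, None] == ch, X, 0.0)    # keep only this channel's bins
        _, xc = istft(Xc, fs=fs, nperseg=nperseg)
        out.append(xc)
    return np.stack(out)                             # shape: (n_speakers, n_samples)

if __name__ == "__main__":
    fs = 48000
    x = np.random.randn(fs)           # one second of noise as a stand-in source
    y = spread_mono_source(x, fs)
    print(y.shape)                    # roughly (8, 48000), depending on framing
```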

Open Access

JAES Volume 62 Issue 7/8 pp. 467-484; July 2014

This paper is Open Access, which means you can download it for free.

No AES members have commented on this paper yet.
