Immersive audio for interactive gaming is necessarily processed and mixed in real time as it is being rendered on the game audio playback platform. It is generally assumed that music and movie soundtracks require no comparable processing during playback because listeners typically provide no real-time input that might affect the final rendering. In reality, pre-packaged audio is being delivered to music and movie playback platforms in increasingly diverse forms. The result is that mismatches between the spatial audio format, bit depth, and frequency range of the content and of the playback system pose an emerging problem for which sophisticated playback processing may be an appropriate response. This paper presents a formal statement of the mismatch problem and proposes a unified solution using frequency-domain processing to perform "partial unmixing" of the pre-packaged content. Lastly, we show how this can enable a new music/movie listening experience rooted in the concept of "personalized audio."
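The paper's own "partial unmixing" algorithm is not given in the abstract, so the following is only a hypothetical illustration of the general idea it names: transform a pre-mixed signal to the frequency domain, rescale the bins belonging to one component of the mix, and resynthesize, leaving the rest of the mix untouched. The signal, bin indices, and gain are invented for the example; a real system would use a short-time Fourier transform and far more sophisticated component estimation.

```python
# Hypothetical sketch of frequency-domain "partial unmixing" (not the
# paper's algorithm): attenuate one tonal component inside an already
# mixed signal by rescaling its DFT bins, then resynthesize.
import cmath
import math

def dft(x):
    # Naive discrete Fourier transform (O(N^2), for illustration only).
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    # Inverse DFT; the input here is conjugate-symmetric, so keep the real part.
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * n / N) for k in range(N)).real / N
            for n in range(N)]

N = 64
# Pre-packaged "mix": two sine components, at bins 4 and 12.
x = [math.sin(2 * math.pi * 4 * n / N) + math.sin(2 * math.pi * 12 * n / N)
     for n in range(N)]

X = dft(x)
# "Partially unmix": turn the bin-12 component down 6 dB (gain 0.5)
# while the bin-4 component passes through unchanged.
gain = 0.5
for k in (12, N - 12):  # positive and negative frequency bins
    X[k] *= gain

y = idft(X)  # resynthesized mix with one component attenuated
```

The same structure (analyze, selectively rescale, resynthesize) is what allows a playback platform to adapt pre-mixed content to its own spatial format or to listener preferences, which is the "personalized audio" notion the abstract closes with.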
Author: Dolson, Mark
Affiliation: Creative Advanced Technology Center, Scotts Valley, CA
AES Convention: 117 (October 2004)
Paper Number: 6295
Publication Date: October 1, 2004
Subject: Audio Recording and Reproduction