Advanced audio processing for interactive media is in demand across a wide range of applications and devices. Interactive media contexts tend to impose both device-specific and style-specific constraints. The goal of the present research is to develop a robust approach to interactive audio that persists across diverse media contexts. This project adopts a structural approach to the relationship between interactive sounds and interactive graphical media, which we refer to as a model-to-model architecture. Sound production is decoupled from specific media styles, enabling abstractions based on feature analysis of simulation output that can be adapted to a variety of media devices. The identifying metaphor for this approach is playing with sounds through graphical representations and interactive scenarios.
Authors:
Choi, Insook; Bargar, Robin
Affiliations:
City University of New York, Brooklyn, NY, USA; Columbia College Chicago, Chicago, IL, USA
AES Convention:
131 (October 2011)
Paper Number:
8491
Publication Date:
October 19, 2011
Subject:
Applications in Audio