This paper proposes a new method to generate audio in the context of interactive animations driven by a physics engine. Our approach aims to bridge the gap between direct playback of audio recordings and physically-based synthesis by retargeting audio grains extracted from the recordings according to the output of a physics engine. In an off-line analysis task, we automatically segment audio recordings into atomic grains. The segmentation depends on the type of contact event, and we distinguish between impulsive events, i.e. impacts or breaking sounds, and continuous events, i.e. rolling or sliding sounds. We segment recordings of continuous events into sinusoidal and transient components, which we encode separately. A technique similar to matching pursuit is used to represent each original recording as a compact series of audio grains. During interactive animations, the grains are triggered individually or in sequence according to parameters reported from the physics engine and/or user-defined procedures. A first application is simply to reduce the size of the original audio assets. More importantly, our technique allows for synthesizing non-repetitive sounding events and provides extended authoring capabilities.
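The abstract mentions a technique "similar to matching pursuit" for representing a recording as a compact series of grains. As an illustration only, and not the authors' exact algorithm, the core loop of classic matching pursuit can be sketched as follows: repeatedly pick the dictionary atom (here, a grain) most correlated with the current residual and subtract its contribution. All names and parameters below are hypothetical.

```python
import numpy as np

def matching_pursuit(signal, grains, n_iters=10):
    """Greedily approximate `signal` as a sparse sum of dictionary `grains`.

    Illustrative sketch of a matching-pursuit-style decomposition (not the
    paper's exact method). Each row of `grains` is a unit-norm atom the same
    length as `signal`. At every step we select the atom best correlated
    with the residual and remove its contribution.
    """
    residual = signal.astype(float).copy()
    selection = []                       # list of (grain_index, coefficient)
    for _ in range(n_iters):
        corr = grains @ residual         # correlation of every atom with the residual
        k = int(np.argmax(np.abs(corr))) # best-matching grain
        coef = float(corr[k])
        selection.append((k, coef))
        residual -= coef * grains[k]     # subtract the chosen grain's contribution
    return selection, residual

# Toy usage: a trivial orthonormal "grain" dictionary and a signal
# built from two of its atoms; two iterations recover both exactly.
g = np.eye(4)
x = 3.0 * g[1] + 1.5 * g[3]
sel, res = matching_pursuit(x, g, n_iters=2)
```

With an orthonormal dictionary the residual shrinks to zero once every contributing atom has been selected; real grain dictionaries are redundant and non-orthogonal, so the loop is typically stopped by an error threshold instead of a fixed iteration count.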
Authors:
Picard, Cécile; Tsingos, Nicolas; Faure, François
Affiliations:
INRIA Sophia-Antipolis, Sophia-Antipolis, France; INRIA Rhône-Alpes, Grenoble, France; Université de Grenoble and CNRS, Grenoble, France
AES Conference:
35th International Conference: Audio for Games (February 2009)
Paper Number:
25
Publication Date:
February 1, 2009
Subject:
Audio for Games: Retargeting Example Sounds to Interactive Physics-Driven Animations