This paper presents a narrative content-driven virtual reality (VR) experiment using novel biosensing technology to evaluate emotional response to a complex, layered soundscape that includes discrete and ambient sound events, music, and speech. Stimuli were presented in a spatialized versus mono audio format to determine whether head-tracked spatial audio exerts an effect on physiologically measured emotional response. The extent to which a listener's sense of immersion in a VR environment can be increased by the spatial characteristics of the audio is also examined, through analysis of both self-reported immersion scores and physical movement data. Finally, the study explores the relationship between the creators' own intentions for emotion elicitation within the stimulus material and the recorded emotional responses that matched those intentions, in both the spatialized and non-spatialized case. The results provide evidence that spatial audio can significantly affect emotional response in Immersive Virtual Environments (IVEs). In addition, self-reported immersion metrics favour the spatial audio experience over the non-spatial version, while physical movement data shows increased user intention and focused localization in the spatial versus non-spatial audio case. Finally, strong correlations were found between the creators' intended emotions and the emotional responses recorded from participants.
Warp, Richard; Zhu, Michael; Kiprijanovska, Ivana; Wiesler, Jonathan; Stafford, Scot; Mavridou, Ifigeneia
Affiliations: Pollen Music Group, San Francisco, CA, USA; emteq labs, Sussex Innovation Centre, Brighton, UK (see paper for exact affiliation information)
AES Convention: 152 (May 2022) Paper Number: 10590
Publication Date: May 2, 2022
Subject: Extended Reality Audio