In conventional sound design, the audio signal in virtual reality applications is often rendered as a static stereophonic signal, accompanied by a visual signal that allows for interactive behavior such as looking around. In the current test, the influence of spatial offset between the audio and visual signals is investigated using reaction-time measurements in a word recognition task. The audio-visual offset is introduced by presenting the video at horizontal offset angles of up to ±21° while the audio remains static and central. These measurements are compared with reaction times from a test in which the audio and visual signals are presented at the same angle. Results show that audio-visual offsets between 10° and 20° cause significant differences in reaction time compared with spatially matched presentation.
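As a rough illustration of the kind of analysis such a design implies, the sketch below compares per-participant reaction times in each offset condition against the spatially matched baseline using a paired t-test. The offset angles, sample size, and reaction-time values here are invented for demonstration only; the paper's actual stimuli, conditions, and statistics are described in the full text.

```python
# Illustrative sketch (not the authors' analysis code): comparing word-recognition
# reaction times between spatially matched and audio-visually offset conditions
# with a paired t-test. All numbers below are made up for demonstration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

offset_angles_deg = [7, 14, 21]   # hypothetical horizontal video offsets
n_participants = 20               # hypothetical sample size

# Simulated per-participant mean reaction times (seconds) in the matched condition;
# a small slowdown is injected at larger offsets purely for illustration.
matched_rt = rng.normal(0.65, 0.05, n_participants)

for angle in offset_angles_deg:
    offset_rt = matched_rt + rng.normal(0.002 * angle, 0.02, n_participants)
    t, p = stats.ttest_rel(offset_rt, matched_rt)
    delta_ms = np.mean(offset_rt - matched_rt) * 1000
    print(f"offset {angle:2d}°: mean ΔRT = {delta_ms:+.1f} ms, "
          f"t({n_participants - 1}) = {t:.2f}, p = {p:.3f}")
```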
Authors:
Stenzel, Hanne; Jackson, Philip J. B.; Francombe, Jon
Affiliations:
University of Surrey, Guildford, UK; BBC Research and Development, Salford, UK
AES Conference:
2018 AES International Conference on Audio for Virtual and Augmented Reality (August 2018)
Paper Number:
P2-2
Publication Date:
August 11, 2018