This study proposes a computationally efficient binaural rendering method for three-dimensional audio spaces. Instead of directly convolving a head-related impulse response (HRIR) with each sound object's signal in that object's direction, only the HRIRs of pre-defined representative directions are convolved with signals that have been panned to and summed at the representative positions. The panning applies appropriate time shifts and gain adjustments to the original source signals, which approximately preserves the source waveforms in the synthesized sound images. We compared the resulting subjective and objective reproduction quality across various arrangements of representative HRIR directions, using audio objects on the horizontal plane. The investigations indicated that the azimuth difference between representative HRIRs can be around 60°, from which we expect a reduction in computational complexity of approximately 50%–70% when 20–200 audio objects are rendered.
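The signal flow the abstract describes lends itself to a compact illustration. Below is a minimal NumPy/SciPy sketch of the idea, not the authors' implementation: the sine/cosine gain law, the delay-per-degree constant, and the nearest-two-direction selection (azimuth wrap-around ignored) are placeholder assumptions, and `render`, `pan_to_representatives`, and `rep_hrirs` are hypothetical names. What the sketch does preserve is the structure described above: per-object panning with gains and time shifts onto representative-direction buses, followed by a single HRIR convolution pair per representative direction.

```python
import numpy as np
from scipy.signal import fftconvolve


def _mix(a, b):
    """Zero-pad two signals to equal length and sum them."""
    n = max(len(a), len(b))
    out = np.zeros(n)
    out[:len(a)] += a
    out[:len(b)] += b
    return out


def pan_to_representatives(sig, az, rep_az, fs, delay_per_deg=5e-6):
    """Split one source between its two nearest representative azimuths,
    using a sine/cosine gain law and an integer-sample time shift
    proportional to the angular offset (both assumed placeholder models)."""
    rep_az = np.asarray(rep_az, dtype=float)
    nearest = np.argsort(np.abs(rep_az - az))[:2]          # two nearest directions
    lo, hi = sorted(nearest, key=lambda i: rep_az[i])
    span = rep_az[hi] - rep_az[lo]
    frac = 0.0 if span == 0 else float(np.clip((az - rep_az[lo]) / span, 0.0, 1.0))
    gains = {lo: np.cos(0.5 * np.pi * frac), hi: np.sin(0.5 * np.pi * frac)}
    panned = {}
    for i, g in gains.items():
        shift = int(round(abs(az - rep_az[i]) * delay_per_deg * fs))
        panned[i] = g * np.pad(sig, (shift, 0))            # delay by prepending zeros
    return panned


def render(objects, rep_az, rep_hrirs, fs=48000):
    """objects: list of (signal, azimuth_deg) pairs.
    rep_hrirs[i]: (left_hrir, right_hrir) for representative direction i."""
    # 1) Pan every object onto the representative-direction buses.
    buses = {}
    for sig, az in objects:
        panned = pan_to_representatives(np.asarray(sig, dtype=float), az, rep_az, fs)
        for i, x in panned.items():
            buses[i] = _mix(buses.get(i, np.zeros(0)), x)
    # 2) Convolve each bus once per ear: the HRIR convolution cost now scales
    #    with the number of representative directions, not the number of objects.
    left, right = np.zeros(0), np.zeros(0)
    for i, bus in buses.items():
        hl, hr = rep_hrirs[i]
        left = _mix(left, fftconvolve(bus, hl))
        right = _mix(right, fftconvolve(bus, hr))
    return left, right
```

With N objects and K representative directions, the sketch performs K HRIR convolution pairs instead of N, so the saving grows with N/K, consistent with the reported 50%–70% complexity reduction for 20–200 objects.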
Authors:
Nishiguchi, Masayuki; Mizutani, Yuki; Watanabe, Kanji; Abe, Koji; Ishikawa, Tomokazu; Enomoto, Seigo
Affiliations:
Akita Prefectural University, Japan; Akita Prefectural University, Japan; Akita Prefectural University, Japan; Akita Prefectural University, Japan; Panasonic Holdings Corporation, Japan; Panasonic Holdings Corporation, Japan
AES Convention:
153 (October 2022)
Paper Number:
10627
Publication Date:
October 19, 2022
Subject:
Spatial Audio