With the growing popularity of audiophile headphones over the past decade, the need for mixing over headphones is on the rise. Studio engineers rely on headphones as a critical tool for checking their mixes before publishing them. As Dolby Atmos and surround-sound music regain popularity, there is also an increasing need for multi-channel speaker setups and associated gear in the studio to produce music in these formats. Such systems are extremely expensive and time-consuming to set up. In this engineering brief, we present virtual studio production tools for mixing and monitoring Atmos and multichannel sound with personalized head-related transfer functions (HRTFs). This paper discusses in detail how the acoustics of the studio, including the speaker and headphone responses, are captured accurately for a truly immersive experience. The acoustic fingerprint of the studio is then integrated with personalized HRTFs predicted by machine learning algorithms that use an ear image as input. Such novel tools will bring the power of personalized spatial audio and Dolby Atmos production into the hands of millions of at-home mixing engineers and producers.
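The core rendering idea the abstract describes, convolving each virtual speaker feed with a binaural (left/right) impulse response that captures the room and the listener's HRTF, then summing into a two-channel headphone signal, can be sketched as follows. This is a minimal illustration under assumed inputs, not the authors' implementation: the `binauralize` function and the random placeholder HRIRs are hypothetical stand-ins for measured or ML-predicted responses.

```python
import numpy as np
from scipy.signal import fftconvolve

def binauralize(channels, hrirs):
    """Render a multichannel mix to binaural stereo.

    channels : list of 1-D arrays, one signal per virtual speaker.
    hrirs    : list of (taps, 2) arrays, the (left, right) impulse
               response for each speaker position (hypothetical data
               here; in practice these would be measured or predicted).
    Returns an (n, 2) array: left and right headphone signals.
    """
    # Output length of a full linear convolution: len(sig) + taps - 1.
    n = max(len(sig) + h.shape[0] - 1 for sig, h in zip(channels, hrirs))
    out = np.zeros((n, 2))
    for sig, hrir in zip(channels, hrirs):
        for ear in range(2):
            y = fftconvolve(sig, hrir[:, ear])  # convolve, then sum per ear
            out[:len(y), ear] += y
    return out

# Toy example: two "speaker" channels and placeholder 64-tap HRIRs.
rng = np.random.default_rng(0)
channels = [rng.standard_normal(480), rng.standard_normal(480)]
hrirs = [rng.standard_normal((64, 2)) * 0.1 for _ in channels]
binaural = binauralize(channels, hrirs)  # shape (543, 2): 480 + 64 - 1 samples
```

In a real virtual-studio tool the HRIRs would encode both the studio's acoustic fingerprint and the listener's personalized HRTF, and the convolution would run in real time in partitioned blocks rather than offline as shown here.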
Authors:
Sunder, Kaushik; Jain, Sunder
Affiliation:
Embody, San Mateo, CA, USA
AES Convention:
152 (May 2022)
eBrief:
674
Publication Date:
May 2, 2022
Subject:
Binaural Audio