JOURNAL FEATURES
A quick guide to recent selected AES Journal features
A working group of the AES Technical Committee on Acoustics and Sound Reinforcement recently published its report, "Understanding and managing sound exposure and noise pollution at outdoor events." Adam Hill and Elena Shabalina, who lead the Technical Committee that produced the report, summarize its key aspects in this short feature. The report presents the current state of affairs in outdoor event-related sound and noise, with two principal areas of investigation: sound exposure on-site and noise pollution off-site.
During the recent “Virtual Vienna” AES Convention, held online during the COVID crisis, research and development engineers from around the world presented papers and e-briefs on fascinating aspects of transducer systems. Of particular note was the interest in micro transducers, especially MEMS (micro-electro-mechanical systems) technology for loudspeakers, headphones, and microphones. A couple of papers dealt with DIY techniques for loudspeaker arrays and headphones, and we also learned about remarkable new metamaterials designed to deliver extreme absorption in loudspeaker enclosures.
During the extremely successful AES Virtual Vienna convention, held online in June in place of the planned in-person event, 360° audio and VR production were key themes. In this feature, Francis Rumsey summarizes a panel discussion on binaural audio, chaired by Tom Ammerman, and a tutorial on “Sound for Extreme 360° Productions” given by Martin Rieger (see the AES Live link in this Inside Track), drawing out the main threads of these discussions: work with commercial spatialization tools, personalization, reality versus believability, and whether there is a perfect 3D microphone.
A number of significant challenges arise when engineering audio systems and processes for extended reality applications. Authors of papers presented at the recent AVAR conference have begun to find ways of representing the acoustics of virtual environments more accurately, so that sounds from the objects, characters, and participants within them are perceived in a more believable way. Interestingly, there is evidence that the more accurately the acoustics are rendered, the less bothered people are by the differences between real and virtual sounds. There is also the problem of competition for attention in mixed reality environments crowded with stimuli that the user may need to know about.