Each month an industry expert highlights a topic of importance to the AES community. Listen, Learn, and Connect with advances in technology and best practices in audio.
Sound Field Control
Sound field control can be broadly interpreted as the process of generating a set of loudspeaker signals to deliver a particular listening experience over a listening area. The desired sound fields may be physically or perceptually defined, and cover a wide range of use cases: from delivering a uniform listening experience to multiple listeners over a wide area, to delivering a personalized listening experience to an individual listener inhabiting a small zone.
The papers selected here are a starting point for anybody wishing to explore these topics. There are broadly two themes. The first five papers give a flavor of research in (higher-order) Ambisonics (HOA) and Wave Field Synthesis (WFS). HOA synthesizes a sound field by expanding it into a set of spherical-harmonic basis functions around a listener, and while it was once the domain of enthusiasts with large loudspeaker arrays, it has found a natural home in audio for virtual reality (usually delivered via headphones, at present). WFS, on the other hand, synthesizes a sound field by driving loudspeakers on the boundary of the listening area so that the desired wavefront emerges from them (but it is still, to the author's knowledge, the domain of enthusiasts with large loudspeaker arrays). The fifth paper includes a comparison of WFS and HOA. The external links tab includes a review article discussing spatial sound delivery over loudspeakers in the depth that it deserves, far beyond what is possible here, and a link to the Sound Field Synthesis Toolbox.
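To make the idea of "expanding basis functions around a listener" concrete, the sketch below encodes a mono source into first-order Ambisonics (a four-channel B-format signal in the common ACN/SN3D convention). This is an illustrative sketch only, not taken from any of the selected papers: the channel gains are simply the zeroth- and first-order spherical-harmonic basis functions evaluated at the source direction.

```python
import numpy as np

def encode_foa(signal, azimuth, elevation):
    """Encode a mono signal into first-order Ambisonics (ACN/SN3D).

    The four channels (W, Y, Z, X) weight the signal by the zeroth- and
    first-order spherical-harmonic basis functions evaluated at the
    source direction. Angles are in radians; azimuth is measured
    anticlockwise from straight ahead.
    """
    gains = np.array([
        1.0,                                  # W: omnidirectional (order 0)
        np.cos(elevation) * np.sin(azimuth),  # Y: left-right (order 1)
        np.sin(elevation),                    # Z: up-down (order 1)
        np.cos(elevation) * np.cos(azimuth),  # X: front-back (order 1)
    ])
    return gains[:, None] * np.asarray(signal, dtype=float)[None, :]

# A source straight ahead excites only the W and X channels.
bformat = encode_foa(np.ones(4), azimuth=0.0, elevation=0.0)
```

In a full HOA pipeline these channels would then be decoded to a loudspeaker array (or to headphones via binaural rendering); higher orders simply add more basis functions, shrinking the reconstruction error over a larger region around the listener.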
The second theme is that of personal audio (or “sound zones”), where the aim is to deliver unique personalized audio content to multiple listeners sharing the same room. Mark Poletti’s important paper describing a least-squares approach is included first. Beyond this, the research targeted at an AES audience has tended to be application-driven — the papers included here describe implementations of sound zone algorithms in a reverberant room, a car, and a mobile device. The final two papers describe work towards understanding and modelling the listening experience of inhabiting a sound zone. As above, a review article and tutorial slides listed in the external links tab provide a more detailed introduction to the relevant literature.
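The least-squares idea behind much of the sound zone literature can be sketched in a few lines: given the (frequency-domain) transfer functions from each loudspeaker to microphone positions in a “bright” zone and a “dark” zone, solve a regularized least-squares problem for the loudspeaker weights that approximate a target pressure in the bright zone while driving the dark zone towards silence. The transfer functions below are random placeholders standing in for measured or modelled responses at a single frequency; the solver itself is the standard pressure-matching form, not the specific algorithm of any one paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder single-frequency transfer functions from L loudspeakers to
# microphones sampling a "bright" zone (Mb points) and "dark" zone (Md points).
L, Mb, Md = 8, 16, 16
G_bright = rng.standard_normal((Mb, L)) + 1j * rng.standard_normal((Mb, L))
G_dark = rng.standard_normal((Md, L)) + 1j * rng.standard_normal((Md, L))

# Stack both zones: unit target pressure in the bright zone, zeros in the dark.
G = np.vstack([G_bright, G_dark])
p_target = np.concatenate([np.ones(Mb, dtype=complex),
                           np.zeros(Md, dtype=complex)])

# Regularized least-squares (pressure-matching) loudspeaker weights:
#   w = (G^H G + lam * I)^{-1} G^H p_target
lam = 1e-2
w = np.linalg.solve(G.conj().T @ G + lam * np.eye(L),
                    G.conj().T @ p_target)

# Acoustic contrast: ratio of mean bright- to dark-zone energy, in dB.
contrast_db = 10 * np.log10(np.mean(np.abs(G_bright @ w) ** 2)
                            / np.mean(np.abs(G_dark @ w) ** 2))
```

In practice the weights are computed per frequency bin from measured room responses, and the regularization term trades reproduction accuracy against loudspeaker effort and robustness — a recurring theme in the application papers listed here.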
The University of Surrey hosted two AES International Conferences on the topic of sound field control (in 2013 and 2016); readers are also encouraged to peruse the JAES feature articles and conference proceedings listed below, to get a sense of the state of the art in sound field control engineering and perception.
Curator: Philip Coleman
Philip is a Lecturer in Audio at the Institute of Sound Recording, University of Surrey, UK. Previously, he worked in the Centre for Vision, Speech and Signal Processing (University of Surrey) as a Research Fellow on the project S3A: Future spatial audio for an immersive listening experience at home. He received a Ph.D. degree in 2014 on the topic of loudspeaker array processing for personal audio (University of Surrey), as part of the perceptually optimized sound zones (POSZ) project. His research interests are broadly in the domain of engineering and perception of 3D spatial audio, including object-based audio, immersive reverberation, sound field control, loudspeaker and microphone array processing, and enabling new user experiences in spatial audio.