Each month an industry expert highlights a topic of importance to the AES community. Listen, Learn, and Connect with advances in technology and best practices in audio.
Music Production for Emerging Audio Formats
Music production is changing. There have never been so many diverse avenues for creating and consuming music, both artistically and technologically. The world of recorded music has exploded to incorporate hi-res audio, streaming, spatial audio, binaural sound, interactive music, and object-based audio. Our production tools and methods have evolved accordingly, sometimes driving new practices and sometimes responding to innovative creative ideas.
For example, modern music production facilities include both high-end hybrid studio setups and advanced software-focused laptop systems, so the potential (and necessary) skillset of a studio engineer is extremely broad. Playback formats include multi-dimensional loudspeaker systems, from standard 5.1 or 7.1 surround to three-dimensional spatial formats such as 22.2, Auro-3D, and Dolby Atmos. Equally, these formats can now be encoded using binaural algorithms to give a simulated 3D experience over standard two-channel headphones. Furthermore, object-based audio brings new opportunities for interactivity with music, allowing components of sound to be manipulated in the playback device: the user can modify content and audio stem levels in a mobile app, for example, or algorithms can automatically adapt, loop, and reconstruct music for different broadcast, gameplay, or virtual reality scenarios.
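To illustrate the principle behind object-based audio, the sketch below mixes a set of audio "objects" (stems carrying user-adjustable gain metadata) into a single output at playback time. This is a minimal, hypothetical illustration only: the data structure, function name, and sample values are assumptions for clarity, not any real object-based audio format or renderer.

```python
# Minimal sketch of object-based stem mixing. Each "object" pairs an
# audio stem with metadata -- here just a gain the listener could adjust
# in a playback app. All names and values are illustrative assumptions.

def mix_objects(objects):
    """Sum gain-scaled stems, sample by sample, into one output channel."""
    length = max(len(obj["samples"]) for obj in objects)
    out = [0.0] * length
    for obj in objects:
        for i, sample in enumerate(obj["samples"]):
            out[i] += obj["gain"] * sample
    return out

# Two short example stems; the listener has turned the drums down by half.
objects = [
    {"name": "vocals", "gain": 1.0, "samples": [0.5, -0.5, 0.25]},
    {"name": "drums",  "gain": 0.5, "samples": [0.2,  0.4, -0.2]},
]
print(mix_objects(objects))
```

Because the stems and their metadata travel separately to the playback device, the same content can be re-rendered with different gains, loops, or layouts for each listening scenario, rather than being fixed at mixdown.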
The new diversity of music applications and commercial formats naturally requires music production and studio engineering approaches to evolve, particularly in the areas of recording, mixing, and mastering, with careful consideration of the listener experience. It’s no surprise that cutting-edge discussions between creative and technical experts in these areas take place at many presentation, workshop, and tutorial events at AES conventions and conferences. Listed below are a number of papers and articles from the AES E-Library, along with other resources, that touch on new methods of music production for emerging audio formats; this field is evolving rapidly, and the number of AES resources in this area will certainly continue to grow.
Curator: Rob Toulson
Rob Toulson is Professor of Creative Industries: Commercial Music at the University of Westminster, London. He is a research leader in the field of commercial music and has collaborated with many international organisations in the music industry. Rob is an innovative music producer and studio engineer working across most musical genres. In particular, he has worked as recording, mixing, and mastering engineer on a number of albums for Mediaeval Baebes, who have previously topped the UK classical album chart. He also developed and co-produced the ground-breaking Red Planet EP and accompanying iPhone music app for the artist Daisy and The Dark.
Rob's research covers both creative and technical fields. He is Co-Chair of the Innovation in Music Conference, an active participant in the Recording and Production track at AES Conventions, founder of the Cambridge regional branch of the AES, and a former committee member of the AES British Section. Rob is co-author of "Fast and Effective Embedded Systems Design: applying the ARM mbed" published by Newnes. He is also inventor of the novel iDrumTune iPhone App, which assists percussionists with drum tuning.
This month Francis Rumsey interviews Rod Selfridge about his paper in the July/August AES Journal, “Creating Real-Time Aeroacoustic Sound Effects Using Physically Informed Models.” Rod explains that aeroacoustic effects are those created when air moves around an object, such as a sword swishing. Click these links to see other video clips he has made: propeller sound, propeller demo, Aeolian Bullroarer.
(We hope you’ll forgive the “aeroacoustic” snake-hissing effects that occur occasionally in the background to this interview, which seem to be an artefact of the teleconferencing system we have to use to make these interviews possible. That system also uses heavy audio data reduction, so the sound quality can leave something to be desired at times.)