After being shown an impressive demonstration of the technology in London recently, Adam Savage followed up with Etienne Corteel, CTO of Sonic Emotion Labs, the company behind a new turnkey solution for 3D sound in various environments, for a more in-depth rundown.
Designed and manufactured in Paris, Wave I from Sonic Emotion Labs claims to offer a new way of introducing 3D listening within a venue using Wave Field Synthesis, delivering uniform spatial sound reinforcement for the entire audience and enabling users to easily and precisely position and move sound sources for added control via the software.
Sounds interesting, no? Then let Etienne Corteel from the firm explain further…
What are the standout features/selling points of the Wave I system?
Wave I is a combination of post-processing functionalities for optimum sound field rendering in an extended listening area. It combines multiple state-of-the-art functionalities (loudspeaker management, system tuning, matrixing) with unique 3D rendering in one consistent product.
What does Wave I consist of in terms of actual equipment? The hardware processor and two software interfaces?
It comes with a complete software suite for system design, system tuning, real-time performance and 3D sound content production. The tools are designed to work either online or offline, so everything can be prepared in advance, including content production. We offer offline rendering software that allows 3D rendering on any conventional loudspeaker layout (from 5.1 to 22.2, including all Auro3D configurations) or through headphones using binaural technology. The transition between preparation and onsite operation is as simple as routing the output of the sequencer to an external soundcard to feed the Wave I.
What do you believe were the advantages of using Wave Field Synthesis (WFS) technology?
WFS is a robust 3D sound technology that adapts well to various room dimensions and shapes. It is also a power-efficient technology, using only one quarter to one half of the speakers for any target source position. Our WFS algorithms go beyond the traditional formulation, which requires a large number of loudspeakers and is usually restricted to the horizontal plane. We have enabled the use of a limited number of loudspeakers (from five upwards), the extension of WFS to 3D with a small number of loudspeakers for height, and the combination of multiple arrays for optimum sound coverage.
Is this what allows you to eliminate the ‘sweet spot’ and focus the audience’s attention on the sound source (e.g. a musician) rather than the speakers?
WFS is the only 3D sound technology that enables the user to create virtual sources at a defined position in space. All other technologies provide direction control but fail at creating a proper wave front emanating from a precise position, including distance, rather than just from the location of the nearest speaker. For example, WFS allows you to position the sound at the exact location of the singer, providing a coherent audio/visual experience for any seat in the audience.
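The core idea behind this wave front reconstruction can be sketched in a few lines. The following is a simplified illustration, not Sonic Emotion's actual algorithm: for a virtual point source behind a speaker array, each loudspeaker fires with a delay proportional to its distance from the virtual position and a gain that falls off with that distance, so the superposed signals approximate a wave front radiating from the virtual source. The function name and array geometry here are hypothetical.

```python
import math

SPEED_OF_SOUND = 343.0  # metres per second, at roughly 20 °C

def wfs_delays_and_gains(virtual_source, speaker_positions):
    """For a virtual point source behind the array, compute a per-speaker
    delay (seconds) and amplitude weight so the superposed speaker signals
    approximate a wave front emanating from the virtual position.
    Delays are normalised so the nearest speaker fires at t = 0."""
    distances = [math.dist(virtual_source, s) for s in speaker_positions]
    d_min = min(distances)
    delays = [(d - d_min) / SPEED_OF_SOUND for d in distances]
    gains = [d_min / d for d in distances]  # simple 1/r spherical spreading
    return delays, gains

# Example: a virtual source 2 m behind a 5-speaker line array spaced 0.5 m apart.
speakers = [(x * 0.5, 0.0) for x in range(5)]  # speakers along the x axis at y = 0
source = (1.0, -2.0)                           # virtual source behind the array
delays, gains = wfs_delays_and_gains(source, speakers)
```

Because every listener hears the same delay pattern radiating outward from the virtual position, the perceived source location stays put across the whole audience area, which is what removes the classic sweet spot.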
What applications is it suitable for, and could you give us some examples of where the technology has already been used around the world?
Wave I has been used in multiple contexts. It applies to spatial sound reinforcement, maintaining sound level coverage while providing spatial positioning of sounds onstage. We have several fixed installations in Europe (e.g. Théâtre de Chaillot, Institut du Monde Arabe, Stuttgart Schauspielhaus), more than 160 shows over four years in classical and jazz open-air festivals, and an arena tour with French artist M Pokora. Wave I has also been installed in clubs, offering 3D sound rendering for DJs even with stereo inputs only. We also have multiple installations all around the world (art centres, museums and universities).
How easy is it to install and is there a lot to learn in order to get started?
Our tools have been designed in close connection with users and are recognised as self-explanatory. Creating content for Wave I can be learned in a couple of hours. Designing a setup is very simple, provided you have rough loudspeaker positioning information. The remaining aspects of system tuning primarily consist of applying loudspeaker presets (if available), aligning subwoofer level and adjusting classical parametric EQs to optimise sound performance.
So there are lots of possibilities with loudspeaker configurations? And is the software compatible with all the major DAWs?
Wave I can be used in many different loudspeaker configurations. All we recommend when designing a system and choosing loudspeakers is to ensure that any listener will be in the field of at least three speakers, in order to limit the audibility of individual speakers. Therefore, we recommend using loudspeakers with wide horizontal acoustic dispersion and smooth directivity characteristics. In the session at Rambert [Ed: where the London demo took place] with composer Roberto Rusconi, we used DX12 coaxial speakers from APG, which provide such characteristics.
The setup may only cover the stage area, or comprise an additional surround system, support arrays and ceiling speakers for source positioning in height.
And our software is compatible with all major DAWs including Ableton Live with a custom integration.
Are there any future developments at the company that you can tell us about?
Sonic Emotion Labs is the proud co-ordinator of the Edison 3D research project, funded by the French national agency of research (2013-2017). It gathers experts (in signal processing, acoustics, human-machine interfaces and psychoacoustics) together with expert users. The goal is to develop tools that offer a consistent 3D sound production workflow between the studio and the live situation. Radio France is a key partner in this project, already using our tools in multiple live and web-based productions. They have successfully created the “cinema for your ears” events, in which the audience experiences 3D mixes that are then made available as online binaural productions (http://nouvoson.radiofrance.fr/).