AMI hears from Dirk Noy, Partner and Director of Applied Science and Engineering at WSDG, about the company’s Acoustic Simulation Lab in Basel, Switzerland…
How did the lab first come to fruition?
The room opened in September 2018, and we invited some people over for a ceremonial launch event. We worked on it with a local team here, and it took around two to three months to build.
It’s kind of a modular approach – the room already existed, so we just had to put some furniture in there along with acoustic treatment for absorption and diffusion on the walls and ceiling. When we moved here about 18 months ago, the room sat empty for a long time. It’s in the basement, which is tough to use for anything else, so it got to a point where we finally decided to make use of the space.
How does the testing process work?
When running acoustic simulations, you start with a dry signal like speech or piano recorded in an anechoic environment; if you listen to it, it sounds completely dead, with no reverb on it. This is used as a base file: the simulation program adds the reverberation as calculated, and you create a stereo WAV file that you would normally listen to on headphones. That doesn’t always work perfectly, though – I mean, who wears headphones in a concert hall?
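The "dry signal plus simulated reverb" step described above is, at its core, a convolution: the simulation yields an impulse response per playback channel, and convolving the anechoic recording with each one produces the auralization. A minimal sketch in Python – the noise signal and decaying tails below are synthetic placeholders, not actual simulation output:

```python
import numpy as np

def auralize(dry, impulse_responses):
    """Convolve a dry (anechoic) signal with one simulated room
    impulse response per playback channel; each row of the result
    feeds one loudspeaker channel."""
    return np.stack([np.convolve(dry, ir) for ir in impulse_responses])

# Synthetic stand-ins: one second of noise at a reduced sample rate,
# and two exponentially decaying tails as hypothetical impulse responses.
fs = 8000
rng = np.random.default_rng(0)
dry = rng.standard_normal(fs)
irs = [np.exp(-np.arange(fs // 4) / 1000.0) * rng.standard_normal(fs // 4)
       for _ in range(2)]

out = auralize(dry, irs)
print(out.shape)  # one row per playback channel, dry length + tail
```

In full convolution mode the output is longer than the dry signal by the length of the reverb tail, which is exactly the "ringing out" you hear at the end of an auralized clip.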
What really opened our eyes was when we started to play around with VR goggles. Users are placed in a virtual 3D space, and as they turn their heads with headphones on, they hear no change in the audio regardless of where they are looking in the space, because the headphones are basically glued to their ears. However, if you get rid of the headphones and install a number of loudspeakers, you can turn your head to the right and actually hear the reflections off the right-side wall, reproduced by the loudspeaker that sits in that area. We then started testing the simulation software itself, because we needed to make sure our simulation program could actually generate the multidimensional files.
We’re using a program that can create a 5.1 simulation – which we then employ on two height levels – and it is also set up for 9.1, or rather 9.2, because we have two subs. We have a nine-channel audio file being replayed from a DAW to all nine loudspeaker channels simultaneously. The program takes around two days to generate the files, because it takes a dry signal and generates all of that reverb content on top of it. We’ve not done it yet, but you could listen to a regular 5.1 or 7.1 mix in the lab as well, if inclined to do so.
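The nine-channel file replayed from the DAW is simply nine mono signals interleaved frame by frame. A small sketch using only Python's standard library – the sine tones here are made-up stand-ins for the simulated reverb channels, and the file name is hypothetical:

```python
import math
import struct
import wave

fs = 48000        # sample rate
n_channels = 9    # one per loudspeaker in the lab's layout
n_frames = fs // 10  # a tenth of a second keeps the example small

def sample(ch, t):
    """Placeholder content: each channel gets its own sine tone."""
    return math.sin(2.0 * math.pi * (220 + 55 * ch) * t / fs)

# Interleave the nine channels into one multichannel 16-bit PCM WAV,
# the kind of file a DAW could route to the nine loudspeakers.
with wave.open("lab_9ch.wav", "wb") as wav:
    wav.setnchannels(n_channels)
    wav.setsampwidth(2)      # 16-bit PCM
    wav.setframerate(fs)
    for t in range(n_frames):
        frame = struct.pack(
            "<9h", *(int(sample(ch, t) * 16000) for ch in range(n_channels))
        )
        wav.writeframes(frame)
```

Each frame holds one sample per channel in channel order, which is why a DAW can address the loudspeakers individually from a single file.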
To do the simulation process correctly, we’re using a simulation program called CATT-Acoustic, where you build a 3D model of your room inside the software and then dress it up with materials on each surface. From that information, if carefully entered and calibrated correctly, you can insert the audio file into the program and it will calculate the simulation files overnight. You can use pretty much any audio file you want – up to now we’ve done small concert halls with piano, speech and a choir, as well as an ice hockey arena and a train station. You could even put a piece of machinery inside a factory hall or another functional space and see what happens when you bring in a sound absorption panel, etc.
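CATT-Acoustic's geometrical simulation is far more detailed than this, but the basic effect of dressing a surface with absorptive material can be illustrated with the classic Sabine estimate, RT60 = 0.161 · V / A, where V is the room volume in m³ and A is the total absorption area (surface area times absorption coefficient). All figures below are hypothetical, not from any real project:

```python
def rt60_sabine(volume_m3, surfaces):
    """Sabine reverberation-time estimate.
    surfaces: list of (area_m2, absorption_coefficient) pairs."""
    total_absorption = sum(area * alpha for area, alpha in surfaces)
    return 0.161 * volume_m3 / total_absorption

# A made-up 500 m^3 hall with hard, reflective surfaces.
untreated = [
    (100.0, 0.02),  # concrete floor
    (100.0, 0.10),  # ceiling
    (120.0, 0.05),  # walls, painted plaster
]
rt_before = rt60_sabine(500.0, untreated)

# Replace 40 m^2 of wall with absorption panels (alpha ~ 0.9).
treated = [
    (100.0, 0.02),
    (100.0, 0.10),
    (80.0, 0.05),   # remaining bare wall
    (40.0, 0.90),   # absorption panels
]
rt_after = rt60_sabine(500.0, treated)
print(round(rt_before, 2), round(rt_after, 2))  # 4.47 1.55
```

Even this crude estimate shows why auditioning "with panel" versus "without panel" is such a dramatic comparison: a modest area of absorptive material cuts the reverberation time to roughly a third.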
How exactly are you putting the lab into practice?
I’ll use the train station as an example, because we recently got a dry speech signal from the train operator here in Switzerland – the actual chime and voice they use on the platform – and we were therefore able to mimic that space and test it acoustically.
This is not a production space but a dedicated listening lab – that is the main purpose, although it must be said that it’s not in use every week. A project has to be of a certain size to make it meaningful. In the last couple of months we’ve had three or four successful projects where some client teams have come in to look, listen and talk.
In terms of products, clients usually don’t care too much about this side, as long as it sounds good. We’re using the manufacturers’ data to produce the simulation files, so in the acoustical program we can select a loudspeaker type or model, and it will of course change the sound of the space depending on the product chosen.
What do you hope to achieve with the lab in the long term?
It was a significant investment, so we’re very motivated to make use of it. Let’s take the train station example again. If we have a client who is a station architect, they may have very little understanding of acoustics, so if we can let them listen to two or three acoustic treatment options with two or three loudspeaker options, we can have intelligent conversations with them about acoustical topics. They can then determine quite clearly what they like and don’t like, even if they’re not familiar with the technicalities and terminology.
With the lab, we’ve tried to be simple but still comprehensive enough that our hearing perceives it as a three-dimensional sound field. I think at the moment, 9.1 is actually a good compromise for representing reality – which is totally complex, with hundreds of thousands of sources – as best we can, while still being manageable in a project environment.
Because we are an international firm, we now have testing labs in New York and Berlin, which provide our global clients with these same options. The Swiss lab however was the catalyst for the original idea, and the room itself really does sound amazing.
I guess the main purpose is that it’s an application-based setup that facilitates dialogue between acoustical experts and clients, audio architects and users of the space. The real value is that users can compare different degrees of acoustical treatment and so on – it makes it possible to base an educated decision on an issue that will make a meaningful improvement to the client’s environment and workflow. Apart from its role as an acoustic simulation lab, it’s a decision facilitator and dialogue enhancer!