Audio Media International can exclusively reveal that France-based Amadeus will debut a new spatial sound processor called HOLOPHONIX at the 2018 Prolight + Sound trade show in Frankfurt next week.
The HOLOPHONIX processor brings together several different spatialisation techniques including Wave Field Synthesis, High-Order Ambisonics, Distance-Based Amplitude Panning, and more, enabling placement and movement of sources in a 2D and/or 3D space.
“Development of the HOLOPHONIX system is undoubtedly the most ambitious and the most exciting project we have initiated for a very long time,” said Michel Deluc, Amadeus’ R&D manager.
The new system allows the user to select and control a series of highly advanced 2D and 3D sound algorithms designed at IRCAM-based STMS Lab (Sciences et Technologies de la Musique et du Son), located in Paris, and supported by CNRS (National Center for Scientific Research), Sorbonne University, French Ministry of Culture and IRCAM (Institut de Recherche et de Coordination Acoustique/Musique).
Amadeus’ relationship with IRCAM started in the late 1990s, and over the years it has designed more than 339 custom loudspeakers installed within IRCAM’s variable acoustics hall (called Espace de Projection) for research on high-end sound field recreation systems, including 2D Wave Field Synthesis and 3D Ambisonics.
“This project brings together a plurality of talents from the most prestigious French musical, theatrical, and scientific institutions – and their experience and knowledge are as rich as they are complementary,” added Amadeus marketing manager Gaetan Byk.
“Our long-term and close relationships with the teams at Paris-based IRCAM Institute, and their trust in Amadeus for almost 15 years, has inevitably led us to get closer to integrating into the HOLOPHONIX processor a large part of their technologies related to the spatialisation of sound.
“We wanted to offer our future users a simple, intuitive and ergonomic tool, perfectly optimised for the needs and demands of the theatrical, musical and performance fields. The cooperation of contributors from prestigious French institutions – among them the first users and beta-testers of our spatial sound processor – was essential.”
Amadeus also collaborated with several top engineers from world-renowned institutions, including Jean-Marc Harel from La Gaîté Lyrique Theater, Marc Piera from Chaillot National Theater, Dominique Bataille and Samuel Maitre from the Théâtre du Vieux-Colombier – one of the three theaters of the world-famous Comédie-Française – and Dewi Seignard from Les Champs Libres Cultural Center in Rennes.
The HOLOPHONIX processor is housed in a 3U chassis, machined from aluminum and anodized. Its front panel is machined in three dimensions from an aluminum block, its styling drawn from the aesthetic and technical aspects of Amadeus’ audiophile hi-fi product development.
The hardware is fully redundant, comprising dual redundant power supply units and solid-state drives, for complete reliability. HOLOPHONIX is Dante-compatible and can integrate with standard commercial DAW software as well as Dante-enabled devices providing an added control layer.
“The technological gains offered by the Dante protocol and its widespread adoption by professionals led us to consider its implementation within our systems,” added Deluc.
“Besides its standard Dante compatibility, the system can also be configured on request for MADI, RAVENNA, or AES67 formats.
“The input/output matrix of the HOLOPHONIX processor allows the user to choose the rendering mode for each of the incoming channels. It natively handles 128 inputs and 128 outputs at 24-bit/96kHz resolution, but can be extended to 256 or 384 inputs and outputs,” Deluc explained.
The processor is structured around a powerful multichannel algorithmic reverberation engine. It allows users to combine several different artificial reverberations, homogeneously combining sound materials and fine-tuning the perceived sound depth. Reflection calculators allow the user to create several virtual sound spaces. High spatial resolution impulse responses can also be inserted in the convolution engine, in order to re-compose acoustics.
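At its core, inserting an impulse response into a convolution engine means convolving the dry signal with that measured response. A minimal sketch of the operation in plain Python (direct-form convolution; a real multichannel engine would use partitioned FFT convolution, but the underlying maths is the same):

```python
# Minimal sketch of impulse-response convolution: y[n] = sum_k h[k] * x[n-k].
# Direct form, for illustration only; production engines use FFT partitioning.

def convolve(x, h):
    """Convolve a dry signal x with an impulse response h."""
    y = [0.0] * (len(x) + len(h) - 1)
    for n, xn in enumerate(x):
        for k, hk in enumerate(h):
            y[n + k] += xn * hk
    return y

# A unit impulse through any IR reproduces the IR itself:
print(convolve([1.0, 0.0], [0.5, 0.25, 0.125]))  # → [0.5, 0.25, 0.125, 0.0]
```

Replacing `h` with a high-spatial-resolution impulse response per output channel is, conceptually, how a measured acoustic can be re-composed over a loudspeaker array.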
“The HOLOPHONIX processor creates an extremely advanced platform which is able to mix, reverberate and spatialise sound contents played from various devices using several different spatialisation techniques in two or three dimensions,” explained Thierry Coduys, the Chief Technology Officer who was intimately involved in the creation of the HOLOPHONIX processor.
The hardware offers a quasi-unlimited number of spatialisation buses, each one able to run one of the different sound algorithms available, including: Higher-Order Ambisonics (2D, 3D), Vector-Base Intensity Panning (2D, 3D), Vector-Base Amplitude Panning (2D, 3D), Wave Field Synthesis, Angular 2D, k-Nearest Neighbor, Stereo Panning, Stereo AB, Stereo XY and Binaural.
“This allows the user to achieve control of the sound sources using different techniques. For each project, the algorithms are evaluated, listened to and selected on site, according to their coherence with the main electro-acoustic system and the artistic expectations of the composer or performers,” added Coduys.
“The Binaural algorithm has been designed to help engineers and producers prepare their production using a conventional pair of headphones, giving them the experience of a full 3D image of their mix, and to design sound object trajectories. The processor also includes around a hundred head-related transfer functions (HRTFs) available in the SOFA file format.”
The head-related transfer function (HRTF), also sometimes known as the anatomical transfer function (ATF), is a response that characterizes how an ear receives a sound from a point in space. The Audio Engineering Society (AES) has defined the SOFA file format for storing spatially oriented acoustic data like head-related transfer functions (HRTFs).
The HOLOPHONIX processor also works with show control software and many popular DAWs that are compatible with the Open Sound Control (OSC) protocol – including Ableton Live, Cubase, Digital Performer, IanniX, Logic Pro, Mandrin, Max, Nuendo, PureData, Pyramix, QLab, Reaktor, REAPER, Reason, Traktor – allowing composers to add a control layer to existing software, hardware or network systems used for original in-situ creations, such as installations or performances involving graphic, video and/or sound content.
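OSC itself is a simple binary format, usually carried over UDP: a null-padded address string, a type-tag string, then big-endian arguments. As a rough sketch of what any of the tools above send (the `/source/1/xyz` address and the port are hypothetical – the real HOLOPHONIX OSC namespace is documented by Amadeus), a message can be assembled by hand:

```python
import struct
import socket

def osc_message(address, *floats):
    """Encode a minimal OSC 1.0 message with float32 arguments.
    The address pattern used below is illustrative, not Amadeus' actual API."""
    def pad(b):
        # OSC strings are NUL-terminated and padded to a 4-byte boundary.
        return b + b"\x00" * (4 - len(b) % 4)
    msg = pad(address.encode("ascii"))
    msg += pad(("," + "f" * len(floats)).encode("ascii"))  # type-tag string
    for f in floats:
        msg += struct.pack(">f", f)  # big-endian float32 argument
    return msg

# e.g. move a hypothetical source 1 to x=1.0, y=0.0, z=2.0:
packet = osc_message("/source/1/xyz", 1.0, 0.0, 2.0)
# to transmit: socket.socket(socket.AF_INET, socket.SOCK_DGRAM).sendto(packet, (host, port))
```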
Technical details of the algorithms the HOLOPHONIX hardware system controls reveal its power and depth:
• VBAP (Vector Base Amplitude Panning) 2D or 3D
The VBAP technology utilizes the data for each specific speaker position. It uses the two (in 2D) or three (in 3D) speakers closest to the desired position of the source. This approach is based on the directional component of the vectors corresponding to those nearest speakers.
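In the 2D case this reduces to solving a 2×2 linear system for the gains of the nearest speaker pair and normalising them to constant power. A minimal sketch under those assumptions (angle conventions are illustrative):

```python
import math

def vbap_2d(source_az, spk_az_1, spk_az_2):
    """2D VBAP gains for one speaker pair (angles in degrees).
    Solves p = g1*l1 + g2*l2 for the source direction p, then
    normalises the gains to constant power (g1^2 + g2^2 = 1)."""
    def unit(az):
        a = math.radians(az)
        return (math.cos(a), math.sin(a))
    px, py = unit(source_az)
    x1, y1 = unit(spk_az_1)
    x2, y2 = unit(spk_az_2)
    det = x1 * y2 - x2 * y1              # determinant of the vector base [l1 l2]
    g1 = (px * y2 - py * x2) / det       # Cramer's rule for the 2x2 system
    g2 = (py * x1 - px * y1) / det
    norm = math.hypot(g1, g2)            # constant-power normalisation
    return g1 / norm, g2 / norm

# A source midway between speakers at +/-30 degrees gets equal gains:
print(vbap_2d(0.0, 30.0, -30.0))
```

The 3D case is the same idea with a 3×3 base built from the three nearest speakers.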
• DBAP (Distance-Based Amplitude Panning) 2D
The DBAP technology is based on amplitude panning, applied to a series of speakers. The gain applied to each speaker is calculated according to an attenuation model based on the distance between the sound source and each speaker.
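A simplified sketch of such an attenuation model: every speaker receives the source, with a gain that falls off with its distance from the source at a chosen rolloff, and the full set of gains normalised to constant total power. Parameter names here are illustrative, not the HOLOPHONIX API:

```python
import math

def dbap_gains(source, speakers, rolloff=6.0, blur=0.2):
    """Distance-based amplitude panning gains, simplified model.
    Gains fall off at `rolloff` dB per doubling of distance; `blur`
    avoids a singularity when the source sits exactly on a speaker.
    The result is power-normalised (sum of squared gains = 1)."""
    a = rolloff / (20.0 * math.log10(2.0))   # distance exponent for the amplitude
    gains = []
    for sx, sy in speakers:
        d2 = (source[0] - sx) ** 2 + (source[1] - sy) ** 2
        gains.append(1.0 / (d2 + blur * blur) ** (a / 2.0))
    k = 1.0 / math.sqrt(sum(g * g for g in gains))
    return [k * g for g in gains]

# A source centred between two speakers gets equal gains:
print(dbap_gains((0.0, 0.0), [(-1.0, 0.0), (1.0, 0.0)]))
```

Because every speaker always contributes, DBAP needs no sweet spot and no assumption about the listener's position.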
• High-Order Ambisonics 3D
The Ambisonics technology also utilizes the data for each specific speaker position. It recreates an acoustic field by decomposition/recomposition based on spherical harmonics. It aims to rebuild the pressure field, but only around a specific point (the ‘sweet spot’).
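The decomposition is easiest to see at first order, where the spherical harmonics reduce to one omnidirectional component plus three figure-of-eight components. A sketch of first-order encoding in the classic FuMa B-format convention (HOLOPHONIX's actual channel ordering and normalisation may differ; higher orders simply add further harmonic components):

```python
import math

def encode_fuma_foa(sample, azimuth_deg, elevation_deg):
    """Encode a mono sample arriving from a given direction into
    first-order Ambisonic B-format (FuMa convention)."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    w = sample / math.sqrt(2.0)               # omnidirectional component
    x = sample * math.cos(az) * math.cos(el)  # front-back figure-of-eight
    y = sample * math.sin(az) * math.cos(el)  # left-right figure-of-eight
    z = sample * math.sin(el)                 # up-down figure-of-eight
    return w, x, y, z

# A source dead ahead contributes only to W and X:
print(encode_fuma_foa(1.0, 0.0, 0.0))
```

Decoding then recombines these components into speaker feeds using the known speaker positions, which is why the reconstruction is exact only near the sweet spot.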
• WFS (Wave Field Synthesis) 2D
The WFS technology can rebuild a sound field over an extended area. It recreates a wavefront by superposing secondary sound waves radiated by a speaker network.
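The superposition works because each speaker radiates a delayed (and attenuated) copy of the signal, timed so the individual wavelets sum into the desired wavefront. A sketch of the delay part only, assuming a virtual point source behind the array; a full WFS driving function also applies distance-dependent gains and filtering:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, at roughly room temperature

def wfs_delays(source, speakers):
    """Per-speaker delays (seconds) that make a speaker array radiate
    a wavefront as if emitted from a virtual source behind it.
    Delays are relative to the speaker nearest the virtual source."""
    dists = [math.hypot(source[0] - x, source[1] - y) for x, y in speakers]
    nearest = min(dists)
    return [(d - nearest) / SPEED_OF_SOUND for d in dists]

# Virtual source 2 m behind the centre of a 3-speaker line:
speakers = [(-1.0, 0.0), (0.0, 0.0), (1.0, 0.0)]
print(wfs_delays((0.0, -2.0), speakers))
```

The centre speaker, being nearest the virtual source, fires first; the outer speakers fire slightly later, so their wavelets arrive in phase along the synthesised wavefront across the whole listening area rather than at a single sweet spot.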