As digital technology grows more accurate, the tools we use keep improving, and it becomes easy to fall into using them any way we see fit.
Although having more options is a good thing, many approaches deserve to be ruled out even when they seem to get the job done at first glance. I'll briefly run through a few that apply to simpler tools like EQ and compression.
+ EQ should always be attempted first by mechanical means, i.e. adjusting tone by moving the microphone or the source. These changes happen in the acoustic domain, before any DSP is involved, so they often sound more natural than anything a processor can do after the fact.
- EQing poor-sounding instruments is usually an exercise destined to fail. Ringing toms are best sorted out by having a chat with the drummer. A guitar that sounds very treble-heavy may be down to a guitar tech mistakenly dialling in too much top end; a quick chinwag may solve the problem more effectively than reaching for the EQ.
- EQ shouldn't be used to highlight the sound of the microphone. When we say a microphone sounds great, we probably mean that it has certain characteristics which come together to capture the sound of a particular instrument well. This distinction is important because we want to encourage musicians to value the sound of their instruments as we try to capture them. In turn, this encourages the manufacture and sale of well-crafted instruments.
The instruments should already have a great sound; the engineer's responsibility is good microphone choice and positioning, which translates sound through a system rather than colouring it – no one goes to a gig to listen to the sound of the microphone package. There will always be some colouration, but it should not be the aim.
+ EQ can then instead be used to attenuate frequencies which are not fundamental or crucial to the source in question, increasing headroom in the parts of the system post-EQ and freeing up frequency bandwidth for the mix.
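As a rough sketch of this idea, here is a minimal first-order high-pass filter in Python. The filter design, cutoff, and test signals are illustrative choices of mine, not anything prescribed above; the point is simply that cutting low-frequency content a source doesn't need leaves more headroom for what matters.

```python
import math

def one_pole_highpass(x, cutoff_hz, fs):
    """First-order high-pass: rolls off energy below cutoff_hz,
    e.g. clearing low-end rumble a vocal mic picks up but doesn't need."""
    rc = 1.0 / (2.0 * math.pi * cutoff_hz)
    a = rc / (rc + 1.0 / fs)
    y, prev_x, prev_y = [], 0.0, 0.0
    for s in x:
        # Standard RC high-pass difference equation
        prev_y = a * (prev_y + s - prev_x)
        prev_x = s
        y.append(prev_y)
    return y

# A 50 Hz rumble component is attenuated far more than a 1 kHz tone
fs = 48000
t = [n / fs for n in range(4800)]
hum = [math.sin(2 * math.pi * 50 * ti) for ti in t]
tone = [math.sin(2 * math.pi * 1000 * ti) for ti in t]
hp_hum = one_pole_highpass(hum, 120.0, fs)
hp_tone = one_pole_highpass(tone, 120.0, fs)
print(max(abs(s) for s in hp_hum) < max(abs(s) for s in hp_tone))  # True
```

A real desk's high-pass will be steeper and better behaved than this one-pole sketch, but the principle – attenuate what the source doesn't need – is the same.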
- Nine times out of ten, worrying about tuning individual singers to their microphone isn't necessary. A correctly EQ'd vocal microphone should capture the natural voice of anyone singing into it. This is why it usually helps to hear what the singer sounds like acoustically, so the same character can be recognised in the headphones.
- Using compression to correct poor microphone technique may not always be the best idea – especially early in a soundcheck. When poor technique shows up (through inexperience or a lapse in attention), the first response is often to dial in the threshold and attack, then set a ratio that deals with the peak in question. While this treats the symptoms, it may create problems further down the line, as whoever is using the microphone develops an unrealistic judgement of its dynamic response.
Deferring compression until nearer the end of a soundcheck means you will need less of it, as the singer learns to self-regulate their own dynamics.
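To make the threshold-and-ratio arithmetic mentioned above concrete, here is a hedged sketch of a hard-knee compressor's static gain curve. The function name and figures are my own illustrative choices, and attack/release behaviour is deliberately ignored.

```python
def compressor_gain_db(level_db, threshold_db, ratio):
    """Static gain change for a hard-knee downward compressor.

    Levels above the threshold are scaled by 1/ratio; levels below
    pass unchanged. Illustrative only: real units add attack/release.
    """
    if level_db <= threshold_db:
        return 0.0
    out_db = threshold_db + (level_db - threshold_db) / ratio
    return out_db - level_db  # negative value = gain reduction

# A peak at -6 dBFS against a -18 dBFS threshold at 4:1 sits 12 dB
# over threshold; the compressor lets 3 dB of that through, so it
# applies 9 dB of gain reduction.
print(compressor_gain_db(-6.0, -18.0, 4.0))   # -9.0
print(compressor_gain_db(-24.0, -18.0, 4.0))  # 0.0 (below threshold)
```

The numbers show why this masks technique problems: a singer swallowing the mic by 12 dB only hears themselves get 3 dB louder, so they never learn the real cost of the movement.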
+ Compression can be used to reduce the level variation a microphone picks up as a performer moves, making it less sensitive to changes in distance and angle. In effect this widens the usable pickup area and allows for more movement – most applicable to vocals. If you imagine an SPL plot for a speaker system showing how sound can be focused or dispersed acoustically, something similar happens in reverse with a microphone's polar pattern: compression doesn't change the pattern itself, but it evens out how audible movement within it becomes.
A wider, more even pickup adapts the standard behaviour of a close-up microphone, which is designed mostly to avoid spill and achieve a good signal-to-noise ratio. While close proximity is great for capturing the nuances of a source, all that nuance isn't always helpful for the ear trying to place each instrument spatially. The only time we naturally hear such nuance is when we are very close to something playing softly, such as someone leaning in and whispering to us.
The sound of something at a moderate to loud volume is perceived very differently as it travels greater distances and mixes with other sources in the acoustic field. Compression accounts for this psychoacoustic phenomenon: the sound needs to reach the end listener as though the microphone were listening from an audience position. Since no microphone technology exists that can produce this result, compression can be used to account for the loss of dynamics over distance and create a much more balanced wavefront.
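The loss of level over distance described above follows, in the idealised free-field case, the inverse-square law for a point source: roughly 6 dB per doubling of distance. A small illustrative calculation (the distances are example values of mine, not figures from the text):

```python
import math

def spl_drop_db(d_near_m, d_far_m):
    """Free-field (inverse-square) level drop between two distances
    from a point source: 20*log10(d_far/d_near), i.e. ~6 dB per doubling."""
    return 20.0 * math.log10(d_far_m / d_near_m)

# A voice captured 5 cm from the capsule vs heard 10 m away in the room:
print(round(spl_drop_db(0.05, 10.0), 1))  # 46.0 dB quieter at the audience
print(round(spl_drop_db(1.0, 2.0), 1))    # 6.0 dB per doubling
```

A real room adds reflections and absorption on top of this, but the sketch shows the scale of the gap between what a close mic hears and what an audience position hears.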
+ Because close-proximity microphone response is so dynamic, we ideally want plenty of sensitivity during softer to moderate parts of a song, but not so much during heavy, high-energy sections. Because not everything can be loud, compression is useful for keeping our groups of instruments in check as the level of a song rises during peak sections like choruses – especially as the vocals will be less capable of competing acoustically. Dave Rat has discussed this on his YouTube channel.
- Using compression to make a signal loud enough is also a bad idea. Most of us know that if a source isn't as loud as it needs to be coming into the desk, it's the input level itself that needs to be raised – not high ratios and make-up gain ramped up artificially. Compression is a dynamics tool and makes a poor gain or amplitude tool; the fact that a compressor has a gain section doesn't mean it should replace input gain. The source may appear louder after compression, but this will be an RMS increase, not a peak increase.
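The RMS-versus-peak point can be sketched numerically. This toy example (signal values and compressor settings are arbitrary choices of mine) compresses a transient over a quiet bed, applies make-up gain so the peaks match the original, and shows that the RMS has risen while the peak hasn't:

```python
import math

def peak_and_rms(samples):
    peak = max(abs(s) for s in samples)
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return peak, rms

# Toy signal: a loud transient followed by a quiet bed.
signal = [0.9, -0.9] + [0.1, -0.1] * 49

# Crude static 2:1 compression above 0.5 (linear domain)...
compressed = [s if abs(s) <= 0.5 else math.copysign(0.5 + (abs(s) - 0.5) / 2, s)
              for s in signal]
# ...then make-up gain so the peak matches the original again.
makeup = 0.9 / max(abs(s) for s in compressed)
compressed = [s * makeup for s in compressed]

p1, r1 = peak_and_rms(signal)
p2, r2 = peak_and_rms(compressed)
# Peaks are identical, but the RMS (average level) has risen:
# the signal reads "louder" without its peaks going anywhere.
```

This is exactly why compression feels like it adds loudness while doing nothing for actual peak level into the desk.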
+ Using compression creatively to push the boundaries of an instrument's dynamic envelope is a great advantage of processing. It can create a new quasi-instrument or change an instrument beyond its physical dimensions, e.g. turning toms into timpani.
The fundamental principles apply in both live and studio environments, although their choice and application will vary with different priorities. Live engineering is more concerned with gain before feedback and system headroom, whereas studio work aims to make sure the dynamics of a song translate well on all playback systems, including cheap laptop speakers.
There is a lot we can now do as compression and EQ become ever more advanced tools – but there are also a few things we should think twice about doing just because we can.
Aston Fearon is an experienced freelance event sound engineer – specialising in mixing front-of-house – and has worked with a number of venues, PA hire and event production companies in the UK.