I’ve often thought that the way we engineer live sound must seem quite odd to non-engineers. We use a multitude of different microphones to capture the various sounds being produced on stage, which we carefully place not just to isolate the individual sounds but also to remove as much of the natural reverberation of the room as possible. We then spend a great deal of time and effort subtly manipulating and combining those sounds, which we then coat in artificial reverberation so that the resulting mix sounds natural to our ears.
What an odd way of working – wouldn’t it be much more logical to simply deploy a pair of microphones at the front of the stage so that the sound could be relayed to the audience as if you were standing in the sweet spot right in front of the ensemble?
One of the problems with this idea is that while microphones are like our ears, in the sense that they transduce vibrations into electrical signals, no microphone exists that can match the dynamic range and frequency response of the average human ear. But even if microphones existed that were as good as our ears, deploying them in a matched pair in front of an ensemble would, of course, rob us of the ability to mix the music in real time – not to mention making it extremely difficult to provide any kind of meaningful fold-back for the musicians.
The presence of fold-back speakers is one of the key differences between studio recording and live sound reinforcement; this has a dramatic effect on our microphone choices due to the spectre of feedback. That’s why dynamic microphones are so commonly used on stage – we’ve learnt to harness and exploit their limited frequency response to maximise gain before feedback. The mechanism of dynamic microphones also gives us a helping hand: the need to overcome the inertia of the coil assembly provides a fast-acting compression that helps smooth out the transient response and enables the handling of much higher sound pressure levels.
It’s interesting to note that this inherent compression acts in a similar way to the natural compression resulting from the narrowing of the ear canal, which is why dynamic microphones sound so pleasing to us, despite their obvious flaws (and explains why I’ve never been satisfied with the results of using electret condenser microphones on drums).
Thus condenser microphones have traditionally been reserved for instruments not often required to be loud in the fold-back, such as cymbals, drum overheads, pianos and percussion. However, due to innovations in their design, condenser microphones are becoming much more common on stage, aided in a big way by the increasingly widespread use of in-ear monitoring.
One key area in which they’re making inroads is vocals: territory previously dominated by dynamic classics such as the Shure SM58 is now being challenged by offerings from all the leading manufacturers. The improvements in clarity are obvious – very few singers I’ve tried handheld condensers on have asked for their dynamic microphone back – and they also exhibit impressive feedback rejection, which enables their use in even the most challenging acoustic environments.
Microphone choices are a very personal thing, their use inextricably tied to the planned processing and the intended role in the final mix. I’ve always been a strong believer that if you get the right microphone and put it in the right position you shouldn’t need to apply much EQ to get the sound you want. However, it can be quite difficult to avoid the habitual use of particular microphones; you get used to the results gained from particular models, so tend to reach for what’s familiar. This is why I will always try to find the time to experiment with non-standard microphone choices (and positioning). I always enjoy that moment when you’re a week or so into a tour and you’ve developed an efficient daily routine and have the time to try a few things out; the results are invariably surprising and insightful.
One final piece of advice: a solid understanding of the difference between phase and polarity can be invaluable when using multiple microphones on stage. Everyone should already be aware that when you deploy two microphones on opposite sides of an instrument, such as the top and bottom of a snare, the signals captured will be of opposing polarity, so the polarity invert switch should be used. But a lot of people do the same thing when dual-miking the kick drum and are surprised when the resulting comb filtering sucks away all the bottom end. In this instance there’s a timing difference between the two signals, which is not corrected by inverting the polarity of one of them.
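To see why flipping polarity doesn’t fix a timing difference, it helps to look at the numbers. This is a minimal sketch (my own illustration, not anything from the article) that sums two equal-level copies of a signal, one delayed as if the mics were about 34cm apart, and prints the resulting level at a few frequencies – with and without a polarity flip:

```python
import cmath
import math

SPEED_OF_SOUND = 343.0  # metres per second, roughly, at room temperature


def comb_response(freq_hz, delay_s, polarity_flipped=False):
    """Level in dB of two equal signals summed, one delayed by delay_s.

    0 dB means the same level as one signal alone.
    """
    sign = -1.0 if polarity_flipped else 1.0
    summed = 1.0 + sign * cmath.exp(-2j * math.pi * freq_hz * delay_s)
    return 20 * math.log10(max(abs(summed), 1e-12))


# Two kick mics about 34cm apart => roughly 1ms difference in arrival time
delay = 0.34 / SPEED_OF_SOUND
for f in (50, 100, 250, 500, 1000):
    print(f"{f:>5} Hz: in-polarity {comb_response(f, delay):+6.1f} dB, "
          f"flipped {comb_response(f, delay, True):+6.1f} dB")
```

With the polarity left alone, the low end sums nicely (around +6dB at 50Hz) but a deep null appears near 500Hz; flip the polarity and the null moves away, but now the bottom end cancels instead – which is exactly the “sucked-out” kick sound described above.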
However, thanks to the prevalence of digital desks and their ability to delay individual outputs, a better solution is now available. A delay of just 1ms on the microphone closest to the source will time-align the two signals and avoid any destructive interference, resulting in a better sound when the two signals are combined – sound travels roughly 34cm in 1ms, which is why this works particularly well when the two microphones are about 34cm apart.
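The 1ms/34cm rule of thumb falls straight out of the speed of sound. As a quick sketch (again my own illustration, assuming sound travels at about 343m/s), here is the alignment delay for a few mic spacings:

```python
SPEED_OF_SOUND = 343.0  # metres per second, roughly, at room temperature


def alignment_delay_ms(spacing_m):
    """Delay to apply to the closer mic so both signals arrive together."""
    return spacing_m / SPEED_OF_SOUND * 1000.0


for spacing_cm in (10, 20, 34, 50):
    print(f"{spacing_cm:>3} cm spacing -> "
          f"{alignment_delay_ms(spacing_cm / 100.0):.2f} ms delay")
```

A 34cm spacing comes out at just under 1ms, so dialling in 1ms on the closer mic of a typical inside/outside kick pair gets you very close to perfect alignment; for other spacings, scale the delay accordingly.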
The use of mics is the first vital step in the signal chain so it’s crucial to give it some thought and make appropriate choices. The key, as with many aspects of live sound, is to trust your ears.
Andy Coules is a sound engineer and audio educator who has toured the world with a diverse array of acts in a wide range of genres.