Phase: What is it and why is it so important?

26.08.2024


Words by Lewis Noke Edwards

The complete guide on phase with a look at how to avoid cancellation.

Every sound you hear, whether that be recorded music, birds singing, the low hum of traffic, or the jarring hammer of an oblivious neighbour at 7am on a Sunday, is a complex concoction of what we call frequencies. ‘Frequency’ refers to how frequently a sound wave completes its full cycle within a second. The unit of measurement for cycles per second is Hertz, abbreviated to Hz. A sine wave, for example, is the tone produced when one single frequency is played, but in reality the sounds we hear are far more complex than a single frequency. Sounds tend to resonate within themselves, containing harmonics: whole-number multiples of the root frequency, the simplest being the octave at double (or half) the frequency of the root sound.

What you need to know:

  • Phase is largely the concept of thinking about how sound waves move through the air, or through a signal path, and ensuring they aren’t cancelling each other out by pushing and pulling a microphone diaphragm in opposite directions, resulting in a null.
  • Managing phase ensures that any information lost on the trip to our ears is lost intentionally, preserving an accurate recording of your source.
  • While it’s an inarguably important aspect of recording, once you’ve learned how it works you can absolutely break the rules and mess with it.


What is phase?

The tympanic membrane in our ear receives these sound waves and vibrates back and forth, sending signal to our brain to be perceived as sound. Speaker cones and microphones work in much the same way, but in reverse of each other: one turns signal into sound, the other turns sound into signal. Speakers, for example, receive signal from a source (having been amplified) and vibrate the cone back and forth (according to the frequency information received) to produce a sound. A 60Hz sine wave will push and pull the speaker cone back and forth (completing one cycle) 60 times per second in order to reproduce the sine wave successfully. If we introduce a 48Hz sine wave alongside it, the speaker must reproduce both at once: the cone has to trace the sum of the two waves, pushing and pulling at different rates simultaneously in a far more complex motion than either sine alone.

So what if we feed the speaker two 60Hz sine waves, but one is pushing through its cycle while the other is pulling? The speaker will not move at all, because it is simultaneously being pushed and pulled with the same amount of energy, so no sound will be produced. The two sine waves are perfectly out of phase with one another, meaning they’re at opposite stages of their cycle. While real-world cases are rarely this perfect, recordings that capture a source with multiple microphones at multiple distances can cause phase cancellation (i.e. information being lost because the speaker cone is being pushed and pulled with equal force), resulting in no movement and therefore no sound. Microphones, on the other hand, ‘hear’ sound via their diaphragm and send the signal to the output of the mic, while speakers receive signal and send the vibrations to the speaker cone to be heard by our ears, which are in turn receiving sound via vibrations to our tympanic membrane and… you get it.
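That pushed-and-pulled speaker idea is easy to check numerically. Here’s a minimal Python sketch (the 48kHz sample rate is just an illustrative choice) that sums a 60Hz sine with a copy shifted by half a cycle; the combined signal collapses to silence.

```python
import math

SAMPLE_RATE = 48_000  # samples per second; an illustrative studio rate
FREQ = 60.0           # Hz, as in the example above

def sine(freq, phase, n, rate=SAMPLE_RATE):
    """Generate n samples of a sine wave with a phase offset in radians."""
    return [math.sin(2 * math.pi * freq * i / rate + phase)
            for i in range(n)]

pushing = sine(FREQ, 0.0, SAMPLE_RATE)      # one full second of signal
pulling = sine(FREQ, math.pi, SAMPLE_RATE)  # 180 degrees out of phase

# Sum the two waves sample by sample, as a speaker cone would have to
combined = [a + b for a, b in zip(pushing, pulling)]
peak = max(abs(s) for s in combined)
print(f"peak of combined signal: {peak:.12f}")  # effectively zero
```

Flip the second wave’s phase offset back to 0.0 and the two sines reinforce instead, doubling in level: the other extreme of the same relationship.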

Why does phase matter?

Phew! Now that we understand what phase is, and how it can affect the sound we record, play back, and hear, why does it matter? Well, in order to make accurate recordings of sources, it is important that any information lost on the trip to our ears is lost intentionally. Take a drum kit as an example, and think about how many drums need to be mic’d up to get a full picture of the kit, as well as the room itself. The kick drum has a microphone on it, say two to three inches from the outside head, while the room mics, anything from 6ft to 10m and beyond away, are also picking up the kick drum, but that low end ‘thump’ takes a heck of a lot longer to reach the room mics (completing more cycles on the way) than it does to reach the kick drum mic. The kick drum mic is also picking up the snare drum, albeit a bit muffled as the snare ‘crack’ is obscured by the kick drum itself, but the snare sound takes an extra millisecond or so to reach the kick drum mic. Hang on – a millisecond? Aren’t cycles measured in cycles per second? Correct! Because the snare sound picked up by the kick drum mic is slightly delayed, it’ll be moving through its cycle slightly behind the snare drum mic, so the two diaphragms can end up being pushed and pulled in opposite directions. You can reposition microphones to be slightly closer or further away to ensure the cycles hit each mic in a more pleasing way, with minimal information lost.
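To put rough numbers on those arrival times, here’s a small Python sketch using the speed of sound in air. The mic distances are hypothetical, chosen only to illustrate the scale of the offsets involved.

```python
SPEED_OF_SOUND = 343.0  # metres per second in air at around 20°C

def delay_ms(distance_m):
    """Time for sound to travel distance_m metres, in milliseconds."""
    return distance_m / SPEED_OF_SOUND * 1000.0

# Hypothetical drum kit distances (illustrative, not a prescription)
distances = [
    ("snare to snare mic", 0.05),  # a 5cm close mic
    ("snare to kick mic", 0.40),   # snare bleed into the kick mic
    ("snare to room mic", 3.00),   # a room mic 3 metres away
]

for name, d in distances:
    print(f"{name}: {delay_ms(d):.2f} ms")
```

The bleed into the kick mic arrives roughly a millisecond late, exactly the offset the paragraph above describes, while the room mics lag by several milliseconds more.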

Once you’ve got your kick and snare sorted, you can move onto toms, cymbals, overheads and triggers, all the while keeping those original sounds in phase with everything else. Drums are an extreme example, but as such a loud and dynamic instrument, often requiring multiple mics for more modern recordings, there’s more bleed between the mics and therefore more potential phase problems, because these microphones are all hearing the same sources at slightly different times. Another thing to keep in mind is direct boxes and amplifier simulators. It’s pretty common to take a dry signal of a guitar or bass, either as a safety net, to blend in clean, or to re-amp later. However, if we think about the source for a moment, we’ll realise that because a DI box takes a direct signal, there’s no delay from the source, whereas if we’re blending the direct signal with a mic’d guitar amp, even the fraction of a millisecond the sound takes to travel from the speaker cone to the microphone (never mind the near-instant trip the signal makes through the amp’s input, preamp section, power amplifier and speaker cable) will slightly delay the mic’d sound, and when the two are played back together, they’ll be out of phase. Guitars and basses are an interesting example to discuss, as phasers and flangers are common modulation effects, and phase can be used to create new and interesting sounds!

How can we use and navigate phase?

So if phase is a commonly used modulation effect, can we use it to create entirely new sounds? We can! Phasers and flangers create somewhat similar sounds by messing with the phase relationship of the input signal. A flanger uses time-based processing to create the swooshing, modulating sound of its output: a copy of the signal is delayed by a short, controllable interval, and the volume, intensity and quality of that delayed copy affect how it plays and interweaves with the original. A phaser does a similar job but uses a chain of all-pass filters instead of a time-based delay to make the output oscillate and modulate against the original. With this in mind, what’s stopping you from slightly delaying a signal to create a subtle modulating ‘warmth’ across a guitar or bass, or even an ambient room mic of an acoustic instrument? You could manually automate a filter to come and go as it plays against the original audio signal.
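As a sketch of why a short delay changes tone at all, the Python below mixes a signal with a delayed copy of itself – a static, unmodulated flanger, which behaves as a comb filter. With a 1ms delay, frequencies at odd multiples of 500Hz land half a cycle out and cancel; the sample rate and delay length are assumptions for illustration.

```python
import math

SAMPLE_RATE = 48_000  # assumed sample rate
DELAY = 48            # 48 samples = 1ms at 48kHz

def comb_notches(delay_samples, rate=SAMPLE_RATE, count=3):
    """First few notch frequencies when a signal is mixed with a copy
    delayed by delay_samples: (2k + 1) / (2 * delay_seconds) Hz."""
    tau = delay_samples / rate
    return [(2 * k + 1) / (2 * tau) for k in range(count)]

def flange(signal, delay_samples):
    """Static flanger: add a delayed copy of the signal to itself."""
    return [s + (signal[i - delay_samples] if i >= delay_samples else 0.0)
            for i, s in enumerate(signal)]

print(comb_notches(DELAY))  # notches near 500, 1500, 2500 Hz

# A 500Hz tone sits exactly in the first notch and cancels out
tone = [math.sin(2 * math.pi * 500 * i / SAMPLE_RATE)
        for i in range(SAMPLE_RATE)]
peak_after_delay = max(abs(s) for s in flange(tone, DELAY)[DELAY:])
print(f"peak of flanged 500Hz tone: {peak_after_delay:.9f}")
```

A real flanger sweeps that delay time up and down with an LFO, which moves the notches through the spectrum and creates the familiar swoosh.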

When messing with phase, it’s important to keep the “3:1 rule” in mind. Without getting too deep into it, the “3:1 rule” teaches us that when miking a single source with at least two microphones, the second mic should be no closer than three times the distance of the closest mic to the source. This minimises audible phasing issues: the extra distance means the delayed copy of the sound arrives at a much lower level, so any clashing between the waves is far less noticeable. Obviously there are exceptions to this rule – speaker cabinets are often mic’d with multiple mics at similar distances, double snare mics, double vocal mics etc. – but it’s a good rule of thumb to keep front-of-mind when miking sources. With this in mind, messing with phase can extend to movements beyond just a tiny fragmented time shift of an auxiliary signal, like a flanger does.
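One way to see why the 3:1 rule works: assuming a simple point source and the inverse-square law, tripling the distance drops the level by roughly 9.5dB, so the late-arriving copy is too quiet to cause deep cancellation. A quick Python check (the 30cm close-mic distance is a made-up example):

```python
import math

def level_drop_db(near_m, far_m):
    """Level difference in dB between a close mic and a distant mic,
    assuming a point source and the inverse-square law."""
    return 20 * math.log10(far_m / near_m)

close = 0.30        # hypothetical close mic, 30cm from the source
second = 3 * close  # 3:1 rule: at least three times that distance
print(f"Bleed arrives about {level_drop_db(close, second):.1f} dB quieter")
```

Real rooms and real sources aren’t ideal point sources, but the ratio, and therefore the roughly 9.5dB figure, holds for any close-mic distance you plug in.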

If sound travels through air at approximately 340 metres/1100 feet a second, how can we use this delay to our advantage? The ‘crack’ of a snare takes several extra milliseconds (roughly 3ms per metre) to reach the room and ambient mics than it does to reach the snare mic that may only be an inch away. So what if we nudge the ambient mics back a few extra milliseconds in our DAW to emulate a larger room with the room mics all the way at the back? Because the delay puts the room mics well and truly more than three times the snare mic’s distance from the source, you’ve created a slight delay in the rooms that convincingly emulates a huge room.
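In a DAW, that trick usually means nudging the room-mic tracks later by some number of samples. A small Python helper (the sample rate and the 5-metre figure are assumptions for illustration) converts pretend extra distance into a sample offset:

```python
SAMPLE_RATE = 48_000    # assumed session sample rate
SPEED_OF_SOUND = 343.0  # metres per second in air

def extra_metres_to_samples(metres, rate=SAMPLE_RATE):
    """How many samples to nudge a room mic track later in the DAW to
    fake moving it `metres` further from the source."""
    seconds = metres / SPEED_OF_SOUND
    return round(seconds * rate)

# Pretend the room mics sit another 5 metres further back
print(extra_metres_to_samples(5.0))  # roughly 700 samples (~14.6ms)
```

Most DAWs also accept the offset in milliseconds directly, in which case the metres-to-milliseconds figure from earlier (about 3ms per metre) is all you need.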

Electrical Audio’s Head Engineer and audio scientist extraordinaire Steve Albini used this to great effect over 30 years of recording. Even with rooms at Electrical Audio that are arguably big ‘enough’, the sound of delayed room mics isn’t quite the same as an actual large room; the modulating, filtering result of delayed rooms is just another trick up the sleeve of an engineer to bring life to a recording. And if it works for drums, why wouldn’t it work for anything else? A delayed ambient mic on guitars, bass, or piano can introduce some ear candy even if it comes and goes throughout an arrangement, adding extra space and pizzazz to a sound. But surely this is messing with the overall image we’ve carefully crafted with all we’ve learnt about phase alignment, miking, and ratios? Yes, it absolutely does mess with the sound, but isn’t that what we’re all here to do? Well, not always.

Phase can be tricky

The concept of perfection in phase is hard to achieve. Perfect phase is a complex idea, and one that has only really come around recently as our recording tools have become more and more advanced. While phase itself can make or break a recording, perfect phase focuses on the making part. Perfect phase is, again without going too deep, the concept of recording something so precisely that every sound on the recording is in phase to the nth degree.

Take a snare drum, for example. For the microphone and snare to be in perfect phase, the mic would need to be positioned so accurately that when the snare is struck, and the skin momentarily indented, the microphone receives the first half (180 degrees) of the sound wave’s cycle; then, when the snare skin bounces back into a convex shape, the arriving wave aligns with the second half of the cycle and the mic’s diaphragm is pushed back.

For an electric guitar, as the speaker cone pushes and pulls to recreate its input, the microphone diaphragm hearing it would push and pull in unison, ensuring the signal hits the microphone perfectly and there is no break in phase at all. This means that when the sounds are all played back through speaker cones themselves, they should all push and pull in unison with each other, creating an arguably purer sound overall. This is all well and good in theory, but phase aligning something so perfectly is a time-consuming prospect. Though in the race for perfection, no stone, nor sound wave, is left unturned.

So there you have it, we’ve started to clear away some of the confusion associated with phase. While it’s an inarguably important aspect of recording, once you’ve learned how it works you can absolutely break the rules and mess with it. Phase is largely the concept of thinking about how sound waves move through the air, or a signal path, and ensuring they are not cancelling each other out by pushing and pulling a microphone in opposite directions resulting in null.

Multi-mic’d sources (assuming all mics will be playing back at the same time) can be messy, but with a few little tricks and tips we can ensure we’re retaining the character of the original recording. Effects exist to push and pull signal to maximise the impact of phase being in or out, and they inspire new ways to think about phase and space within a room or recording.

Some trends will come and go, but the relationships between microphones, rooms and their sources are forever. It’s not a phase, mum.