
Characteristics of Sound - "Phase" 2

Thanks for coming back around to continue exploring the intricacies of the sounds we hear. From here, we’re going to dig into a few more characteristics of sound. If you haven’t gone through the first part, where we talk about frequency and amplitude, check it out here.

Phase

Moving forward we are going to explore phase, velocity, wavelength, and harmonics. Phase is one of the most important of the bunch. Phase describes a point within a wave’s cycle, measured in degrees, and it lets us describe the time relationship between two or more sine waves of the same frequency. This is how we can use phase in audio to our benefit: we can take two sine waves and explore how they interact when they sit at different phases.

The phase difference between two sound waves of the same frequency moving past the same fixed location is found by comparing the same position within each wave’s cycle. It is expressed as a fraction of one wave cycle.
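To make that concrete, here is a minimal Python sketch (assuming NumPy is available; the 100 Hz tone and quarter-cycle offset are just example values) that builds two sine waves of the same frequency and converts their offset from a fraction of a cycle into degrees:

```python
import numpy as np

freq = 100.0                    # both waves share the same frequency (Hz)
t = np.linspace(0, 0.02, 2000)  # 20 ms of time, two full cycles

fraction_of_cycle = 0.25        # wave_b lags wave_a by a quarter cycle
wave_a = np.sin(2 * np.pi * freq * t)
wave_b = np.sin(2 * np.pi * freq * t - 2 * np.pi * fraction_of_cycle)

# Express the fraction of one wave cycle in degrees: 0.25 * 360 = 90
print(f"Phase difference: {fraction_of_cycle * 360:.0f} degrees")
```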


https://dosits.org/wp-content/uploads/2017/07/Phase-1.500.png

With that being said, two sine waves of the same frequency that are perfectly aligned have a phase difference of 0 degrees and are referred to as “in phase”. When two sound waves containing the same frequencies (e.g., a doubled vocal track) are in phase, they reinforce each other and boost the combined level.



https://dosits.org/wp-content/uploads/2017/07/Phase3b.600.png

When one of two sine waves of the same frequency is shifted by one-half (½) of a cycle, or 180 degrees, the sound waves are referred to as “out of phase”. Using phase to remove certain frequencies from a mix is not an uncommon practice. In fact, people often use phase to remove instruments from a song to create a karaoke version or a backing track, and in live broadcast work, producers will use phase to remove background audio. Phase can be a very important tool in a young engineer’s pocket. Since a phase difference can sit anywhere between 0 and 360 degrees, it is not uncommon for there to be some “degree” of phase issues (yek yek yek).
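Here is a quick sketch of both cases in Python (again assuming NumPy; the 100 Hz tone is arbitrary): summing a wave with an in-phase copy doubles the peak, while summing it with a 180-degree-shifted copy cancels it almost completely:

```python
import numpy as np

freq = 100.0
t = np.linspace(0, 0.01, 1000)
wave = np.sin(2 * np.pi * freq * t)

in_phase = wave + wave                                      # 0 degrees apart
out_of_phase = wave + np.sin(2 * np.pi * freq * t + np.pi)  # 180 degrees apart

print(f"Peak of in-phase sum:     {np.max(np.abs(in_phase)):.3f}")     # ~2.0
print(f"Peak of out-of-phase sum: {np.max(np.abs(out_of_phase)):.3f}")  # ~0.0
```

That near-total cancellation is exactly the trick behind the karaoke and backing-track extractions mentioned above.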

Most commonly an engineer will find phase problems during a live production, and this is mostly due to acoustics. You never know what the conditions of your set will be on the day - the only thing you know for certain is where the stage is - sometimes. The acoustics of an environment can cut or boost certain frequencies. Metal reflects soundwaves strongly and boosts the mids, making your sound brighter. Cloth absorbs soundwaves and makes them duller overall. Wood does a pretty good job of reflecting soundwaves, meaning that what you put in, you get back. This depends on the type of wood, of course - an untreated wall of 2x4s will reflect very differently than, say, a treated wood roof - so keep that in mind when listening for these kinds of phase relationships with the environment.


http://www.physicsclassroom.com/mmedia/waves/swf.gif

One result of acoustic phase issues we can run into is standing waves. While we will explore this more in a later section (lots of physics here, too), you should know that a standing wave is produced whenever two waves of the same frequency interfere with one another while traveling in opposite directions along the same medium. In terms of audio production - or more specifically, live production - the medium we are referring to is the air in the room we are mixing in. In practice, problematic standing waves usually involve frequencies below 300 Hz. This could be anything from stage rumble to a loose-cannon tom-tom. In our article about standing waves, we go further into how this can happen in a room, what to look out for when staging a room, and how to prevent standing waves from ever becoming an issue.
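As a rough preview of the physics, here is a sketch of the axial-mode formula often used for rectangular rooms, f = n × v / (2 × L); the 5 m wall-to-wall distance and the number of modes shown are assumptions for illustration:

```python
# Axial room modes: f_n = n * v / (2 * L), where v is the speed of sound
# and L is the distance between two parallel walls.
speed_of_sound = 344.0  # m/s at 20 degrees C (see the Velocity section below)
room_length = 5.0       # hypothetical 5 m wall-to-wall distance

for n in range(1, 6):
    mode_hz = n * speed_of_sound / (2 * room_length)
    print(f"Mode {n}: {mode_hz:.1f} Hz")
# Mode 1: 34.4 Hz, Mode 2: 68.8 Hz, ... all of these land well below
# 300 Hz, which is why standing-wave trouble lives in the low end.
```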



https://cdn.dpamicrophones.com/media/DPA-Images/comb-graph.jpg

The other thing you will have to look out for when dealing with phase in acoustic environments is comb filtering. As with standing waves, comb filtering can get complicated, so we will explore it more later. Here are some basics to introduce the idea: acoustic comb filtering happens when a sound combines with a delayed copy of itself within a short time frame, usually under about 25 milliseconds (ms). Acoustically, this happens when part of a sound reflects off a surface on its way to the microphone. The reflected soundwave arrives slightly late, canceling some frequencies while reinforcing others and carving the comb-shaped pattern of notches the effect is named for. This becomes more of an issue when recording dialogue close to a surface, or when a person records themselves on a microphone in an intimate space.
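As a taste of the math, here is a small sketch of where the notches land for a hypothetical 1 ms reflection; a delayed copy cancels any frequency whose half-cycle lines up with the delay:

```python
# Summing a sound with a delayed copy of itself notches out frequencies
# where the delay equals half a cycle (180 degrees out of phase):
#   f_notch = (2k + 1) / (2 * delay)
delay_s = 0.001  # hypothetical 1 ms reflection (roughly 34 cm of extra path)

notches_hz = [(2 * k + 1) / (2 * delay_s) for k in range(5)]
print(notches_hz)  # [500.0, 1500.0, 2500.0, 3500.0, 4500.0]
```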

Stereo micing, or placing two microphones on a source, will create phase issues if not done correctly. The angle between the microphones is important here, and that is something we will talk about later. For now, just know that if you hear something funny and you are using two mics, move them around until the sound is clearer and more detailed.

If you aren’t entirely sure whether there is a phase issue in your mix, you can check using a few methods; most consoles should be able to perform at least one of them:

  • Use the mono button on the mixer to send all of the channels through one speaker, or set up your configuration for a mono output. This sums both the Left and Right sends to one channel, and any phase mistakes will be dramatically exposed, whether the song has already been recorded or is being performed live. Check this as often as possible - many venues run mono systems to save some money.
  • Flipping the phase (polarity) on one or more channels will let you hear whether your stereo micing technique is helping or hurting the mix. If your mic setup is correct, flipping the polarity should at least mostly cancel the source out of the speakers. This can be done live and will help show you the differences in mic placement in real time.
  • Some consoles, as well as external pieces of hardware, come equipped with phase correlation meters, or you can buy one to patch in. A phase meter gets right to the point and tells you whether something is in or out of phase, so you can adjust accordingly and see your results live. (The sketch after this list shows the simple math behind such a meter.)
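For the curious, here is a minimal sketch of the math a correlation-style phase meter runs on: the normalized correlation between the two channels, reading +1 for fully in phase, -1 for fully out of phase, and near 0 for unrelated signals. The function name and test tones here are just illustrative:

```python
import numpy as np

def phase_correlation(left, right):
    """Normalized correlation between two channels: +1 means fully in
    phase (mono-compatible), -1 fully out of phase, ~0 uncorrelated."""
    denom = np.sqrt(np.sum(left**2) * np.sum(right**2))
    return np.sum(left * right) / denom if denom else 0.0

t = np.linspace(0, 0.01, 1000)
sig = np.sin(2 * np.pi * 440 * t)
print(phase_correlation(sig, sig))   #  1.0 -> in phase
print(phase_correlation(sig, -sig))  # -1.0 -> out of phase
```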

https://media.sweetwater.com/api/i/f-webp__q-82__ha-b719695a62f149ee__hmac-0f68d77752d856722bb5c935bcb7e8743aba8c2d/images/items/750/MBP2-large.jpg.auto.webp

Velocity

Whew. That was a lot, right? Phase is important - probably as important as frequency and amplitude - which is why we won’t overload you here. There will be more to come on this, so stay sharp! In the meantime, we’re moving on to velocity.

In the sense of sound, velocity is the speed at which sound waves travel. Sound waves travel at about 1,130 ft per second in ideal conditions - that ideal condition being 68 degrees Fahrenheit (roughly 344 m/s at 20 degrees Celsius). The speed sound travels at corresponds to the temperature of the medium it travels through: if the medium is hotter, the soundwaves travel faster; if it is colder, they travel slower. This information is important when mixing in a live situation or when dealing with standing waves. Anyone who has mixed an outdoor show with live instruments will attest that temperature can change your whole set.
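If you want to put numbers on that, a commonly used linear approximation for the speed of sound in air is v ≈ 331.3 + 0.606 × T, with T in degrees Celsius. Here is a quick sketch:

```python
def speed_of_sound_mps(celsius):
    """Common linear approximation for the speed of sound in air."""
    return 331.3 + 0.606 * celsius

print(speed_of_sound_mps(20))  # ~343.4 m/s, close to the 344 m/s above
print(speed_of_sound_mps(0))   # ~331.3 m/s on a cold outdoor stage
print(speed_of_sound_mps(35))  # ~352.5 m/s at a hot summer festival
```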

Wavelength

As we mentioned before, wavelength is the length of one full cycle of a (sound) wave, from one peak or trough to the next peak or trough (compression to compression, or rarefaction to rarefaction). You can find a wavelength you don’t know by dividing the speed of sound by the frequency of the sound. The lower a frequency is, the longer its wavelength; the higher the frequency, the shorter its wavelength - and the more directional it becomes as well. We explore this a little in the frequency section of the first part of “Characteristics of Sound”.
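Here is that division in practice, using the 344 m/s figure from the velocity section (the example frequencies are arbitrary):

```python
speed_of_sound = 344.0  # m/s at 20 degrees C

for freq_hz in (50, 300, 1000, 10000):
    wavelength_m = speed_of_sound / freq_hz
    print(f"{freq_hz:>6} Hz -> {wavelength_m:.2f} m")
# 50 Hz is almost 7 m long, while 10 kHz is under 4 cm - part of why
# high frequencies behave so much more directionally.
```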

Harmonics

Harmonics are one of the most important things to think about when it comes to creating the mood, or timbre, of your mix. Harmonics can be thought of as accompanying tones that ride along with a soundwave, creating richness and character in a sound. This relationship starts with a fundamental frequency.

When you use an oscillator to play a fundamental frequency, an oscilloscope (if one is attached to the console or built into the plugin) will display it as a sine wave - the frequency broken down to its rawest form. Otherwise, the speakers will simply emit a tone at the set frequency in its most basic form. You can play with an online tone generator here to get an idea of what these fundamental frequencies sound like. You can choose the frequency (Hz) or the actual note next to the frequency selector on the right (A4, B2, C#6). Feel free to step away for a second and come back once you’re satisfied.

Fundamental frequencies hardly ever come unaccompanied. Soundwaves come with harmonic frequencies, which create the timbre we talked about previously. The spectrum of combinations produced by different materials and playing techniques is how instruments got their distinct voices. Instruments that sound smooth and mellow have less harmonic information, so the fundamental frequency is more apparent - think of the flute, clarinet, or xylophone. Instruments that sound edgier and harsher have more harmonic information, so the fundamental frequency is less apparent - think of the tuba, trombone, or mellophone.

  • Note: You can figure out the harmonics associated with a fundamental frequency by multiplying the fundamental by 2, 3, 4, and so on (see the sketch after this list). If you play Low E on a bass guitar (E1), the fundamental would be about 41 Hz.
    • Fundamental note E1 = 41 Hz
    • Second Harmonic 82 Hz (41 x 2)
    • Third Harmonic 123 Hz (41 x 3)
    • Fourth Harmonic 164 Hz (41 x 4)
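
And the same arithmetic as a tiny sketch you can extend to as many harmonics as you like:

```python
fundamental_hz = 41  # Low E on a bass guitar (E1), as above

for n in range(1, 5):
    print(f"Harmonic {n}: {fundamental_hz * n} Hz  ({fundamental_hz} x {n})")
# Harmonic 1 is the fundamental itself: 41, 82, 123, 164 Hz
```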

Harmonics are essential when it comes to actually mixing. Engineers will often boost harmonics instead of the fundamental frequency when trying to set a certain mood or create a desired tone. Instead of boosting 300 Hz to hear more “bass” from your bass guitar, you could try boosting around 900 Hz (the third harmonic) to bring more sound out of the neck, making the notes pop while avoiding the destructive low-end frequencies.

Harmonics are broken into evens and odds when it comes to things like synthesizers. This isn’t saying that a note like E3 is an odd harmonic - even and odd refer to the harmonic number above the fundamental (second, fourth, and so on versus third, fifth, and so on). Even harmonics are smoother and easier on the listener’s ear, while odd harmonics create an edgier, more aggressive tone. Pieces of musical hardware that use vacuum tubes, like microphone preamps and amplifiers, take advantage of both even and odd harmonics. We’ll go into how vacuum tubes work later, but for now, if you are curious, here is a good video about them.
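If you want to see this for yourself, here is a NumPy sketch built on a standard piece of signal lore: a symmetric waveshaper (like tanh clipping) applied to a sine adds only odd harmonics, while biasing it asymmetrically - loosely analogous to a single-ended tube stage - introduces even harmonics too. The drive and bias values here are arbitrary:

```python
import numpy as np

sr = 48000
t = np.arange(sr) / sr
sine = np.sin(2 * np.pi * 100 * t)  # 100 Hz fundamental

symmetric = np.tanh(3 * sine)        # symmetric clipping -> odd harmonics
asymmetric = np.tanh(3 * sine + 0.5)  # biased (asymmetric) -> adds evens

def level_db(signal, freq_hz):
    """Magnitude (dB) at freq_hz; 1 s at 48 kHz gives 1 Hz per FFT bin."""
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    return 20 * np.log10(spectrum[freq_hz] + 1e-12)

for f in (200, 300):  # 2nd (even) and 3rd (odd) harmonics of 100 Hz
    print(f"{f} Hz: symmetric {level_db(symmetric, f):.1f} dB, "
          f"asymmetric {level_db(asymmetric, f):.1f} dB")
# The 200 Hz (even) line only shows up once the shaper is asymmetric.
```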

From here we will continue to explore the rest of the characteristics of sound. Go ahead and take a second to explore harmonics and phase again, making sure you fully grasp those concepts. We cannot overstate how important these two are. While you’re at it, run back over frequency and amplitude in your head. If you need a refresher, you can revisit the first characteristics of sound.
