EnginEARing Part 7: The Frequency Chart II – Frequency and Arrangement
By Glen Stephan on July 13 2011 12:00 PM | Permalink | Author Info
In EnginEARing Part 6 we began our look at how the frequency chart can help us develop our listening skills by helping us hear and understand certain frequency-based sonic characteristics and descriptions. Now we'll look at how that chart provides a significant, if somewhat indirect, clue to the importance of musical arrangement, and how directing our ear's attention to that aspect of our music can often reveal why some mixes work and sound better than others.
VIVA LA SIMILARITY
If we change the layout of our frequency chart slightly by eliminating the vertical space between the instruments and filling in each instrument's full range with one solid color, we notice one very important overall fact: the instruments' frequency ranges largely overlap and share much of the same general territory (Fig. 1), with most instruments delivering most of their potential sound energy somewhere roughly between 140 Hz and 6 kHz.
This makes complete sense if we think about it, as most instruments can play many of the same notes in many of the same musical octaves. Of course there are exceptions and extremes; tubas and piccolos don't share any fundamental frequencies, for example. Include harmonic overtones, however, and the tuba can actually generate harmonics that reach well into the piccolo's fundamental range.
As for other less extreme (and more often used) instruments, a middle C is a middle C regardless of whether it's played on a piano, a trumpet or a guitar. The amplitude and time-domain profile of the resonances and harmonics when that middle C is played differs from instrument to instrument; it's these differences that change the timbre and give each instrument its own distinctive sound. But the actual frequency of the fundamental note called middle C, and the frequencies of that note's harmonics, are the same regardless of which instrument is playing it.
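The harmonic arithmetic behind the last two paragraphs can be sketched in a few lines of Python. Harmonics are simply integer multiples of the fundamental, whatever the instrument. The middle C and low-tuba-note frequencies below are standard pitch values, but the piccolo range is a rough assumption for illustration only:

```python
# Harmonics of a note are integer multiples of its fundamental frequency,
# regardless of which instrument plays the note.
def harmonics(fundamental_hz, count=10):
    """Return the fundamental plus its first (count - 1) overtones."""
    return [fundamental_hz * n for n in range(1, count + 1)]

MIDDLE_C = 261.63      # Hz -- same fundamental on piano, trumpet, or guitar
TUBA_LOW_BB1 = 58.27   # Hz -- a typical low tuba note

# Approximate piccolo fundamental range (assumption for illustration):
PICCOLO_LO, PICCOLO_HI = 587.0, 4186.0  # roughly D5 up to C8

# Which of a low tuba note's harmonics land inside the piccolo's range?
shared = [f for f in harmonics(TUBA_LOW_BB1, count=40)
          if PICCOLO_LO <= f <= PICCOLO_HI]

print("Middle C harmonics:", harmonics(MIDDLE_C, count=5))
print(len(shared), "tuba harmonics fall in the piccolo's fundamental range")
```

The middle C harmonic list comes out the same no matter which instrument you imagine playing it; only the relative loudness of each harmonic (the timbre) would differ.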
VIVA LA DIFFERENCE
What this tells us is that the various instruments rarely settle naturally, in and of themselves, into their own "spaces" or "roles" within the frequency spectrum. Seeing how much overlap there is between instruments shows how easily such similarity can lead to a buildup of competing sounds: instruments that are not selected, arranged and mixed properly will mask each other and get in each other's way.
Our own ears can also exacerbate the potential problem of frequency overload in our mixes. When we lay a plot of relative average human ear sensitivity by frequency over our modified chart (the pink curve in Fig. 1, with higher points in the curve representing greater ear sensitivity), we can see that we have maximum sensitivity in the 400 Hz to 6 kHz range. Because this range also happens to be right in the energy wheelhouse for a large number of instruments, it's really easy to get an unwanted buildup of sound at many of these frequencies if we're not careful about our mix arrangement and equalization. Especially common are buildups of "mud" around 250-400 Hz, "honking" and "tinniness" around 1-2 kHz or so, and "harshness" in the area of 3-5 kHz. (NOTE: these are all only common approximate ranges for illustration; only your own ears can tell you whether they are meaningful frequencies for your specific mix.)
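As a rough sanity check, you can tabulate where a part's fundamentals and harmonics land relative to these problem bands. The band edges below simply encode the approximate ranges above, and the example note is arbitrary; only your ears decide whether a flagged band is actually a problem in your mix:

```python
# Flag which "problem bands" the harmonics of a note fall into.
# Band edges follow the rough illustrative ranges discussed above.
PROBLEM_BANDS = {
    "mud": (250.0, 400.0),
    "honk/tinniness": (1000.0, 2000.0),
    "harshness": (3000.0, 5000.0),
}

def flag_bands(fundamental_hz, num_harmonics=12):
    """Return {band name: [harmonic frequencies that land in that band]}."""
    freqs = [fundamental_hz * n for n in range(1, num_harmonics + 1)]
    hits = {}
    for name, (lo, hi) in PROBLEM_BANDS.items():
        in_band = [f for f in freqs if lo <= f <= hi]
        if in_band:
            hits[name] = in_band
    return hits

# Example: an open-A bass note (110 Hz in standard tuning) puts its
# 3rd harmonic (330 Hz) squarely in the "mud" region.
print(flag_bands(110.0))
```

A tool like this only tells you where energy *could* pile up; whether it actually does depends on how hard each harmonic is being driven, which is exactly why the ear has the final say.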
This leads us to the need to use our ears objectively: is the mix failing because of an issue with the instrument arrangement itself, because the mix does not support that arrangement, or because of some frequency-related issue with a cause other than the arrangement? When critically listening to our music, how well the instruments "get along" with each other is one of the very first things we can listen for, and it can make or break the quality of our mixes.
One of the most common problems in mixes done by those first learning their way around do-it-yourself engineering is a lack of attention to the arrangement, both in the actual playing of the musical arrangement itself and in the support of that arrangement in the recording and the mix. Everybody wants to do their best when that red record light comes on, and often that mistakenly means everybody wants to play their instrument as if it has the lead part all the time, or to be the loudest instrument in the mix, or both.
More often than not, this results in a virtual wall of sound that, at best, lacks the clarity or definition that many would interpret as a "professional-sounding" recording. The best-sounding commercial productions are those with fairly well defined roles for each instrument and vocal in the composition, put together like pieces in a jigsaw puzzle, all different shapes and sizes, but ones that fit together in their own places.
One of the primary ways this is done is by creating an arrangement or mix in which each instrument seems to dominate one particular slice of the frequency spectrum. But as we can see from the frequency chart, the nature of the instruments simply cannot guarantee this will happen on its own. Even if all you have is a bass guitar, drums, lead guitar and a singer, problems often creep in between drums and bass, bass and guitar, or guitar and vocals.
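One way to make that overlap concrete is to compute it. The fundamental ranges below are rough, assumed values for a hypothetical four-piece lineup (not measured data); the point is simply how wide the shared regions are before any overtones are even considered:

```python
from itertools import combinations

# Rough, assumed fundamental ranges in Hz (illustrative, not measured).
RANGES = {
    "bass guitar": (41.0, 392.0),
    "kick drum": (50.0, 100.0),
    "electric guitar": (82.0, 1319.0),
    "lead vocal": (100.0, 1000.0),
}

def overlap_hz(a, b):
    """Width in Hz of the region two (lo, hi) ranges share; 0 if disjoint."""
    lo = max(a[0], b[0])
    hi = min(a[1], b[1])
    return max(0.0, hi - lo)

# Print the shared bandwidth for every pair of instruments.
for (name1, r1), (name2, r2) in combinations(RANGES.items(), 2):
    shared = overlap_hz(r1, r2)
    if shared > 0:
        print(f"{name1} / {name2}: {shared:.0f} Hz shared")
```

Even with these conservative ranges, every pair of instruments in this lineup overlaps somewhere, which is why the arrangement, not the instruments themselves, has to carve out the roles.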
Then what happens when you add a second guitar, back up vocals and a piano? Or want to spice up the production with some strings or horns? Add a shaker and a synth? Or how about that B3 organ with stereo Leslie effect? It doesn’t take much before you could be learning a common definition of “mud” the hard way, by mixing too many instruments together without careful consideration in the arrangement and recording to their frequency-based roles.
It is up to us to arrange our songs so that our instruments have their own roles within the spectrum, or if we are mixing or producing someone else's arrangement, to build a mix that supports or produces such an arrangement. This is one of the first things we need to learn to listen for when producing our own musical recordings, and a key critical listening skill for anyone wanting to record hit songs with great sounding mixes.
Note that this does NOT mean that we must try to cram each instrument into its own narrow frequency range by EQing and filtering out every frequency outside that range. Our instruments need room for their resonances, harmonic overtones and other characteristic frequency formants to breathe, to let them sound like the musical instruments they are.
But it does mean that they each need their own space(s) to breathe without being masked by something else. Or, if we have two or more instruments that are doubling or tripling each other’s lines in the same key, that they blend together and share the overall spectral space in a pleasant-sounding way.
And the first way we can check for all this is to use our new critical and analytical listening skills with an unbiased ear towards the way the musical arrangement helps or hurts our mix, and equally, how our mix supports the musical arrangement.
There are many ways to analyze our mix with our ears beyond listening for frequency usage. Frequency is just one dimension of mix "space" to consider. Next time, we'll look at all four dimensions of mix "space" as four different yet interdependent ways to develop and focus our EnginEARing skills.