It’s somewhat disconcerting to read so many articles and journals about audio engineering and music production that never lay out, in practical and exemplary terms, the inherent and symbiotic relationship between frequency (as it relates to sound), the phase coherence of sound, and sound’s melodic and harmonic properties.
Nikola Tesla once said: ‘If you want to find the secrets of the universe, think in terms of energy, frequency and vibration.’ This quote is both interesting and powerful to me (and should be to you too) because music encompasses all three of these elements. Sound in itself is energy, almost always manifested through resonance (or a simulation of it when working digitally), which is vibration. A guitar, violin or kologo string when plucked or picked, a drum head when hit, or a tambourine when shaken or struck sets off a vibration or resonance in the surrounding space; that vibration excites frequencies, and we perceive or hear it as sound (or music).
Sound is perceived through our sense of hearing. This discussion will have to be broken into several parts, as its scope is too vast to fathom in a brief article. I’ve deliberately tried to make it easy to read and digest without any condescending technicalities; to trim the fat, as I like to put it, without losing any of the everyday, relatable elements we so often fail to catch. It’s nearly impossible to discuss artistic or scientific concepts (which are really one and the same) without going back to some form of science. I’ll try to make this a little more exciting than sitting in a boring science class… someone cover me, I’m about to go in…
We need to understand that sound (and color) exist at frequencies far beyond what our senses of hearing and sight can handle. The human ear perceives a range from about 20 Hz (bass) to 20 kHz (treble). This is by no means the only range of sound that exists, but only that which is perceived by the most sensitive ears; most people can’t even hear this full spectrum. Let’s dive a little deeper. Sound has several properties: loudness, pitch, and timbre.
Loudness depends on the amplitude, or height, of sound waves: the greater the amplitude, the louder the perceived sound, and vice versa. Loudness is measured in decibels.
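Since decibels will come up constantly in this series, here is a minimal sketch of how linear amplitude relates to decibels. The reference level of 1.0 is an assumption purely for illustration; real-world dB scales (dB SPL, dBFS) each define their own reference.

```python
import math

def amplitude_to_db(amplitude, reference=1.0):
    """Convert a linear amplitude to decibels relative to a reference.

    Decibels are logarithmic: doubling the amplitude adds roughly 6 dB,
    halving it subtracts roughly 6 dB.
    """
    return 20 * math.log10(amplitude / reference)

print(round(amplitude_to_db(2.0), 2))   # doubling the amplitude: ~+6.02 dB
print(round(amplitude_to_db(0.5), 2))   # halving the amplitude: ~-6.02 dB
```

This is why a 6 dB boost on a fader or EQ band is often described as “twice as loud” in signal terms, even though perceived loudness is more complicated.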
Pitch is the quality that makes it possible to judge sounds as “higher” and “lower” in the sense associated with musical melodies. Pitch can be determined only in sounds that have a frequency that is clear and stable enough to distinguish from noise.
Timbre is what makes a particular musical sound have a different sound from another, even when they have the same pitch and loudness. For instance, it is the difference in sound between a guitar and a piano playing the same note at the same volume. Both instruments can sound equally tuned in relation to each other as they play the same note, and while playing at the same amplitude level each instrument will still sound distinct with its own unique tone color.
Now, to understand all the properties stated above, we have to introduce: Frequency. Frequency is the rate per second of a vibration constituting a wave, either in a material (as in sound waves) or in an electromagnetic field (as in radio waves and light).
Now, let’s focus on Sound Frequency.
Frequency is the rate of the vibration produced when particles are excited by touching, striking, hitting or plucking an object (such as clapping your hands, throwing a stone or playing a musical instrument), and it determines the pitch of the sound. Pitch is only useful or meaningful for musical sounds, where there is a strongly regular waveform. Frequency is measured as the number of wave cycles that occur in one second, and its unit of measurement is the hertz (Hz for short).
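The cycles-per-second definition, and the way musical pitch maps onto frequency, can be sketched in a few lines. The equal-temperament formula below takes the common A4 = 440 Hz concert-pitch reference as an assumption; other tuning references (432 Hz, etc.) exist.

```python
import math

def frequency_from_period(period_seconds):
    """Frequency in Hz is the number of wave cycles completed per second."""
    return 1.0 / period_seconds

def note_frequency(semitones_from_a4):
    """Equal-temperament pitch: each semitone step multiplies frequency
    by the twelfth root of two, taking A4 = 440 Hz as the reference."""
    return 440.0 * (2 ** (semitones_from_a4 / 12))

print(frequency_from_period(0.01))    # a wave repeating every 10 ms -> 100.0 Hz
print(round(note_frequency(3), 2))    # C5, 3 semitones above A4 -> 523.25 Hz
print(round(note_frequency(-9), 2))   # middle C (C4), 9 below A4 -> 261.63 Hz
```

This is the same relationship the instrument frequency chart further down encodes: each key on the chart is just a named point on this logarithmic frequency ladder.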
In audio engineering and music production, an equalizer is the circuit or piece of equipment used to achieve equalization. Equalizers (in all their styles, functions and flavors) are typically used to manipulate sound by affecting its properties; specifically, its frequency content. Since equalizers adjust the amplitude of audio signals at particular frequencies, they are, in other words, frequency-specific volume knobs.
One of the main reasons for writing this article is to have people look at equalization differently from the way they’re very likely accustomed to: to use it not only to reduce or boost frequencies, but to track pitch, as a creative (and musical) way to follow the pitch of an audio signal and have your boosts and cuts move musically with it to enhance a piece, or an element, of music. Think of it as painting a picture (a song or performance) with color (tone and pitch) to negotiate harmony, contrast, or a bit of both in a musical piece.
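As a rough sketch of what “tracking pitch” could mean in code, here is a deliberately naive autocorrelation pitch estimator that maps a monophonic signal to its nearest equal-tempered note (taking A4 = 440 Hz as the reference). An EQ bell could then be centred on the detected note. This is a simplified illustration, not a production-grade pitch tracker.

```python
import math

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def estimate_pitch(samples, fs, fmin=50.0, fmax=1000.0):
    """Naive autocorrelation pitch estimate (Hz) for a monophonic signal.

    Finds the lag at which the signal best correlates with a delayed copy
    of itself; that lag is the period of the dominant pitch.
    """
    min_lag = int(fs / fmax)
    max_lag = int(fs / fmin)
    best_lag, best_corr = 0, 0.0
    for lag in range(min_lag, max_lag):
        corr = sum(samples[i] * samples[i + lag]
                   for i in range(len(samples) - lag))
        if corr > best_corr:
            best_corr, best_lag = corr, lag
    return fs / best_lag if best_lag else 0.0

def nearest_note(freq):
    """Map a frequency to the nearest equal-tempered note name (A4 = 440 Hz)."""
    midi = round(69 + 12 * math.log2(freq / 440.0))
    return NOTE_NAMES[midi % 12] + str(midi // 12 - 1)

fs = 8000
signal = [math.sin(2 * math.pi * 110 * n / fs) for n in range(1600)]
detected = estimate_pitch(signal, fs)
print(nearest_note(detected))  # a 110 Hz tone is the note A2
```

In the Bob Marley examples discussed below, think of this as what a skilled engineer does by ear: hearing where a kick or snare sits melodically and placing boosts and cuts to serve that pitch, not just the frequency band.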
Image: Instrument Frequency Chart courtesy of LANDR.
Note: the keys, their corresponding frequencies, and the suggested range of each instrument’s notes
An observation I’ve had for years about Bob Marley’s recordings: notice how round, full, musical and punchy (I hate these adjectives as descriptors, as they can be somewhat vague), yet melodic, the kicks tend to be? I realized this came from a great choice of players (plus tuning and technique), mic choice and, most importantly, the placement of said microphones, alongside the use of equalizers to sculpt the sound. Not just broad frequency moves, but specifically equalizer notches and bells (very obvious on analogue gear, in my experience) used to accentuate specific harmonic content and enhance the performance. It isn’t only about creating space for elements in a song by manipulating frequencies, as typically suggested in most audio engineering journals and literature; the same boosts and cuts can amplify or reduce harmonic content and pitches to bring the songs to life.
To illustrate this point, listen to ‘Zimbabwe’ by Bob Marley and hear the sound of the kick and snare. It’s also important to note that the ARRANGEMENT (more about this later) of a piece of music plays an important role in pulling this off: arrangement determines which elements (and, by proxy, which frequencies) interact, or don’t, at any given time, which may or may not leave ample room for these moves to be expressed.
In the example stated above, the kick has a very obvious melodic ‘key’ that stands out, yet sits and works with all the other elements: the snare (also melodic), the percussive elements and the bass, in a tightly integrated, streamlined yet effective sequence, not much different from the intricate workings of a properly designed mechanical watch, its movement’s precision calculated when wound.
You hear contrast, harmony and co-existence in a balanced, musical manner that is hard to find in a lot of music today because of a shift in how we think about frequencies. That approach is, in itself, perfectly suited to certain music; but we need to look critically at the relationship between the technical decisions we make with our tools of trade, how symbiotic those decisions are with the musicality of what we put out, and how the music ultimately affects the senses of the listener.
Also, ‘Top Rankin’’ and ‘Running Away’ by Bob Marley illustrate this phenomenon very clearly.
Please listen to ‘Zimbabwe’, ‘Top Rankin’’ and ‘Running Away’ below:
This multiple-part article is only meant to be used as a guide for experimentation. In my experience, what works for one thing won’t necessarily work, or be ideal, for another. The only rule is ‘there are no rules’; BUT LEARN and MASTER the rules, then bend said rules to suit your intent, vision and pursuits. Equally important is the need for critical listening, ear training and trusting your ears! Have a great weekend!
To be Continued…
By Kofi ‘iambeatmenace’ Boachie-Ansah
(Twitter & Instagram: @iambeatmenace)