Avoiding Common Audio Production Pitfalls
Don’t let widespread misconceptions about audio bog you down. Music production lecturer and certified Ableton trainer DUSTIN RAGLAND debunks some common myths – and offers pointers on how to learn from them.
Young audio engineers, musicians and producers are learning to be technical artists in a field that is always in a state of flux and fragmentation. Whereas my early mistakes might have been contained to a studio session, or a barely seen club show, my students often fail with high visibility, played back on a loop in a digital age that never forgets.
My aim as a musician and educator is to encourage them through these mistakes, not past them, so they can become better audio workers on the other side.
Still, it’s easy to let yourself be distracted and your production derailed simply by doing what everyone else is. Here are a few mistakes to avoid:
Myth 1: Analog saturation is always right for digital signals.
Analog equipment and plugin emulations of analog gear are often used on the assumption that their circuits exist only to be pushed to their saturation points. What is often missing is an understanding of the musical results of these non-linearities: the addition of complex harmonics to the original signal, which can turn inharmonic or dissonant, and the softening of transients, which may not suit an arrangement with dense percussive elements.
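The harmonic side of this is easy to see in a few lines of code. The sketch below uses `tanh()` as a generic stand-in for a soft-clipping circuit (real hardware and plugin models differ) and shows that driving a pure sine tone through it creates a third harmonic that was not in the original signal:

```python
import numpy as np

# A minimal sketch of why saturation adds harmonics.
# tanh() is a common textbook stand-in for soft-clipping;
# it is NOT a model of any particular piece of gear.

sr = 48000                      # sample rate in Hz (assumed)
f0 = 1000                       # test-tone frequency in Hz
t = np.arange(sr) / sr          # one second of samples
clean = np.sin(2 * np.pi * f0 * t)
driven = np.tanh(3.0 * clean)   # push the "circuit" hard

def harmonic_level(signal, harmonic):
    """Spectral magnitude at a given multiple of f0.

    With a 1-second window, FFT bin index equals frequency in Hz.
    """
    spectrum = np.abs(np.fft.rfft(signal)) / len(signal)
    return spectrum[f0 * harmonic]

# The clean sine has essentially no energy at the 3rd harmonic;
# the saturated version does - new content the source never had.
print(harmonic_level(clean, 3), harmonic_level(driven, 3))
```

Those added harmonics land on exact multiples of a single test tone here, but on a full chord or a detuned synth they fall on multiples of every partial at once, which is where the inharmonic, dissonant build-up described above comes from.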
Myth 2: The production process matters more than the result.
Young engineers now have immediate access to valuable techniques from both accomplished and emerging producers through online sources. However, constant awareness via social media of how productive everyone else seems to be can result in rushing through stages of recording and sound design. Or, conversely, it can stall you, because a project has to be more novel or obscure than what someone else has just done, regardless of the musical context or the artist’s vision.
Myth 3: Every sound in a song can take up all the space it wants.
When synthesized electronic instruments form part or all of a production, individual stereo instances can occupy a massive amount of sonic space in both frequency and dynamics. Temporal and frequency masking occur easily between tracks, so faders get pushed up, sounds cluster, and the arrangement becomes suffocating and static.
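The usual remedy is to carve out space rather than push faders: filter each sound down to the range it actually needs. The sketch below, using invented example signals (a bass tone and a pad that both pile energy below 200 Hz), shows how high-passing the pad hands that range back to the bass. The first-order filter is a crude stand-in for the EQ you would reach for in a DAW:

```python
import numpy as np

# A sketch of two tracks masking each other in the low end,
# and of carving space with a high-pass filter.
# Signals and cutoff values are illustrative assumptions.

sr = 48000
t = np.arange(sr) / sr
bass = np.sin(2 * np.pi * 80 * t)                              # bass at 80 Hz
pad = np.sin(2 * np.pi * 80 * t + 0.5) + np.sin(2 * np.pi * 440 * t)

def low_band_energy(x):
    """Energy below 200 Hz (1-second window, so bin index == Hz)."""
    spectrum = np.abs(np.fft.rfft(x)) / len(x)
    return float(np.sum(spectrum[:200] ** 2))

def high_pass(x, cutoff):
    """First-order (6 dB/oct) high-pass filter, RC-style recursion."""
    rc = 1.0 / (2 * np.pi * cutoff)
    alpha = rc / (rc + 1.0 / sr)
    y = np.zeros_like(x)
    for n in range(1, len(x)):
        y[n] = alpha * (y[n - 1] + x[n] - x[n - 1])
    return y

# Both tracks crowd the same low band until the pad is filtered,
# leaving that frequency range to the bass alone.
overlap_before = min(low_band_energy(bass), low_band_energy(pad))
overlap_after = min(low_band_energy(bass),
                    low_band_energy(high_pass(pad, 300)))
print(overlap_before, overlap_after)
```

The same thinking applies in time and stereo width as well as frequency: two sounds can share a range if they take turns, or if one is narrowed while the other spreads.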
Myth 4: The language and traditions of previous generations must be received uncritically.
Whether it’s the terminology within music technology like “digital/real,” “amateur/professional,” or “producer/beatmaker,” young musicians face a world of language that defines them before they have a chance to pick their own path. It’s up to all of us to encourage more just, creative and innovative traditions of audio engineering.
As the producer and engineer Ebonie Smith says: “Most of the time I learn and gain far more from being inconvenienced than I do from getting everything I want right away.”
This is an essential mindset I hope to embody and teach to any student of music and sound: listen to your mistakes as the first of many acts of listening in a musical life.
Dustin Ragland is a full-time lecturer at the Academy of Contemporary Music at the University of Oklahoma.
Words and images: Dustin Ragland
This article originally appeared in the print edition of LOUDER Music Makers.