DSP Training: Think Before You Click
One of the key elements of meeting room audio today is Digital Signal Processing, or DSP for short. Whether DSP is an invisible feature of the microphone, a stand-alone box, or software running on a PC, it plays a big part in determining whether speech sounds intelligible and natural.
The prevalence of DSP in today’s meeting and collaboration environments has caused AV installers and IT technicians alike to add DSP training to their skillset. In some cases the focus is on how to configure DSP processing blocks in a particular brand or model of DSP unit, such as the IntelliMix P300 audio conferencing processor. But even before diving in and clicking on processors in your hardware or software, it’s important to have a solid understanding of how these audio processing blocks work and why they are used.
The Big Five types of audio DSP are automatic mixing, automatic gain control, echo cancellation, noise reduction, and delay.
How Automatic Mixing Works
Automatic mixing is the process of selecting the best microphone in the room for a particular talker. This is important because the closest microphone picks up the clearest sound, and keeping just one microphone open at a time reduces noise and reverberation. While these artifacts are generally not noticeable to people in the same room, they are often very distracting to listeners on the far end of the call.
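As a rough illustration only (not how any particular product implements it), a simple gating automixer can be sketched in a few lines of Python. The function name, frame size, and attenuation value here are illustrative assumptions: the mixer estimates each microphone's level for the current audio frame, keeps the strongest channel open, and attenuates the rest.

```python
import numpy as np

def automix(mic_frames, off_attenuation_db=-40.0):
    """Gate-style automatic mixer sketch: keep the strongest mic open,
    attenuate the others, and sum everything to one output frame.

    mic_frames: 2-D array, shape (num_mics, samples_per_frame).
    Returns (index of the open mic, mixed output frame).
    """
    # Estimate each mic's level for this frame (RMS).
    levels = np.sqrt(np.mean(mic_frames ** 2, axis=1))
    best = int(np.argmax(levels))  # the "closest" (loudest) mic wins
    gains = np.full(len(mic_frames), 10 ** (off_attenuation_db / 20.0))
    gains[best] = 1.0              # only the winning channel stays open
    return best, (gains[:, None] * mic_frames).sum(axis=0)
```

Running this frame by frame over live audio, with some hold time so the gate does not chatter between talkers, is the basic idea behind gated automatic mixing; gain-sharing automixers use a different (continuous) gain law.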
What Automatic Gain Control Does
Automatic gain control (or AGC) adjusts the level of each microphone (or of the incoming far site audio) to ensure consistent volume. The AGC turns quiet talkers up and turns loud talkers down, which compensates for variances in distance between the talker and the microphone.
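The "turn quiet up, turn loud down" behavior can be sketched as a feedback loop that nudges a channel gain toward whatever value would put the signal at a target level. This is a minimal sketch, assuming a frame-based design; the target level, attack constant, and function name are illustrative, and real AGCs add features like speech detection and maximum-gain limits.

```python
import numpy as np

def agc_gain(frame, target_dbfs=-20.0, prev_gain=1.0, attack=0.1):
    """One AGC update step: move the channel gain toward the value
    that would bring this frame's RMS level to target_dbfs.
    'attack' controls how quickly the gain is allowed to change."""
    rms = np.sqrt(np.mean(frame ** 2)) + 1e-12   # avoid log of zero
    level_db = 20 * np.log10(rms)
    desired = 10 ** ((target_dbfs - level_db) / 20.0)  # gain to hit target
    return prev_gain + attack * (desired - prev_gain)  # smoothed update
```

Applied repeatedly, the gain settles above 1 for a distant, quiet talker and below 1 for someone speaking loudly right into the microphone.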
How Echo Cancellation Eliminates Echo
Acoustic echo cancellation (AEC) eliminates the annoying echo that can occur in a videoconference. Echo happens when sound coming out of a loudspeaker is picked up by a microphone and transmitted back to the originator, who hears a delayed echo of themselves. While most videoconferencing applications (like Microsoft Teams, Zoom, etc.) have a single-channel AEC built in, it is usually inadequate for use with multiple microphones in meeting rooms and classrooms. The ideal situation is to have a separate AEC dedicated to each microphone channel.
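Under the hood, an AEC uses the far-end signal (what is being played out of the loudspeaker) as a reference: it learns a model of the loudspeaker-to-microphone echo path and subtracts the predicted echo from the microphone signal. The sketch below shows the classic NLMS adaptive-filter approach in Python; the filter length, step size, and function name are assumptions, and production AECs add double-talk detection, nonlinear processing, and delay estimation on top of this core.

```python
import numpy as np

def nlms_aec(mic, far, taps=64, mu=0.5, eps=1e-6):
    """NLMS adaptive echo canceller sketch: model the echo path with a
    short FIR filter and subtract the predicted echo from the mic signal."""
    w = np.zeros(taps)            # current estimate of the echo path
    out = np.zeros(len(mic))
    for n in range(taps, len(mic)):
        x = far[n - taps:n][::-1]        # most recent far-end samples
        echo_est = w @ x                 # predicted echo at the mic
        e = mic[n] - echo_est            # residual = near-end speech + error
        w += mu * e * x / (x @ x + eps)  # normalized LMS adaptation
        out[n] = e
    return out
```

Because the filter keeps adapting, it can track slow changes in the room, such as people moving or a door opening, which is why AEC is described as "converging" rather than being set once.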
What Noise Reduction Removes
Electronic noise reduction (sometimes abbreviated as NR) filters out the steady background noise that is present in most meeting rooms. This can come from HVAC systems, equipment like projectors or computers, or traffic or environmental noise seeping in from outside. A DSP with noise reduction can digitally remove this sort of noise to a surprising degree, making even longer videoconferences comfortable for listeners.
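One classic way to remove steady noise is spectral subtraction: estimate the noise's magnitude spectrum during silence, then subtract that estimate from each frame of audio. This is a minimal single-frame sketch assuming an FFT-based design; the floor value and function name are illustrative, and modern noise reducers use more sophisticated statistical or machine-learning methods.

```python
import numpy as np

def spectral_subtract(frame, noise_mag, floor=0.05):
    """Spectral subtraction sketch for one audio frame:
    subtract the estimated steady-noise magnitude spectrum,
    keeping a small spectral floor to avoid artifacts."""
    spec = np.fft.rfft(frame)
    mag = np.abs(spec)
    # Subtract the noise estimate, but never go below a fraction
    # of the original magnitude (reduces "musical noise" artifacts).
    clean_mag = np.maximum(mag - noise_mag, floor * mag)
    # Rebuild the signal with the original phase.
    return np.fft.irfft(clean_mag * np.exp(1j * np.angle(spec)), n=len(frame))
```

The key constraint this illustrates is that noise reduction works best on *steady* noise: the noise estimate is only valid if the noise spectrum changes slowly compared to speech.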
How Delay Keeps Audio and Video in Sync
Delay synchronizes the audio signal with the video. The video takes more time to be processed before it can be transmitted over the network, so the audio of someone speaking might be heard before their lips move on the screen. An adjustable delay in the DSP takes care of this mismatch so that audio and video are aligned.
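Delay is the simplest of the five blocks: the DSP just holds the audio in a buffer for a fixed number of milliseconds before passing it on. A minimal sketch, with an assumed function name and sample rate:

```python
import numpy as np

def delay_audio(signal, delay_ms, sample_rate=48000):
    """Delay audio by a fixed number of milliseconds so it lines up
    with the later-arriving video (lip sync)."""
    n = int(round(delay_ms * sample_rate / 1000.0))
    # Prepend n samples of silence, then trim to the original length.
    return np.concatenate([np.zeros(n), signal])[:len(signal)]
```

In practice the installer dials in the delay time (often by eye, watching lips against audio) until the picture and sound line up.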
Understanding the basic principles of digital signal processors is essential for effective implementation and deployment. From the fundamentals covered here to the latest advancements in the field, knowing how DSP works can improve the quality of videoconferences and online meetings for all participants.
For more information about audio DSP, check out these other helpful posts and videos:
- Why Digital Signal Processing is a Game Changer for Audio Conferencing Hardware
- Audio Processing - Why Your Video Conference Can’t Afford to be Without It
- Why Automatic Mixing is Crucial for Conferencing
Want to dig into more DSP Training? Visit https://www.shure.com/en-US/support/shure-audio-institute/online-training