Effectively Managing Breaths in Music Production 🚀
- Romain Raynal
- Jun 8, 2023
- 4 min read
Updated: Mar 27
After the article on voice processing with Melodyne and Auto-Tune, let’s focus on another tedious task: processing breaths. Let’s try to understand why this is sometimes necessary and how to make life easier!

Why Is Managing Breaths Essential in Music Production?
Why would we want to get rid of a singer’s breaths? After all, they are natural, and completely removing them could ruin the performance and make it sound artificial. Don’t worry; the goal here is more about learning how to control them.
Sometimes, the performer may not control their breaths very well, and they stand out “too much” compared to the rest of the performance. It’s also possible that strong compression – by raising the level of the quieter passages – can bring them out. It can also be an aesthetic choice.
Let’s see how we can approach this task to make it less overwhelming. And let’s take a look at plugins that could help make things easier.
The Manual Method: 🔧
Just as with voice processing in Melodyne, the manual approach is more tedious, but the result is more natural when each breath is handled case by case. Let’s take a look at how to do it in Cubase.
The first step is to visually identify the breaths in the waveform, so you don’t have to listen to everything, saving you a lot of time.
To do this, enlarge the track size and also the waveform display size. Now, you can more easily spot the breaths. They are smaller than the rest of the signal and tend to have a shape similar to Mont-Saint-Michel!
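If you are curious what “quieter than the rest of the signal” means in numbers, here is a small illustrative Python sketch (not a Cubase feature; the function name and the -30 dB threshold are my own assumptions) that flags low-energy frames much the way your eye does on the waveform:

```python
import numpy as np

def find_quiet_regions(samples, sr, frame_ms=20, threshold_db=-30.0):
    """Flag frames whose RMS level falls below a threshold (in dBFS).

    Breaths are usually much quieter than sung notes, so low-energy
    frames between louder passages are good candidates to inspect.
    Returns a list of (time_in_seconds, is_quiet) tuples.
    """
    frame = max(1, int(sr * frame_ms / 1000))
    flags = []
    for start in range(0, len(samples) - frame + 1, frame):
        chunk = samples[start:start + frame]
        rms = np.sqrt(np.mean(chunk ** 2))
        level_db = 20 * np.log10(rms) if rms > 0 else -np.inf
        flags.append((start / sr, level_db < threshold_db))
    return flags
```

A region flagged as quiet is not automatically a breath, of course; it is simply where you should look first, exactly as when scanning the waveform by eye.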

⚠️ Be careful not to increase the audio signal gain, but only the visualization. If you’re unsure how to do this, refer to your software’s manual.
Now that you’ve spotted them, you’ll need to process them. From my experience, for the lead vocal, I only reduce the gain by a few dB, while for the background vocals, I simply remove them. This keeps the lead vocal natural (but controlled) and prevents the multiple breaths in the background vocals from piling up.
Lead Vocal: 🗣️
For each breath, you need to isolate it and lower its gain. Every software has its own specifics. This is the simplest method, but also the longest. Here’s how to do it in Cubase:


Let’s take a look at how, thanks to Cubase macros, I was able to automate this task. The goal is – using the selection tool – to split the event to isolate the breath. Then, each line of the macro reduces the volume by 1 dB. Adjust this depending on the context. For me, -12 dB works in most cases.
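The arithmetic behind such a gain reduction is simple: a change of X dB multiplies the samples by 10^(X/20). As a small NumPy illustration (my own helper, not part of Cubase), a -12 dB cut leaves roughly a quarter of the original amplitude:

```python
import numpy as np

def reduce_gain_db(samples, db=-12.0):
    """Scale an isolated breath region by a gain expressed in decibels.

    -12 dB corresponds to multiplying by 10 ** (-12 / 20), about 0.251,
    i.e. roughly a quarter of the original amplitude.
    """
    return samples * (10.0 ** (db / 20.0))

breath = np.array([0.4, -0.3, 0.2])   # a toy "breath" region
quieter = reduce_gain_db(breath, -12.0)
```

This is why twelve 1 dB macro steps compound into one noticeable but still natural-sounding reduction.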

Background Vocals: 👥
As I mentioned earlier, for background vocals, I prefer to simply delete them. Especially if there are a lot of them, this avoids the overlapping (often out-of-sync) breaths. Of course, this depends on the context, and sometimes I decide to leave a few.

Miracle Plugins? ✨
Plugin developers have looked into this issue, and today there are a few tools on the market, with varying degrees of effectiveness. Most of them are intended for post-production, so they aren’t always well suited to music production. Let’s take a look at two of them, starting with the controls offered by Waves DeBreath:
- Breath Threshold: DeBreath stores a database of breath models. The scale goes from 1 to 100; the higher the value, the more likely the event is a breath.
- Reduction: Controls the level of reduction applied to detected breaths.
- Energy Threshold: Evaluates an audio event based on its energy. If the event’s energy is below the threshold, it’s more likely to be a breath.
- Fade In: Controls how quickly the plugin reduces the intensity of a detected breath. A shorter fade results in a faster reduction, while a longer fade gives a smoother, more gradual one.
- Fade Out: Determines how long the plugin keeps reducing the intensity of a breath after detection. A shorter fade restores the level quickly, while a longer fade prolongs the reduction.
- Room Tone: Replaces the removed or attenuated breath with a small amount of white noise. This eliminates unwanted gaps in the track and makes breath removal sound much more natural.

iZotope’s plugin offers two main controls:
- Target Level: Sets the desired breath level in the final mix. When the plugin detects a breath, it automatically reduces it to reach the specified target level.
- Sensitivity: Controls the plugin’s responsiveness to breaths. Higher sensitivity means the plugin will detect and reduce more breaths, while lower sensitivity will reduce fewer.
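To make these concepts concrete, here is a toy Python sketch of how reduction, fades, and room tone fit together. This is purely illustrative and is in no way Waves’ or iZotope’s actual DSP; the function name and all parameter values are my own assumptions:

```python
import numpy as np

def attenuate_region(samples, start, end, sr,
                     reduction_db=-12.0, fade_ms=10.0,
                     room_tone_db=-60.0, rng=None):
    """Toy breath attenuator: drop a detected region by `reduction_db`,
    ramp the gain in and out over `fade_ms`, and add faint white noise
    as room tone so the dip doesn't sound like a hole in the track."""
    out = samples.copy()
    gain = 10 ** (reduction_db / 20)
    fade = int(sr * fade_ms / 1000)
    n = end - start
    envelope = np.full(n, gain)
    if fade > 0 and 2 * fade <= n:
        # Fade in: gain ramps smoothly from 1.0 down to the reduced level.
        envelope[:fade] = np.linspace(1.0, gain, fade)
        # Fade out: gain ramps back up to 1.0 at the end of the region.
        envelope[-fade:] = np.linspace(gain, 1.0, fade)
    out[start:end] *= envelope
    # Room tone: very low-level white noise fills the attenuated gap.
    if rng is None:
        rng = np.random.default_rng(0)
    noise = rng.standard_normal(n) * 10 ** (room_tone_db / 20)
    out[start:end] += noise
    return out
```

Even this naive version shows why the fade and room-tone controls matter: without them, the gain change is an audible step and the attenuated region sounds unnaturally dead.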
As you can see, Waves’ plugin is much more comprehensive. In practice, this results in a more natural and less destructive outcome. In fact, iZotope’s plugin – in addition to frequently “missing” breath detections – tends to quickly introduce unwanted sound artifacts.
However, even though DeBreath by Waves is very effective, some words are sometimes mistakenly detected as breaths (and vice versa), which often requires creating automations to “follow” the performer’s interpretation.
As you can see, in a musical context, it’s faster and more effective to manually process breaths. The result is much more natural, and with this guide and a little practice, it shouldn’t take up too much of your time. 🚀
I hope this article has given you a better understanding of managing breaths in music production. I also invite you to read my other articles for more tips to improve your productions.
Feel free to comment and share on your social networks, and stay tuned – more articles are coming soon! 🙏