Old 05-15-2018, 10:15 PM   #1
sjs94704
Normalizing vocal tracks -- should I or shouldn't I? And why do you say that?

Title says all I wanted to ask .......
Old 05-15-2018, 11:23 PM   #2
nolman

It's the same as moving the fader, so I can't see why not.
Old 05-16-2018, 12:19 AM   #3
Eliseat

If you mean normalizing to 0 dB, then of course not!

Why? Because you can run into clipping problems if you use FX that increase the level. That clipping may not show up while you're mixing, but it can appear in the final render as crackling.

So you might think: that's no problem as long as I pull the mixer fader down. But you should avoid pushing your faders to extreme positions, because the further you move away from the 0 dB mark, the more dB each bit of fader travel adds or removes, so fine tuning gets more difficult. On top of that, clipping can also happen before your fader, right inside your FX chain, which can still end up as crackling even with the fader pulled down.

It's better to leave your mixer faders at 0 dB and "normalize" all media items and virtual instruments to about -18 dB (or even lower). That way they won't sum to an overly hot master channel, and you'll have enough room to use FX without fearing clipping. I even go a step further and make a rough mix just by leveling the instruments before the fader; only in the final mix do I start moving faders.
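To put rough numbers on that, here is a minimal Python sketch of the gain arithmetic behind "normalize to about -18 dB" (just the math, not anything REAPER does internally; the names and the test signal are made up for illustration):

Code:
# Minimal sketch: measure an item's peak and compute the single gain change
# that puts that peak at a chosen target (e.g. -18 dBFS). Plain NumPy only.
import numpy as np

def gain_to_peak_target(samples, target_db=-18.0):
    """Gain in dB that moves the highest peak of `samples` to `target_db`."""
    peak = np.max(np.abs(samples))
    if peak == 0.0:
        return 0.0                      # silent item: nothing to normalize
    peak_db = 20.0 * np.log10(peak)
    return target_db - peak_db

def apply_gain_db(samples, gain_db):
    """One constant multiplier for the whole item, so dynamics are untouched."""
    return samples * 10.0 ** (gain_db / 20.0)

# Example: an item peaking around -2 dBFS needs roughly -16 dB of gain
# to sit at -18 dBFS peak and leave headroom for FX.
item = 0.794 * np.clip(np.random.default_rng(0).standard_normal(44100), -1, 1)
print(f"{gain_to_peak_target(item, -18.0):.1f} dB of gain needed")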

That's my way. There are many others.

Greetings from Cologne

Old 05-16-2018, 01:57 AM   #4
Valle

If you need to normalize (a track), normalize. If you don't need to normalize, then leave it as is. Back in the 16 bit days, there may have been reasons not to normalize (some) tracks. Nowadays, it's just a gain up/down thing.

I don't follow the -18dBFS "rule", and I like to normalize all my tracks around -5dB to -3dB. For one, I can better visually follow the action in the audio, and plugins on similar sources will process more consistently.

Gain staging is the key word.
Old 05-16-2018, 02:53 AM   #5
uksnowy

I would argue that you shouldn't. Vocals can be, and often are, very dynamic. I have a song where the girl sings some lines relatively quietly compared to a huge, full-on chorus. If I normalise her vocals, that expression is all but lost.
Old 05-16-2018, 03:51 AM   #6
Judders

Quote:
Originally Posted by uksnowy
I would argue that you shouldn't. Vocals can be, and often are, very dynamic. I have a song where the girl sings some lines relatively quietly compared to a huge, full-on chorus. If I normalise her vocals, that expression is all but lost.
Normalizing doesn't change the dynamics at all. It just applies a gain change so that the highest peak is set to a given value. No expression is lost.

I think Valle summed it up, really: if you need to normalise, do it; if you don't, then don't bother. I know that sounds a bit obtuse, but if you don't find yourself thinking "this would be so much easier if these audio files had the same peak level", then don't worry about it. Back in the days of fixed-point DAW internal processing, you could run into problems by normalising, but these days it just doesn't matter one way or another.

As for seeing waveforms more clearly, you can do that non-destructively by using the zoom waveform action (shift + up arrow is default, I believe).

It's also worth remembering that peak values don't necessarily equate to perceived volume, so a peak normalisation isn't the way to go if you want all your audio to start off at the same perceived loudness (but I believe SWS extensions can normalise to RMS or LUFS levels if you want that).
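To illustrate both points, here is a small Python sketch: peak normalisation is one gain change (the crest factor, peak minus RMS, doesn't move), and two signals with identical peaks can still sit at very different RMS. The test signals are invented for the demo, and nothing here uses REAPER or SWS:

Code:
import numpy as np

def peak_db(x):
    return 20 * np.log10(np.max(np.abs(x)))

def rms_db(x):
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)))

def peak_normalize(x, target_db=-1.0):
    # One constant gain so the highest peak lands on target_db
    return x * 10 ** ((target_db - peak_db(x)) / 20)

sr = 44100
t = np.arange(sr) / sr
sustained = 0.5 * np.sin(2 * np.pi * 220 * t)   # dense, steady tone
spiky = np.zeros(sr)
spiky[::2000] = 0.5                              # sparse clicks, same peak value

for name, sig in (("sustained", sustained), ("spiky", spiky)):
    n = peak_normalize(sig, -1.0)
    # Crest factor is identical before and after: dynamics are untouched.
    print(f"{name}: crest {peak_db(sig) - rms_db(sig):.1f} dB before, "
          f"{peak_db(n) - rms_db(n):.1f} dB after, RMS now {rms_db(n):.1f} dBFS")
# Both now peak at -1 dBFS, but their RMS (and perceived loudness) are far apart.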
Old 05-16-2018, 05:01 AM   #7
sjs94704
Thanks to everyone for your input. It really helps!!

.........
Old 05-16-2018, 05:18 AM   #8
uksnowy

Quote:
Originally Posted by Judders
Normalizing doesn't change the dynamics at all. It just applies a gain change so that the highest peak is set to a given value. No expression is lost.
Agreed if you normalise the entire vocal track... but not if you normalise individual lines or verse/chorus segments. Trust me, I have fallen into that trap.
Old 05-16-2018, 05:26 AM   #9
Judders

Quote:
Originally Posted by uksnowy
Agreed if you normalise the entire vocal track... but not if you normalise individual lines or verse/chorus segments. Trust me, I have fallen into that trap.
Ah, okay, that makes sense.
Old 05-16-2018, 05:32 AM   #10
Eliseat

Quote:
Originally Posted by Judders
Normalizing doesn't change the dynamics at all. It just applies a gain change so that the highest peak is set to a given value. No expression is lost.

I think Valle summed it up, really: if you need to normalise, do it; if you don't, then don't bother. I know that sounds a bit obtuse, but if you don't find yourself thinking "this would be so much easier if these audio files had the same peak level", then don't worry about it. Back in the days of fixed-point DAW internal processing, you could run into problems by normalising, but these days it just doesn't matter one way or another.

As for seeing waveforms more clearly, you can do that non-destructively by using the zoom waveform action (shift + up arrow is default, I believe).

It's also worth remembering that peak values don't necessarily equate to perceived volume, so a peak normalisation isn't the way to go if you want all your audio to start off at the same perceived loudness (but I believe SWS extensions can normalise to RMS or LUFS levels if you want that).
Of course it matters. Sorry, but "modern DAWs no longer use fixed-point processing" is not an argument that everything is fine. I listed some arguments. It makes absolutely no sense to normalize an item to 0 dB peak level if you then have to turn it down again afterwards to avoid hot gain or clipping. And it's even worse if you suggest it for only one track.
Normalizing to a certain -x dB value is of course a good way to start. But that also makes no sense if it only gets applied to one track (the vocal track): then everything hits the master track too loud and you have to crank up the fader to get it even. No, that would be a bad decision.

sjs94704, normalizing can help you set many media items to a good, even starting level for mixing. As I posted above, there are good arguments for it. Plus, many plugins are input-level dependent and just sound bad when fed too much gain. And it's easier to lift the master bus level with a maximizer than to try to tame one that is running too hot.

Valle, you don't have to follow the -18 dB rule exactly, but -3 dB or -5 dB, like you said, is in my opinion useless. Why would you do that? It's nearly impossible to work with more than 20 tracks without cranking the faders down like crazy. I don't get it.
Old 05-16-2018, 05:39 AM   #11
Judders

Quote:
Originally Posted by Eliseat
Of course it matters. Sorry, but "modern DAWs no longer use fixed-point processing" is not an argument that everything is fine. I listed some arguments. It makes absolutely no sense to normalize an item to 0 dB peak level if you then have to turn it down again afterwards to avoid hot gain or clipping. And it's even worse if you suggest it for only one track.
Normalizing to a certain -x dB value is of course a good way to start. But that also makes no sense if it only gets applied to one track (the vocal track): then everything hits the master track too loud and you have to crank up the fader to get it even. No, that would be a bad decision.

sjs94704, normalizing can help you set many media items to a good, even starting level for mixing. As I posted above, there are good arguments for it. Plus, many plugins are input-level dependent and just sound bad when fed too much gain. And it's easier to lift the master bus level with a maximizer than to try to tame one that is running too hot.

Valle, you don't have to follow the -18 dB rule exactly, but -3 dB or -5 dB, like you said, is in my opinion useless. Why would you do that? It's nearly impossible to work with more than 20 tracks without cranking the faders down like crazy. I don't get it.
I see your arguments as non-issues. If you want to keep your faders at, or near, unity so that you get maximum fader resolution, then you can adjust the item volume. The idea that everything having the same peak value equates to having to move faders less doesn't make sense to me. When I mix, peak levels between tracks vary by huge margins.
Old 05-16-2018, 06:26 AM   #12
Eliseat

Yes, it makes no difference whether you level the items down via normalizing or via item level. You can do either to get a rough low-gain mix. But you shouldn't use normalization to maximize the peak to 0 dB, because it really makes no sense to crank something up if you then have to turn it down even further.

You just picked ONE argument out without reading the context. No, it's not all about not touching the faders. It's about making your life easier and less complicated. Maybe you have the time and enjoy moving every fader a hundred times, but people who work a lot and know how it can end up avoid that by starting with a low-gain mix. The -18 dB thing is not a stupid rule from some bored nerd on YouTube; it's a guide value that students worldwide learn.

And saying there is no issue because you can still change the item level while mixing really makes no sense. Mixing should be the last step before mastering. If you re-level the media items just before the final mix, everything goes crazy because the input levels of your FX also change. That's not mixing! That just creates a big mess. Just think of the input levels of compressors, amps or saturation plugins. No! Don't do that. Start with a low-gain mix, put it roughly together before the faders, and work from bottom to top: from tracks to buses to the master.

That's just my suggestion. Nobody has to follow it. But if not, I will spank your ...
Old 05-16-2018, 06:47 AM   #13
Judders

Quote:
Originally Posted by Eliseat
Yes, it makes no difference whether you level the items down via normalizing or via item level. You can do either to get a rough low-gain mix. But you shouldn't use normalization to maximize the peak to 0 dB, because it really makes no sense to crank something up if you then have to turn it down even further.

You just picked ONE argument out without reading the context. No, it's not all about not touching the faders. It's about making your life easier and less complicated. Maybe you have the time and enjoy moving every fader a hundred times, but people who work a lot and know how it can end up avoid that by starting with a low-gain mix. The -18 dB thing is not a stupid rule from some bored nerd on YouTube; it's a guide value that students worldwide learn.

And saying there is no issue because you can still change the item level while mixing really makes no sense. Mixing should be the last step before mastering. If you re-level the media items just before the final mix, everything goes crazy because the input levels of your FX also change. That's not mixing! That just creates a big mess. Just think of the input levels of compressors, amps or saturation plugins. No! Don't do that. Start with a low-gain mix, put it roughly together before the faders, and work from bottom to top: from tracks to buses to the master.

That's just my suggestion. Nobody has to follow it. But if not, I will spank your ...
If it works for you that's fine, but your arguments for it don't make sense. If you want all your audio to hit -18dB RMS, then peak normalisation will not do that for you. You should be using RMS normalisation for that.

Talking about upstream gain changes affecting downstream level-dependent processing is a red herring. The same is true for normalisation - if you normalise audio files upstream of level-dependent processing you will have the exact same problem.

I agree that good gain staging is worth doing, for meter and fader resolution mainly, but I don't get how peak normalisation helps in that regard. If, for example, my shaker now has the same peak level as my guitar solo, how has that saved me any time when it comes to mixing them?
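For clarity, here is a rough Python sketch of that distinction; the soundfile package and the file names are placeholder assumptions, not REAPER's or SWS's own API:

Code:
# Peak normalisation sets the highest sample to a target; RMS normalisation
# sets the average level to a target. Same audio, two different meanings.
import numpy as np
import soundfile as sf          # assumed third-party reader/writer

def normalize_peak(x, target_peak_db=-18.0):
    peak_db = 20 * np.log10(np.max(np.abs(x)))
    return x * 10 ** ((target_peak_db - peak_db) / 20)

def normalize_rms(x, target_rms_db=-18.0):
    rms_db = 20 * np.log10(np.sqrt(np.mean(x ** 2)))
    return x * 10 ** ((target_rms_db - rms_db) / 20)

# A shaker and a guitar solo peak-normalised to the same value still end up at
# very different RMS; RMS normalisation matches the average level instead
# (but beware: on spiky material it can push peaks over 0 dBFS).
audio, sr = sf.read("vocal_take.wav")                       # hypothetical file
sf.write("vocal_take_-18rms.wav", normalize_rms(audio, -18.0), sr)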
Old 05-16-2018, 07:14 AM   #14
Eliseat

Starting point! I was just talking about a good starting point.

And I don't use normalization myself, as I pointed out. I do my rough mix on the fly, starting in a low-dB area and adjusting levels wherever I can (item volume, plugin level, compressor output, etc.) without leveling every track to exactly -18 dB. I do it this way because it saves time. But starting in a low-dB area is important if you don't want to end up with a far too loud mix.

It also depends on what music you produce, how many tracks, and so on. And of course it's only a suggestion; I would never say my way is the holy grail. I even make demos without any gain staging, out of pure laziness. But the moment I seriously start to work, it is the first thing I do.

And my last suggestion is to stop the discussion about philosophies in this thread and let others answer the OP's question.
Old 05-16-2018, 07:23 AM   #15
karbomusic

Quote:
The -18 dB thing is not a stupid rule from some bored nerd on YouTube; it's a guide value that students worldwide learn.
It isn't a bad guide, but in strictly technical terms -18 dBFS RMS has nothing to do with what happens ITB. It's the result of converter manufacturers knowing that analog signals can go above analog zero, while digital can't go above 0 dBFS. The answer? Push analog zero down on the digital scale, so that analog zero corresponds to some -dBFS value during recording and the maximum analog level above zero still has room to exist ITB without clipping. That alignment usually falls between -20 dBFS and -12 dBFS RMS, depending on the sound card and its calibration. However...

There really is no reason to peak normalize to 0 dBFS unless it is some "final" version of the item/track that will receive zero further processing, so I agree with you on that basic premise. The best thing to do these days, IMHO, is to normalize to something like -18 to -12 LUFS (give or take); this leaves enough headroom for processing the track itself, and for the sum of all tracks not to push the master over zero as easily. It can be worthwhile to peak normalize to zero, select all the items, then back that off a few dB, but I prefer normalizing to LUFS because it takes how we perceive loudness into account. In all cases it is driven by need, on a case-by-case basis.

The bigger point is that if we record audio using the nominal levels of the hardware (preamp etc.), i.e. proper recording levels, then all of this fixes itself automatically, because the recording will by default land in that -18 to -12 dBFS RMS range.
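If you want to experiment with LUFS normalization outside REAPER, here is a sketch using the third-party pyloudnorm and soundfile Python packages; the file name and the -16 LUFS target are placeholders, and inside REAPER the SWS normalisation mentioned earlier in the thread covers similar ground:

Code:
# Loudness (LUFS) normalisation: gain the whole file so its integrated
# loudness lands on the target, rather than setting its peak or RMS.
import soundfile as sf
import pyloudnorm as pyln

data, rate = sf.read("vocal_take.wav")            # hypothetical file
meter = pyln.Meter(rate)                          # BS.1770 loudness meter
measured = meter.integrated_loudness(data)        # integrated loudness, LUFS
normalized = pyln.normalize.loudness(data, measured, -16.0)
sf.write("vocal_take_-16lufs.wav", normalized, rate)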
Old 05-16-2018, 07:28 AM   #16
Judders

Quote:
Originally Posted by Eliseat
And I don't use normalization myself, as I pointed out.
Then why are you arguing for it?

Quote:
Originally Posted by Eliseat
I do my rough mix on the fly, starting in a low-dB area and adjusting levels wherever I can (item volume, plugin level, compressor output, etc.) without leveling every track to exactly -18 dB.
That is pretty much exactly how I work too, so I'm not sure what we're arguing about.
Old 05-16-2018, 07:37 AM   #17
Eliseat

Just for fun of course.
Old 05-16-2018, 07:56 AM   #18
beatsbooster

Well, if you are talking about normalizing to 0 dB, I would say no, no, no. I would normalize to bring a bunch of tracks to the same peak level (not 0 dB, for sure); normalizing just one vocal track would not make sense, but normalizing a lot of vocal tracks and choruses to a good starting point, -18 or -12, it doesn't matter, makes much more sense. It's true that if you gain stage properly you wouldn't need to normalize, but sometimes you have to record a single part on different days because of time and such. Maybe you can recall the gain on the interface, but if an amp or other external gear is involved things get complicated, so then it makes sense to normalize a group of tracks.

Old 05-16-2018, 10:41 PM   #19
vdubreeze

The question has to be weighed against the reason for doing it. Context counts. Personally, I don't see what it achieves on tracks in a music mix. If something needs to be gained up and I want to leave the fader where it is, I usually have at least one, if not several, places to do it in the EQ or compressor gain already on the track (since the topic is vocals, those or something similar are always on it). If something was recorded very low, then normalizing it to -18 RMS or -8 peak so that it hits the track's FX at a sensible level, sure, maybe. If multiple vocals aren't level matched they can be brought closer by normalizing as well, but that's one of those things that doesn't really match the levels, it just gets them closer, so it becomes a two-step adjustment to matching levels rather than one.

I personally don't create new files with higher levels in a music mix. It's not necessary, and the benefit is visual or UI-related rather than solving an audio issue, so no. For voice-overs and narration, occasionally, if I'm dealing with performances from differing sources and want to get them into the same ballpark. But since they're on their own tracks anyway, there's usually no actual reason to do it.

It won't cause a problem, but as I see it there's no reason to do it except as level housekeeping.
Old 05-18-2018, 12:41 PM   #20
Valle

Quote:
Originally Posted by Eliseat
Valle, you don't have to follow the -18 dB rule exactly, but -3 dB or -5 dB, like you said, is in my opinion useless. Why would you do that? It's nearly impossible to work with more than 20 tracks without cranking the faders down like crazy. I don't get it.
Because, like I said: "For one, I can better visually follow the action in the audio, and plugins on similar sources will process more consistently." And, for the record, I never said I ALWAYS do it. I often do, but sometimes I leave it as is. It depends on the source/recording.

However, you may find it impossible and therefore useless. I don't. Nor do I end up "cranking the sliders down like crazy". Maybe your volume fader low-range is set too high (mine is at around -80 dB)...

EDIT: Just to be clear on my part – when I’m talking about normalizing I’m referring to non-destructive normalizing. I almost never edit audio destructively.
Old 05-18-2018, 01:35 PM   #21
Tod

I stopped normalizing a long time ago, and when Reaper got the great "Item Gain" control that it has now, it really became a moot point.

Of course that's not always the case; there are a multitude of scenarios that come up during the course of recording and mixing a project. However, I use the item gain control 99% of the time; there's no way to paint yourself into a corner that way.
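For anyone curious, here is a rough ReaScript (Python) sketch in that spirit: trim the item gain of every selected item non-destructively. It assumes REAPER's standard RPR_-prefixed Python ReaScript API, which is only defined when the script is run from inside REAPER, and the -6 dB amount is just an example value:

Code:
# Trim the item volume ("D_VOL", a linear factor) of all selected items by a
# fixed number of dB, without touching the audio files themselves.
# Assumption: REAPER injects the RPR_ functions when it runs the script.
TRIM_DB = -6.0
factor = 10.0 ** (TRIM_DB / 20.0)

RPR_Undo_BeginBlock()
for i in range(RPR_CountSelectedMediaItems(0)):
    item = RPR_GetSelectedMediaItem(0, i)
    vol = RPR_GetMediaItemInfo_Value(item, "D_VOL")   # current item gain, linear
    RPR_SetMediaItemInfo_Value(item, "D_VOL", vol * factor)
RPR_UpdateArrange()
RPR_Undo_EndBlock("Trim selected item gain by %.1f dB" % TRIM_DB, -1)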
Old 05-18-2018, 09:43 PM   #22
ChristopherT

I have never seen any professional audio engineer ever use normalize - in any circumstance.

Just sayin