Cockos Incorporated Forums > REAPER Forums > REAPER General Discussion Forum

Old 12-08-2018, 09:47 AM   #1
JohnnyMusic
Human being with feelings
 
Join Date: Sep 2014
Location: Twin Cities, Mn
Posts: 384
Mastering and Rendering Process: separate passes or one go? And when to dither?

Hello,
-My question is regarding basic best practice for maintaining the best possible sound quality when rendering a mix for final distribution:
-I have read on this forum and elsewhere that one should render a mix at the same bit depth and sample rate as the project, say 24-bit/96 kHz, or 32-bit floating point if possible.
-My question is: after that, once I set up some mastering processes and am preparing to render to 16-bit/44.1 kHz or MP3, should I render the mastering processes as a separate step, or apply the mastering processes and the conversion to the final format in one go?

-Further, should I apply dithering at any of these steps, and if so, which ones? -Again, the goal is the highest sound quality (not concerns about speed or computing power, etc.).
Thanks for any input on this!
John
Old 12-08-2018, 10:30 AM   #2
bladerunner
Human being with feelings
 
Join Date: Sep 2007
Location: Kent, UK
Posts: 4,846

My advice - even as a mastering engineer - is to try both methods (doing the mastering processes as a separate step vs. applying the mastering processes and the conversion to the final format in one go), and do null tests with the results. Let your own experience inform you. I've done plenty of testing myself - and, believe me, there's a lot of BS out there on forums about 'quality'.
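If you want to script the null test instead of eyeballing meters, here's a minimal sketch in Python/NumPy (assuming both renders are already loaded as equal-length float arrays; the file loading is left out):

```python
import numpy as np

def null_test(a, b):
    """Subtract two equal-length renders and report the residual peak
    in dBFS; -inf means the renders are bit-identical (a perfect null)."""
    peak = np.max(np.abs(a - b))
    return float('-inf') if peak == 0 else 20 * np.log10(peak)

# Two identical "renders" null perfectly...
x = np.sin(2 * np.pi * 440 * np.arange(44100) / 44100)
print(null_test(x, x))   # -inf

# ...while a copy quantized to a 16-bit grid leaves a low-level residual.
y = np.round(x * 32767) / 32767
print(null_test(x, y))   # around -96 dBFS (quantization error)
```

Keep in mind a non-null only proves the two methods differ; it says nothing about which one sounds better.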
Old 12-08-2018, 11:40 AM   #3
JohnnyMusic
Human being with feelings
 
Join Date: Sep 2014
Location: Twin Cities, Mn
Posts: 384

Thanks for your input, bladerunner.
-Obviously just trying the different methods and seeing if I can hear a difference is common sense.
-I was more getting at whether there is a generally accepted way of doing this, and it sounds like you are saying there isn't, so I'll just try different methods and see what gives the best result.
-I haven't tried to run my own null tests.
So you're saying render the song using each method, then null test to see how different they really are?
If they don't null, that doesn't really tell me which is better, does it? What does it tell me other than that the methods are not identical?
Thanks!
John
Old 12-08-2018, 12:55 PM   #4
Bri1
Banned
 
Join Date: Dec 2016
Location: England
Posts: 2,432

Quote:
-My question is: after that, once I set up some mastering processes and am preparing to render to 16-bit/44.1 kHz or MP3, should I render the mastering processes as a separate step, or apply the mastering processes and the conversion to the final format in one go?

Hello - you can just do it all in one go; it should be fine as long as your playback buffer is steady, no crackles etc.
More samples and bits per second = higher quality. But the thing is: how many media players can play back above 96 kHz/32-bit? The higher you go, the fewer playback devices will support those rates, and some web browsers will refuse playback.
44.1 kHz/16-bit was just the standard for CD - as far as I know there are no actual standards for playback and file formats.
It's exactly the same for MIDI interfacing and LUFS - there are only recommendations, no actual standard for everyone to follow.

The whole point of LUFS was that when adverts came on between movies or radio broadcasts, they would be at the same level as the movie/broadcast - yet not many follow those recommendations in practice even today; competition for loudness is still a thing.
Oh, and add dither at final render time. Funnily enough, REAPER is one of the rarer programs that has options to dither and noise-shape 24-bit WAV files! Weird, right?
Old 12-08-2018, 01:23 PM   #5
ashcat_lt
Human being with feelings
 
Join Date: Dec 2012
Posts: 7,272

Every render except for the very last one should always be to floating point. Then you don't need dither. When you render the final fixed point distribution file, you choose the bit depth to match the medium and engage dither.
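For the record, that last step - quantizing float audio to fixed point with dither - boils down to something like this (a generic TPDF-dither sketch in NumPy; REAPER's own dither options live in the render dialog and may differ in detail):

```python
import numpy as np

def to_16bit(x, dither=True, seed=0):
    """Quantize float samples (-1.0..1.0) to 16-bit integers, optionally
    adding TPDF dither (triangular noise, about +/-1 LSB peak) first."""
    rng = np.random.default_rng(seed)
    scale = 32767.0
    if dither:
        noise = rng.uniform(-0.5, 0.5, x.shape) + rng.uniform(-0.5, 0.5, x.shape)
        x = x + noise / scale
    return np.clip(np.round(x * scale), -32768, 32767).astype(np.int16)

# On a quiet tone, undithered quantization error is correlated with the
# signal (audible distortion); dither trades it for benign, constant noise.
tone = 0.001 * np.sin(2 * np.pi * 1000 * np.arange(48000) / 48000)
plain = to_16bit(tone, dither=False)
dithered = to_16bit(tone, dither=True)
```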

Whether or not there is a render that is not the final distribution file is kind of up to you. When I'm doing a one-off thing, I very often just stick all my mastering crap on the mix project, tweak until I have the DR I want and peaks hitting my chosen target (usually -0.6 dBFS), then render to 24-bit and/or 16-bit and/or MP3 (all with dither engaged) right from there.

If I'm doing a collection like an "album" out of separate project files, it's more convenient to render each piece separately at 32 or 64bit fp and bring them all into a mastering session where I can lay them out, make individual tweaks, and add master buss stuff to help them all sound like they belong together and flow nicely and such. Then I make regions and render those to my chosen output format with dither.

One important point is that I don't (anymore) render to one fixed point format and then convert that to another. Like I said, if I want 24 and 16bit versions and also an MP3, I render them each separately from the (floating point) project file itself. You have to be careful about that if there's a lot of active randomness in your project, but sometimes that's fun too. Like an Easter egg for nerds who care.
Old 12-08-2018, 01:35 PM   #6
Bri1
Banned
 
Join Date: Dec 2016
Location: England
Posts: 2,432

tbh it's more about sample rate than bit depth for final renders... 16-bit with dither can give back roughly 96 dB of signal (more with noise shaping), which is just about good enough for most people. The more samples you give a signal, the better it can form a continuous wave; transient peaks actually get smoother as the rate increases.

The problem with higher sample rates comes at the recording/mixing stage: computers just cannot keep up with the amount of information if there are many things happening per second.
See how many tracks you can mix at 44.1 kHz, then at 96 kHz - your computer speaks louder than any words.
One track playing back at 96 kHz should be no problem even for a slow system.
Old 12-08-2018, 01:41 PM   #7
ashcat_lt
Human being with feelings
 
Join Date: Dec 2012
Posts: 7,272

I don't want to argue the "merits" of higher sample rates - it's settled science, and I know where I sit.



What I am willing to say is that sample rate conversion always causes damage, so you should try to only do that once, and probably as late in the process as possible so that your intermediate processing doesn't make the artifacts worse.
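To make the "convert once, as late as possible" point concrete, here's the bookkeeping of a single 96 kHz to 44.1 kHz conversion (sketched with a naive linear-interpolation resampler for illustration only; a real mastering-grade SRC uses a polyphase/windowed-sinc filter):

```python
import numpy as np

def resample_linear(x, sr_in, sr_out):
    """Naive linear-interpolation resampler -- shows the bookkeeping,
    not the filtering a real sample-rate converter would do."""
    n_out = int(round(len(x) * sr_out / sr_in))
    t_out = np.arange(n_out) * (sr_in / sr_out)   # output times, in input samples
    return np.interp(t_out, np.arange(len(x)), x)

x = 0.5 * np.sin(2 * np.pi * 1000 * np.arange(96000) / 96000)  # 1 s at 96 kHz
y = resample_linear(x, 96000, 44100)   # done once, at the end of the chain
print(len(x), len(y))   # 96000 44100
```

Every pass through a converter adds its own filtering artifacts, which is why doing this once at the end beats bouncing through intermediate rates.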
Old 12-08-2018, 01:49 PM   #8
Bri1
Banned
 
Join Date: Dec 2016
Location: England
Posts: 2,432

Quote:
I don't want to argue the "merits" of higher sample rates - it's settled science.

Yep - there's zero point arguing with oneself, eh.
JohnnyMusic is simply gathering info from others - they can take it or leave it, like we all can. Either thankfully or not. =)
Old 12-08-2018, 02:08 PM   #9
Tod
Human being with feelings
 
Join Date: Jan 2010
Location: Kalispell
Posts: 14,745

Quote:
Originally Posted by ashcat_lt View Post
I don't want to argue the "merits" of higher sample rates - it's settled science, and I know where I sit.
Ha ha, where do you sit, ashcat? Personally, I've been using 44.1 kHz/24-bit for years and never had a problem.
Old 12-08-2018, 02:15 PM   #10
ashcat_lt
Human being with feelings
 
Join Date: Dec 2012
Posts: 7,272

Quote:
Originally Posted by Tod View Post
Ha ha, where do you sit, ashcat? Personally, I've been using 44.1 kHz/24-bit for years and never had a problem.
Yep.
Old 12-08-2018, 11:00 PM   #11
JohnnyMusic
Human being with feelings
 
Join Date: Sep 2014
Location: Twin Cities, Mn
Posts: 384

Thanks much for sharing everyone, lots of helpful info.

-Regarding sample rates, I read somewhere that if I could mix at 96 kHz it would sound better, or maintain more detail, or something. So I just do it, because my computer can handle it with the projects I'm doing: probably 40-50 tracks, a fairly large buffer, anticipative FX processing on, monitoring through hardware.
-Edit: (I just read that Dan Lavry, a converter designer, says that too high a sample rate isn't necessary and can even cause problems/distortion, because there is too much data to process fast enough, or something like that. However, the exact sample rate at which that occurs isn't exactly known. I am now much more likely to go down to a lower sample rate the next time I start a new project. For now, though, I am right in the middle of an 11-song project and I don't think it would be good to start mixing different sample rates, so I will stay where I am with these already-started projects.)
-I did also read that I should keep the higher sample rate and bit depth as long as possible, i.e. until the final render, if possible. Is that true for both sample rate and bit depth?
If my projects get larger and start to bog down the computer, I'll try 44.1 for sure.
Thanks again, everyone. Helps a lot.
John

Last edited by JohnnyMusic; 12-08-2018 at 11:22 PM.
Old 01-17-2019, 04:49 AM   #12
Pablo Sound
Human being with feelings
 
Join Date: Jan 2019
Location: Spain
Posts: 62

Quote:
Originally Posted by JohnnyMusic View Post
-I did also read that I should keep the higher sample rate and bit depth as long as possible, i.e. until the final render, if possible. Is that true for both sample rate and bit depth?
For bit depth, it makes no sense to work at 16-bit for recording/mixing, even for a project that will end up on CD, because you would be working with less audio quality for the entire process. The deal is to work at 24-bit (or even better, 32-bit FP), and if at the end of the chain you need to render the session/tracks for a CD, render at 16-bit then.
Old 01-17-2019, 07:23 AM   #13
MRMJP
Human being with feelings
 
Join Date: May 2016
Posts: 2,065

+1 for maintaining the native sample rate and floating point bit-depth until after all the processing is done (aside from dither of course).

Also +1, when mastering an EP or album, for rendering the project in one full pass at the native sample rate and floating-point bit depth to print the plugin processing. My main reason for this is that when you have tracks with overlapping or continuous audio, you can avoid any clicks or glitches between the tracks if all the plugin processing (other than dither) is done in one pass.

This is true in WaveLab, and I've helped a few REAPER users with this issue as well. When plugins have to start and stop for each track render and the audio is overlapping or connected, it's very likely you'll get a glitch at the track transitions if you go track by track and then line up the master files back to back, as they eventually will be for the digital release.

WaveLab makes it easy to render your source project to a new project in a single pass and from this point you can add the required dither and render track by track without issue, or to DDP etc.

There is no single best way to work but generally speaking, I prefer to upsample to 96k (using a mastering grade sample rate converter) if things come in at 48k or 44.1k because I find that the processing of plugins and my analog equipment sounds best at x2 sample rates. Anything that comes in at 88.2k or higher gets processed at the native sample rate and then downsampled as needed after all the processing (besides dither of course) is done.

WaveLab makes this easy. I haven't tried it in REAPER because of its general lack of other mastering-focused features, so I really only use REAPER for the initial plugin processing before the analog gear, capturing to a new track, and doing some spot editing as needed with RX as REAPER's external editor, before trimming up the captures from analog and finalizing in WaveLab.
__________________
REAPER, just script it bro.
Old 01-17-2019, 03:34 PM   #14
DVDdoug
Human being with feelings
 
Join Date: Jul 2010
Location: Silicon Valley, CA
Posts: 2,779

Quote:
Mastering and Rendering Process, separate passes or one go?
When mixing, the mixed levels are unpredictable. For that reason, I'd say it's best to render to floating point (which can go over 0 dB without clipping) or to 24-bit with plenty of headroom. Then normalize (or otherwise adjust the levels) as a separate mastering step.

But if you are using a master-track limiter as part of your mixing process, that may not be necessary.



One thing many people don't realize about bit depth: mixing increases resolution. Mixing is done by summation, so if you mix two (full-volume) 16-bit tracks, you've got 17 bits' worth of data/resolution!
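You can verify the arithmetic directly (plain Python, just counting bits):

```python
# A full-scale positive 16-bit signed sample is 32767 (15 magnitude bits + sign).
full_scale = 32767
mix = full_scale + full_scale   # summing two full-volume tracks
print(mix)                      # 65534
print(mix.bit_length() + 1)     # 17: 16 magnitude bits plus the sign bit
```

This growth is one reason DAWs sum internally in 32/64-bit float: the extra bits are absorbed without clipping.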
Old 01-17-2019, 07:03 PM   #15
JohnnyMusic
Human being with feelings
 
Join Date: Sep 2014
Location: Twin Cities, Mn
Posts: 384
A few follow-up questions

MRMJP:

1) Regarding rendering an entire album in one pass for songs that overlap, to prevent glitches from plugins - are you referring to plugins on the master bus?

2) I have a question about a Waves plugin, the multiband limiter L3-16.
It has a "quantize" menu that allows selecting up to 24-bit, with no option to turn it off. What does quantize mean, and does it defeat the benefit/purpose of keeping the processing at 32-bit floating point within REAPER?
Thanks!
Old 01-17-2019, 08:28 PM   #16
vdubreeze
Human being with feelings
 
Join Date: Jul 2011
Location: Brooklyn
Posts: 2,613

Quote:
Originally Posted by JohnnyMusic View Post
MRMJP:

2) I have a question about a waves plugin, the multiband limiter L3-16.
It has a "quantize" menu that allows selecting to quantize up to 24 bit, and no option to turn it off. What does quantize mean and does this defeat the benefit/purpose of keeping the processing at 32 bit floating point within reaper?
Thanks!
Just going from memory, but I believe the L3 doesn't output higher than 24 bits (and that the quantizing is always on, at whatever setting the menu shows), even when the DAW is running at a higher floating-point depth. But I don't think this is an issue, since it applies to the output rather than the internal processing. It doesn't alter the processing going on before the L3, and there isn't going to be more than 24 bits of audio coming out into a new file anyway. The way Waves puts it makes it sound like something unfortunate is happening to the audio, but it's no different from anything else outputting audio.

Umm, right??
__________________
The reason rain dances work is because they don't stop dancing until it rains.
Old 01-17-2019, 08:44 PM   #17
MRMJP
Human being with feelings
 
Join Date: May 2016
Posts: 2,065

Quote:
Originally Posted by JohnnyMusic View Post
MRMJP:

1) Regarding the rendering an entire album in one pass for songs that overlap to prevent glitches from plugins, are you referring to plugins on the master bus?

2) I have a question about a waves plugin, the multiband limiter L3-16.
It has a "quantize" menu that allows selecting to quantize up to 24 bit, and no option to turn it off. What does quantize mean and does this defeat the benefit/purpose of keeping the processing at 32 bit floating point within reaper?
Thanks!
1) Both, really. A while ago I was mastering a record with a few overlapping songs; the DDP renders were OK, but when I rendered track by track from the same project there was a little tick at the track transitions when the WAVs were lined up back to back again, which would be a problem for the digital release.

Then it occurred to me that a DDP render is technically one full render pass, and that is what avoided the glitches, so I developed a workflow that does this on the front end so it's never a problem for any of the rendered files and master formats. It also gives me an easy way to double-check the plugin processing in general - sometimes what you hear on playback is not what gets rendered, due to bugs etc.

The only thing on my master bus when I finally render track by track master WAV files is Goodhertz Good Dither.

A few REAPER users on this forum (and FB groups) have had a similar issue and when I suggested this workflow/workaround, it solved the problem for them.

2) I have the Waves L3-16 but I've never really used it so I'm not familiar with it. The manual says:

"The Quantize control sets the target bit depth of the L3-16 output. Quantize is always active, so the output of the L3-16 will be quantized to a maximum of 24-bits even if you are in a floating point environment."

For this reason alone I'd probably not use it in a mastering context but if it works for you, that's great.

If you're unsure about the actual bit-depth of your audio stream, you can use the free Stillwell Bitter plugin:

https://www.stillwellaudio.com/plugins/bitter/

Or, Goodhertz Good Dither is not only a great dithering plugin, but also has a bit-depth meter:

https://goodhertz.co/good-dither

19 Bucks.
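For anyone curious what such a bit-depth meter is doing under the hood: roughly, it checks whether the incoming float samples all sit on a fixed-point grid. A crude sketch of that idea (my own illustration, not Bitter's or Good Dither's actual algorithm):

```python
import numpy as np

def effective_bits(x, max_bits=32):
    """Return the smallest bit depth (1..max_bits) whose fixed-point grid
    holds every sample exactly; max_bits if none does (treat as float)."""
    for bits in range(1, max_bits + 1):
        scale = 2.0 ** (bits - 1)
        if np.array_equal(x, np.round(x * scale) / scale):
            return bits
    return max_bits

x = np.sin(2 * np.pi * np.arange(1000) / 100)
x16 = np.round(x * 32768) / 32768    # float samples sitting on a 16-bit grid
print(effective_bits(x16))           # 16
```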
__________________
REAPER, just script it bro.