Hello everyone, hope you're all staying healthy right now. I just wanted to share a recent experiment/experience and perhaps get any feedback if this was the right thing to do?
When I first set up my studio, I was "told" to record at 96kHz, and I've been doing that for years since. I did some research beforehand, and nothing ever steered me away from it; I just knew I'd need a lot of processing power and drive space to suit that workflow. I never really considered plugin performance much at first, and aliasing didn't seem to be an issue at that sample rate.
So, last week between projects I thought I'd try taking the sample rate down to see if there was any difference in sound, or performance. Here's what I discovered:
1. Reaper resamples everything appropriately (so it seems), old projects/new projects, everything. Thank goodness!
2. I cannot hear the difference at all.
3. The performance gain is TREMENDOUS.
Because I get #3 along with #2, I am very, very pleased with the results. If I'm careful with filters and use oversampling appropriately, there's no glaring foldback and no major performance hit.
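For anyone wondering what "foldback" means concretely, here's a rough sketch (the frequencies are invented for illustration, not taken from any particular plugin) of where a component above Nyquist lands after sampling:

```python
def folded_freq(f, fs):
    """Return the frequency (Hz) at which a component at f Hz
    actually appears after sampling at fs Hz (aliasing foldback)."""
    f = f % fs                      # the spectrum repeats every fs
    return fs - f if f > fs / 2 else f

# A 19 kHz tone run through a nonlinearity produces a 3rd harmonic at 57 kHz.
# At 96 kHz that harmonic is representable; at 48 kHz it folds back down.
print(folded_freq(57_000, 96_000))  # 39000 -- ultrasonic, mostly harmless
print(folded_freq(57_000, 48_000))  # 9000 -- an audible, inharmonic alias
```

This is why oversampling inside a distortion or saturation plugin matters more than the project rate itself: the nonlinearity generates those out-of-band harmonics, and a plugin's internal filters can remove them before downsampling.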
The other thing I tried was enabling "Anticipative FX" processing. Previously this would cause major havoc during recording, but now everything is super smooth with no major performance hit. I'm able to run a recording session, then a mix/master session, without changing any buffer settings or preferences... and still get better performance overall.
So all in all, this one change (well, two changes including Anticipative FX) has been incredible for me. Wish I'd done this sooner. Did I do the right thing, or will I miss 96k later?
I dropped 96k to 48k years back because I could never tell without looking at what the sample rate was. There are some exceptions with some synth sounds etc. where aliasing can rear its head, but the only examples I've heard personally were extreme. Valid, but extreme.
That doesn't mean there aren't synths that sound better at 96k, but I don't use them enough to know, and it's kind of irrelevant unless someone can blind test themselves properly to be sure. Or better said: I'm on board if someone thinks 96k sounds better, but if they don't take the time to ABX it in 'some' form, they are likely wasting a boatload of CPU and disk space for little or no return.
__________________ Music is what feelings sound like.
The "pro studio standard" seems to be 24/96,* but the guys who do blind ABX tests have pretty well demonstrated that there's no audible difference between a "high resolution" original and a copy downsampled to "CD quality".
* The "24-bits" is only for the ADCs & DACs (and maybe the master render), with processing done in floating point. (As you may know, REAPER uses 64-bit floating point.) There are technical advantages to doing DSP, editing, and mixing in floating point. And if you are mixing, mixing is done by summation, so you can actually increase your true bit-depth resolution beyond what was recorded.
The "pro studio standard" seems to be 24/96,* but the guys who do blind ABX tests have pretty well demonstrated that there's no audible difference between a "high resolution" original and a copy downsampled to "CD quality".
Yes, but to play devil's advocate: you can create some simple, yet kind of out-there (still valid, it's music!) synth sounds where no ABX test is required to hear the difference due to aliasing, because anyone with functioning ears will hear it. It's that reliable IF you use a sound that causes it. Whether synths should even be doing that is another question; it seems like it 'should' be a non-issue in a properly designed synth, but I'm at the end of my expertise there.
I probably have a project here somewhere that demonstrates this, but I could have deleted it.
The "pro studio standard" seems to be 24/96,* but the guys who do blind ABX tests have pretty well demonstrated that there's no audible difference between a "high resolution" original and a copy downsampled to "CD quality".
* The "24-bits" is only for the ADCs & DACs (and maybe the master render), with processing done in floating point. (As you may know, REAPER uses 64-bit floating point.) There are technical advantages to doing DSP, editing, and mixing in floating point. And if you are mixing, mixing is done by summation, so you can actually increase your true bit-depth resolution beyond what was recorded.
You are recording at 64 bit floating point...? That would be a complete waste of hard drive space and I/O performance. Your mics, preamps and A/D converters won't deliver even 24 bit integer at full fidelity.
__________________
I am no longer part of the REAPER community. Please don't contact me with any REAPER-related issues.
You are recording at 64 bit floating point...? That would be a complete waste of hard drive space and I/O performance. Your mics, preamps and A/D converters won't deliver even 24 bit integer at full fidelity.
Is there a difference? I mean, it comes into Reaper however it comes from the interface, but once it's inside Reaper it's 64-bit in terms of volume resolution, right? Now I'm confused!
If your converters can only deliver 24-bit resolution (and usually it's less than that maximum bit depth, since the converters have a noise floor somewhat higher than the theoretical bit depth, the rest of the analog input circuitry contributes a bit of noise, and then there's whatever noise you capture during recording), what's the point in even recording to 32-bit floating point? You'd only capture your 24-bit (realistically 20-bit at best, given the noise floor) resolution. So even if you were capable of recording 64-bit floating point, that's just overkill. You'll only get the "realistically possible" resolution of your recording, which is much more limited. It won't become 64-bit floating point once it's in the DAW either; the noise floor keeps the file resolution where it is.
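To put the noise-floor point in numbers: each bit of integer PCM buys roughly 6 dB of dynamic range, so 24-bit already exceeds what any analog front end can deliver. A quick sketch:

```python
import math

def dynamic_range_db(bits):
    """Theoretical dynamic range of an N-bit integer PCM format:
    20 * log10 of the number of quantization levels."""
    return 20 * math.log10(2 ** bits)

print(round(dynamic_range_db(16), 1))  # 96.3
print(round(dynamic_range_db(24), 1))  # 144.5
```

Since even the best converters and preamps land somewhere around 120 dB of real-world dynamic range, the extra bits past 24 in a recorded file hold nothing but noise.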
Mixing all your audio in the DAW at 64-bit floating point allows headroom to "do whatever" pretty much, in terms of dynamic range. That's where it makes sense.
Recording at 32-bit floating point apparently has some potential advantage over recording at 24-bit, from what I've read, but I forget what it is since it doesn't apply to me. I vaguely recall that 32-bit floating point WAV files can actually use less CPU to manipulate in a floating point DAW and/or plugin, but of course the file sizes will still be larger than 24-bit, so there's that tradeoff.
Last edited by JamesPeters; 03-16-2020 at 12:05 PM.
Working with all your audio in the DAW at 64-bit floating point allows headroom to "do whatever" pretty much, in terms of dynamic range. That's where it makes sense.
Again, I'm confused here (apologies if it's supposed to be obvious), but if I'm recording at 24-bit from the interface, doesn't it go into Reaper at 64-bit? But you said it's still 24-bit... what?
I've edited the post to clarify. If your file is recorded with a noise floor significantly above 24-bit (which it most likely is anyway, no matter what resolution you choose in your interface), how can you "gain resolution" on that file? The resolution is inherently limited. In this case we're talking about the bit depth, which is limited by the noise floor (lowest signal level resolvable) and of course also by the clipping point (0 dB full scale digital). You'll only ever record maybe up to 120 dB of bit depth. That doesn't "increase" when you put it in the DAW. That would be magic. It would have to involve being able to somehow remove the noise floor and add information which was never there, obscured by the noise during recording.
Your files don't "become" 64-bit after recording them.
The DAW mixes within a 64-bit floating point system so that you can do things as crazy as cranking the gain 100 dB on the track. As long as the output of the master doesn't exceed 0 dB full scale (after lowering the volume elsewhere), it's all good.
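As it happens, Python's floats are the same IEEE-754 64-bit doubles, so the "crank it 100 dB and bring it back down" idea can be sketched directly (the numbers are illustrative, not from REAPER itself):

```python
def db_to_gain(db):
    """Convert a dB value to a linear gain factor."""
    return 10 ** (db / 20)

sample = 0.25                          # a sample comfortably in range
boosted = sample * db_to_gain(100)     # +100 dB: the value is now 25000.0
restored = boosted / db_to_gain(100)   # turn it back down before the output
print(boosted, restored)               # huge intermediate value, nothing clipped
```

In a fixed-point or integer pipeline, `boosted` would have clipped irrecoverably the moment it exceeded full scale; in 64-bit float it's just a big number that divides back down cleanly.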
Again, I'm confused here (apologies if it's supposed to be obvious), but if I'm recording at 24-bit from the interface, doesn't it go into Reaper at 64-bit? But you said it's still 24-bit... what?
There are two "64-bit"s in Reaper: the internal processing of the audio, and the glue/freeze setting. The former is how the audio stream/processing is handled internally (IIRC); the latter is the file written when gluing and so on.
For gluing etc., if the file is 32-bit or 64-bit float, then so long as the audio stays less than about 1000 dB over 0 dBFS, you can just turn it back down and all will be fine, which is great. That means you can never render/freeze/glue and clip the audio in the actual file it is going to.
How the audio is captured through the converter is a different thing.
__________________ Music is what feelings sound like.
Your files don't "become" 64-bit after recording them.
The DAW mixes within a 64-bit floating point system so that you can do things as crazy as cranking the gain 100 dB on the track. As long as the output of the master doesn't exceed 0 dB full scale (after lowering the volume elsewhere), it's all good.
Thanks, I've almost got it. I get that there's no "magic", but I'm still confused how the file can't be 64-bit yet the DAW mixes at 64-bit. The latter is, I think, the important thing (so I can go crazy with volume lol).
There are two "64-bit"s in Reaper: the internal processing of the audio, and the glue/freeze setting. The former is how the audio stream/processing is handled internally (IIRC); the latter is the file written when gluing and so on.
For gluing etc., if the file is 32-bit or 64-bit float, then so long as the audio stays less than about 1000 dB over 0 dBFS, you can just turn it back down and all will be fine, which is great. That means you can never render/freeze/glue and clip the audio in the actual file it is going to.
How the audio is captured through the converter is a different thing.
ahhhhhh ok now that makes more sense if there's separation, thank you.
Thanks, I've almost got it. I get that there's no "magic", but I'm still confused how the file can't be 64-bit yet the DAW mixes at 64-bit. The latter is, I think, the important thing (so I can go crazy with volume lol).
As far as the 64-bit "processing/audio engine" goes: it's a matter of how much numeric precision is available during processing. If you move a volume fader even a little, the math generates a lot of extra digits; if you process at 64 bits of precision, you only have to round at render time instead of every time the signal is changed, so the rounding errors never add up to anything significant.
I'm guessing a lot here, so someone else can most likely clear it up better than I can.
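The "round once at the end" idea can be sketched with a hypothetical low-precision path that re-quantizes to a 16-bit grid after every gain change, versus a double-precision path that rounds only once (all numbers invented for illustration):

```python
def q16(x):
    """Quantize a sample to a hypothetical 16-bit grid (1/32768 steps)."""
    step = 1.0 / 32768
    return round(x / step) * step

x_hi = x_lo = 0.3
for _ in range(1000):             # a thousand tiny gain rides
    x_hi *= 1.0001                # full precision throughout
    x_lo = q16(x_lo * 1.0001)     # re-rounded after every change
x_hi *= 1.0001 ** -1000           # undo all the gain in one step
x_lo = q16(x_lo * 1.0001 ** -1000)
print(abs(x_hi - 0.3))            # negligible at double precision
print(abs(x_lo - 0.3))            # the per-step rounding has accumulated
```

The double-precision path returns essentially exactly to 0.3; the repeatedly quantized path ends up measurably off, which is the whole argument for keeping the engine at high precision and only truncating at render time.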
Could somebody explain the difference between regular oversampling and the FIR processing found in DMG's Equilibrium?
Is this process trying to emulate some obscure analog transformer effects, or is it closer to regular oversampling?
What's the difference between, say, using a Blackman-Harris or a Kaiser window?
I can surely hear a difference when swapping from lower to higher impulse lengths/padding (say 4096 to 65536); it sounds cleaner/more precise, so I guess the more the better.
But what's it actually trying to do, and why don't most other EQs/plugins go this FIR route?
The shootout testing I did over 10 years ago:
Recording a vinyl album and having the recording sound exactly the same was my test. I figured this was equal parts a genuine hi-fi source (an audiophile pressing, and my vinyl setup is real), already-introduced analog generation loss (to give it that cumulative-error edge-case kind of thing), and just delicate, fiddly analog signal handling in general.
I had an Apogee PSX-100SE at the time. (The SE became standard, and the Rosetta has the same analog stages.)
At 48k I thought there was a little something something not quite the same.
At 96k it was identical to me in every way shape or form and I decided we were finally done here and in the golden age of audio.
But then I've reduced 96k program I've recorded or created myself to 48k or even 44.1k and I can't hear any difference. And I can convert it back to 96k and it will null with the original way down into the decimal dust. You have to turn the volume up to lethal levels to hear anything left.
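The null test described above can be sketched like this (the signals here are made up; the point is just how the measurement works):

```python
import math

def null_residual_db(a, b):
    """Peak level (dBFS) of the difference between two takes; -inf if identical."""
    peak = max(abs(x - y) for x, y in zip(a, b))
    return 20 * math.log10(peak) if peak > 0 else float("-inf")

# Hypothetical example: a 440 Hz tone and a copy carrying a tiny offset,
# standing in for round-trip sample rate conversion residue.
orig = [math.sin(2 * math.pi * 440 * t / 48000) for t in range(4800)]
copy = [x + 1e-7 for x in orig]
print(round(null_residual_db(orig, copy)))  # -140: "decimal dust" territory
```

A residual peaking around -140 dBFS is far below any converter's noise floor, which is why you'd have to turn the monitors up to dangerous levels to hear anything left after the null.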
So maybe edge cases can still be a thing. I set it to 96k when convenient but usually stay at 48k for anything live. Seems like the thing to do with a computer that's overpowered for audio.
My MOTU converters had a bigger more obvious difference between 96k and 48k. But there was an even bigger difference in the Apogee at ANY sample rate vs the MOTU at 96k. The Apogee analog front and back ends make for noticeably better sound no matter what you do.
Being purist about avoiding conversions and all that is the thing to do, but at the same time there's a bulletproof quality to this that lets you get away with a lot without much damage.
There should be nothing to fuss over missing with 48k. And even if you had an edge case, you'd still be sleeping fine with the end result knowing that I think.
PS: Those shrill, lo-fi-sounding CDs you hear were mastered with outrageous treble EQ boosts and limiting. 16-bit doesn't do that; someone did that intentionally. 16-bit may do some damage or have some limits, but it doesn't do that by itself! You won't be hearing any difference in blind tests with that kind of program either. (And HD formats certainly don't save such masters!) Find a symphony with some delicate depth-of-field sound or something if you want to go there.
Dan's video is mind-opening! So intermodulation can seriously screw things up rather than make them better at 96kHz.
But do higher sample rates help when it comes to time stretching and pitch shifting ?
Dan's video is mind-opening! So intermodulation can seriously screw things up rather than make them better at 96kHz.
But do higher sample rates help when it comes to time stretching and pitch shifting ?
zook
I do a lot of high-frequency recording for FX, and yes, it is worth doing. For example, some insects and bats have to be recorded at higher rates, e.g. 192kHz or even 384kHz sample rates. Obviously you need mics that can capture those frequencies. Here is a good demo of pitching down and mics: https://www.youtube.com/watch?v=e093pWoWCBs
But for usual mixing and recording etc I have used 48kHz for quite a while as most of my commissioned work is for video where 48kHz is the standard.
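The arithmetic behind why high sample rates pay off for pitched-down FX work can be sketched in a couple of lines (the function name is mine, not from any tool):

```python
def pitched_down_ceiling(sample_rate, octaves_down):
    """Highest original frequency that lands at the top of the playback band
    after slowing playback by the given number of octaves."""
    nyquist = sample_rate / 2
    return nyquist / (2 ** octaves_down)

# A bat call recorded at 192 kHz, played back 2 octaves slower:
# ultrasonic content up to 96 kHz now occupies the band up to 24 kHz.
print(pitched_down_ceiling(192_000, 2))  # 24000.0
```

At 48 kHz the same two-octave drop would leave nothing above 6 kHz, which is why these recordings genuinely need the higher rates even though normal music playback doesn't.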
I once added a comment in the FR forum, on a request to have Reaper provide a per-FX oversampling option, saying that it would be even better to automatically use the higher sample rate for whole groups of FX (to save CPU).
The video shows that this is a bad idea: Nyquist filters should always be provided between any two FX.
Did I do the right thing or will I miss 96k later?
You won't miss it unless you decide to track something while you're mixing, in which case you might notice that 48k has twice the latency of 96k. But there's no audible difference in sound quality between 48k and 96k (or between 44.1/16 and 384/32, for that matter).
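The latency point is easy to sanity-check: for a fixed buffer size in frames, doubling the sample rate halves the time each buffer represents. A rough sketch (ignoring converter and driver overhead, which add a bit more in practice):

```python
def roundtrip_ms(buffer_frames, sample_rate):
    """Rough round-trip latency for one input buffer plus one output buffer."""
    return 2 * buffer_frames / sample_rate * 1000

print(round(roundtrip_ms(256, 48_000), 1))  # 10.7
print(round(roundtrip_ms(256, 96_000), 1))  # 5.3 -- same buffer, half the time
```

Of course you can get the same effect at 48k by halving the buffer size, provided the machine can keep up.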
But there's no audible difference in sound quality between 44.1/16 and 384/32.
I don't think you can claim this, though the majority of people probably wouldn't notice, bearing in mind the typical listening scenarios and equipment used by the general public.
It's only really valid on transient-rich material. If you had a recording of ambient stuff with no rhythm or anything to generate sharp transients, I'm pretty sure you wouldn't notice a difference.
It's only really valid on transient-rich material.
Therefore, most of the music there is.. :P
Unless you're only into sinusoidal ambient moods, ofc.
But yeah, it's true that it will be more apparent in certain styles than others; anything with very rich/full mixes like proper orchestral/rock/jazz/acoustic music is usually going to have all kinds of instruments/timbres going on, so they will benefit a lot from using 24-bit.
The approach taken in production will have an effect too: if all the elements in a mix were super tightly filtered, EQ'd, leveled and dynamically adjusted, everything would be much more tidy and clean. So a very good mix/production can close the distance and make your 16-bit CD sound superb, almost as pleasing as a 24-bit mix would be.
The perceivable (not absolute) difference/benefit between 24-bit and 32-bit is quite a bit smaller, though.
Unless you're only into sinusoidal ambient moods, ofc.
But yeah, it's true that it will be more apparent in certain styles than others; anything with very rich/full mixes like proper orchestral/rock/jazz/acoustic music is usually going to have all kinds of instruments/timbres going on, so they will benefit a lot from using 24-bit.
The approach taken in production will have an effect too: if all the elements in a mix were super tightly filtered, EQ'd, leveled and dynamically adjusted, everything would be much more tidy and clean. So a very good mix/production can close the distance and make your 16-bit CD sound superb, almost as pleasing as a 24-bit mix would be.
The difference/benefit between 24-bit and 32-bit is quite a bit smaller, though.
Science and math prove that the only difference between bit depths in an audio recording is the noise floor, so if you think you're hearing more, your brain's pulling a fast one. This is not to say that there aren't headroom benefits to recording and mixing at higher bit depths, however. There are. But the accuracy of your sound waves is identical regardless of bit depth.
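That "only the noise floor moves" claim is straightforward to measure: quantize the same sine at two bit depths and compare signal power to quantization-error power. A small sketch (the 997 Hz test tone is an arbitrary choice of mine):

```python
import math

def quant_snr_db(bits, n=48000):
    """Measure the SNR of a full-scale sine quantized to the given bit depth."""
    step = 2.0 / (2 ** bits)            # quantizer step for a [-1, 1] range
    sig = err = 0.0
    for i in range(n):
        x = math.sin(2 * math.pi * 997 * i / n)
        q = round(x / step) * step      # round to the nearest quantizer level
        sig += x * x
        err += (x - q) ** 2
    return 10 * math.log10(sig / err)

print(round(quant_snr_db(16)))  # ~98 dB
print(round(quant_snr_db(8)))   # ~50 dB -- same waveform, higher noise floor
```

Both results track the textbook 6.02N + 1.76 dB figure; the waveform above the noise floor is the same either way, which is exactly the point being made.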
if you think you're hearing more, your brain's pulling a fast one. This is not to say that there aren't headroom benefits to recording and mixing at higher bit depths, however. There are. But the accuracy of your sound waves is identical regardless of bit depth.
Not "more", but let's say "better", which in the end also translates into "more", as the listener is able to perceive the details more clearly.
About the accuracy of the soundwave: yes, they pretty much make that point in the video. But I'd say there are limits to that beyond the base requirements of your SR, especially considering that real-life instrument timbres/music are not made of pure sinusoidal waves but of the most complex arrays of harmonics.
Why is it, then, that you can clearly hear the difference between any 16-bit mix and the same mix downgraded to 8-bit? (Regardless of the dithering algo, which can help A LOT, like the good iZotope stuff, but cannot do magic in itself.)
There's clearly a lower limit on how precisely those 8 bits can be spread to represent the soundwaves. The more bits, the truer the representation; likewise, the more complex and rich the source, the more bits are needed.
Science and math prove that the only difference between bit depths in an audio recording is the noise floor, so if you think you're hearing more, your brain's pulling a fast one. This is not to say that there aren't headroom benefits to recording and mixing at higher bit depths, however. There are. But the accuracy of your sound waves is identical regardless of bit depth.
Except that you can't pretend the full theoretical dynamic range of the bit depth is fully usable. Anything with a low enough level to drop below 8-bit resolution is getting distorted. If you want to compare this digital container to an analog system, you really have to leave a subset of the lowest bits as your "noise floor". 24-bit is so bulletproof because you can call the lowest 8 bits your noise floor and STILL have 16 bits to fully use for the program: 96 dB of true dynamic range with minimum 16-bit resolution. If you aren't critically careful with your dynamic range in a 16-bit system, you can end up with significant distortion. Properly frame the recording and, sure, you might be none the wiser and genuinely do no damage. Let part of the program dip below 8-bit resolution and you've got digital generation loss. Very much NOT identical regardless of bit depth!
Recording at 48k vs 96k may reveal some slight shortcomings in your AD converters. If you have the monitors and room to be able to hear it! Not a show stopper in any way though IMHO. Reducing to 16 bit to save hard drive space will bite you in the ass soon enough. There's just no good reason to do that. I don't think the CD format is quite ready to retire yet unfortunately but it's just one outlier to make a 16 bit master for. Record at 24 bit no matter what sample rate you use and keep it 24 bit for everything but the old CD version.
Science and math prove that the only difference between bit depths in an audio recording is the noise floor, so if you think you're hearing more, your brain's pulling a fast one. This is not to say that there aren't headroom benefits to recording and mixing at higher bit depths, however. There are. But the accuracy of your sound waves is identical regardless of bit depth.
So what was that horrendous chattering hash I heard as cymbals decayed when I used to monitor through an undithered 16 bit interface?
I do a lot of high-frequency recording for FX, and yes, it is worth doing. For example, some insects and bats have to be recorded at higher rates, e.g. 192kHz or even 384kHz sample rates. Obviously you need mics that can capture those frequencies. Here is a good demo of pitching down and mics: https://www.youtube.com/watch?v=e093pWoWCBs
But for usual mixing and recording etc I have used 48kHz for quite a while as most of my commissioned work is for video where 48kHz is the standard.
Thank you eddy... checking it out as we speak! Smashing laptops! Hahaha
A bit off topic... but I've always wondered which is more common, 44.1k or 48k? And why?
44.1k was the 'beginning' and became the standard for CDs:
Quote:
Why not 44, or a nice round number like 50? When the first engineers were inventing digital sound, they had worked out the on/off, 0/1 idea and needed a way to store it. The idea was to use dots on a TV screen, where a white dot was on and a black dot was off. Neat. So you record it like a video picture on a video recorder. That was fine, but the engineers had been caught out before. What about PAL (the European video standard) and NTSC (the American and Japanese standard)? They weren't going to get caught up in that again, no way, so they chose a number that was compatible with both 525-line NTSC and 625-line PAL, and that number was 44.1kHz. Just a piece of useless info you might want one day!
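The number itself falls straight out of the video arithmetic: in the PCM-adaptor scheme, 3 audio samples were stored per usable video line, and the usable-lines-per-second count happens to match between the two standards (the 245/294 line figures are the commonly cited ones for that scheme):

```python
# 3 samples per usable line, times usable lines per field, times fields per second:
ntsc = 60 * 245 * 3   # 60 fields/s x 245 usable lines/field x 3 samples/line
pal  = 50 * 294 * 3   # 50 fields/s x 294 usable lines/field x 3 samples/line
print(ntsc, pal)      # 44100 44100
```

Same figure either way, which is how one sample rate could ride on video recorders in both television regions.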