11-07-2010, 06:00 PM   #16
Human being with feelings
Join Date: Aug 2006
Posts: 2,012

Originally Posted by kelp
OK, I wish to mix my new smash hit. And I wish it to have the highest possible "warm" "analog" sound quality. It consists of multiple tracks of a -20dBFS RMS pink noise WAV file, uncorrelated no less, at 44.1kHz. (thanks, Bob Katz) Trust me. This song is going to be awesome.

Anyway, I add the first track, with the fader at 0, and say, "Aha! This track is peaking at around -11.5dBFS. It's too hot!" So I open the media item properties and bring the gain down by 8.22dB. There. That looks better.

Now, on the master track I've inserted "JS: Meters/vumeter", "JS: Meters/dynamics_meter", and "JS: Liteon/vumetergfx". Pretty! Everything looking good!

But I have unlimited tracks! Let's use them! Now I duplicate the track seven times, for a total of eight tracks. The meters are showing about -13 RMS and about -1.5 peak. Hey, I even left a little headroom for the mastering engineer! This is guaranteed to sound fantastic and super-analog!

OK. Thanks for indulging me. That was pretty ridiculous. But is this the technique (oops! I mean "rule") we're talking about?

Project file attached...
I don't know whether this is a problem on my end or yours, but I don't seem to have any audio with your test project.

That said, in a conceptual sense, it sounds like the makings of a smash hit. I would suggest recording 120bpm quarter-note hits of an 808 kick sample, then triggering hard compression on the pink-noise instrumental track with the kick keyed to the side-chain input to get that big pumping club effect. Then find a 16-year-old girl to sing some foul-mouthed "get back" lyrics about how hot she is, apply autotune liberally, and you should be all set (after multiband look-ahead limiting to within -3dBFS, of course). The pink-noise genre is huge right now.

Having said all that, I think you might be barking up the wrong tree with your test example. You're not going to get any difference in sound quality by gaining up or down within REAPER, or any other modern floating-point audio engine.
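To put a rough number on that: in a double-precision float engine, trimming a track down and then back up by the same amount reconstructs the original samples to within rounding error, hundreds of dB below the signal. Here's a quick sketch in plain Python standing in for the mixer math (the 8.22dB figure is just borrowed from your example):

```python
import math

def db_to_gain(db):
    """Convert a dB value to a linear gain factor."""
    return 10.0 ** (db / 20.0)

# A fake "track": one cycle of a sine at an arbitrary level.
track = [0.5 * math.sin(2 * math.pi * n / 64) for n in range(64)]

# Gain the track down 8.22dB, then back up 8.22dB.
down = db_to_gain(-8.22)
up = db_to_gain(8.22)
processed = [s * down * up for s in track]

# In 64-bit floating point the round-trip error is vanishingly small,
# on the order of 1e-16, i.e. more than 300dB below the signal.
worst = max(abs(a - b) for a, b in zip(track, processed))
print(worst)
```

Any "warmth" difference you hear from fader moves alone is coming from somewhere else in the chain, not from the float math.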

Allow me to try and re-state what I think you're referring to...

Going back to analog (forget about digital for the moment), we have a needle to thread when it comes to electrons moving across copper wire and through vacuum tubes and around iron transformer cores and the like. The more copper and iron and power and so on, the more noise (in the form of random movement of surrounding electrons) we get. So there is always a sliding-scale tradeoff between noise and headroom: the more power-handling capacity (i.e., headroom) we add, the more noise we introduce. And expanding that usable "window" between the noise floor and distortion drives cost up exponentially.

So what analog manufacturers did, over the course of many decades and largely driven by the telephone company's need to deliver clean audio over thousands of miles of thin copper cable, was to settle on a certain "average" signal level that devices were supposed to be built "around," so to speak. The precise spec depends a little bit on what country you're in and where your gear was made and so on, but the idea was always to design and manufacture gear that was intended to sound best within a certain input range, and to deliver a similar output range. (For the record: 0dBV is referenced to 1 volt RMS, while 0dBu is referenced to 0.775 volts RMS, the voltage that delivers 1 milliwatt into the old 600-ohm telephone line impedance, which is why the common "+4dBu" pro line level works out to about 1.23 volts.) Anyway, the idea was that every device should basically input and output the same electrical signal at its "ideal" operating range.
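The math behind those reference levels is simple: dBu is referenced to 0.775 volts RMS (1 milliwatt into the old 600-ohm telephone line) and dBV to 1 volt RMS. A quick sketch of the conversions:

```python
import math

DBU_REF = 0.775   # volts RMS: 1mW into 600 ohms, the telephone-era standard
DBV_REF = 1.0     # volts RMS

def dbu_to_volts(dbu):
    """RMS voltage corresponding to a level in dBu."""
    return DBU_REF * 10.0 ** (dbu / 20.0)

def volts_to_dbv(volts):
    """Level in dBV corresponding to an RMS voltage."""
    return 20.0 * math.log10(volts / DBV_REF)

line_level = dbu_to_volts(4.0)   # the common "+4dBu" pro nominal level
print(round(line_level, 3))      # ~1.228 volts
```

Same formula, different reference voltage, which is why "+4dBu" and "-10dBV" gear can disagree so audibly about what "nominal" means.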

Now, analog manufacturers had and still have very wide latitude to decide what "ideal operating range" means. Some, such as telephones, have very low headroom and narrow bandwidth, and are designed to crank out the maximum signal in the upper-midrange, for maximum speech intelligibility over the noise floor of 3,000 miles of copper cable. Others, such as the unbalanced passive EQs favored by some mastering engineers, are designed to have the simplest, cleanest, and most linear possible circuit for use in short-run, noise-shielded environments. In both cases, the ideal "operating range" is determined by the expected input and output signal strength, measured in terms of average voltage or current.

Now, the nature of analog is that there are no "hard" cutoffs. Noise dissipates gradually, but never completely. Saturation similarly occurs gradually. There is probably no better example of this than when a music-shop owner in postwar London named Jim Marshall decided to make knockoffs of American guitar amplifiers, which were expensive due to exchange rates between then-depressed England and the booming American post-war economy. He copied the circuit from a Fender Bassman, using cheaper locally-available tubes, and sold them in his shop under his own name. A guy named Jimi Hendrix (American, as it happened) came in and discovered that by cranking the volume to overload these cheaper European tubes, he could get a sound he liked even better than the "cleaner," more "hifi" sound of the Fender originals.

Mr. Hendrix went on to have some success as a popular entertainer, and Jim Marshall's knockoffs have even attracted some customers who could afford a lower-distortion Fender Bassman.

The point is that analog deals in ranges, not fixed thresholds where pristine sound crosses the line into ugly clipping. And analog manufacturers design equipment to function around these "ranges," and have broad latitude to do so. Some analog devices, such as Neve or Trident preamps, are regarded as performing very favorably when "pushed hard," much like Jim Marshall's amplifiers. Others offer lots of clean headroom for transients but start to sound strangled, fizzy, or bad when average signal levels get much above zero on the meter. Which is another thing about analog: there is a lot of art, not just science, to designing analog audio circuits. Do you favor clean, accurate headroom for transients or a more "punchy" or "fiery" saturation? This all comes into play not just in how you design the circuit, but in HOW YOU SET THE METER IN RELATION TO THE CIRCUIT. The designer is basically building a circuit to sound good around zero dB on a fixed meter...
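If you want to play with that "no hard cutoff" behavior, a tanh transfer curve is the classic toy model: essentially linear when quiet, compressing more and more as the level rises, with no single point where "clean" turns into "clipped". A minimal sketch (the curve and drive value are arbitrary stand-ins, not a model of any particular circuit):

```python
import math

def analog_ish(sample, drive=2.0):
    """Toy saturation curve: essentially linear when quiet, compressing
    gradually as the level rises. tanh stands in for the soft onset of
    tube/transformer saturation; it is not a model of any real circuit."""
    return math.tanh(drive * sample) / drive

# Quiet signals pass almost untouched...
print(analog_ish(0.05))   # ~0.0498, within half a percent of the input
# ...but there is no hard threshold, just ever-increasing compression:
for level in (0.25, 0.5, 1.0, 2.0):
    print(level, round(analog_ish(level), 3))
```

Raising `drive` pushes the curve into the saturating region sooner, which is roughly the software analog of hitting a Marshall's input harder.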

Digital, on the other hand, is pretty all-or-nothing. Modern high-resolution digital is basically "perfect" within its operating range, and then craps out completely past it. This is why digital meters are keyed to "peak" level, or the maximum level that the digital system can handle. There's nothing wrong with that, and it is absolutely the correct way to meter digital audio. EXCEPT...

Before it gets to be digital, the audio starts out as analog. Which means, if you are solely using digital meters, there is a lot of potential for the analog front-end to crap out before the digital system ever gets a chance to meter it. There is also the problem of internal digital processors-- are you certain that every plugin you're using has good floating-point internal calculations? And if so, what about your saturation/compression/analog-ifier/guitar effects? How do they know when to start "saturating"? What about the cutoff filters on your AD and DA converters? What about inter-sample clipping that the digital system cannot detect?
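That last question is worth a concrete example. A sine at exactly one quarter of the sample rate, offset 45 degrees, stores every sample at about -3dBFS even though the waveform the converter reconstructs peaks at full scale, so a sample-peak meter under-reads by about 3dB. A sketch in plain Python (the fine evaluation grid below stands in for the DAC's reconstruction filter):

```python
import math

FS = 44100.0
FREQ = FS / 4.0          # one sample per quarter cycle
PHASE = math.pi / 4.0    # 45 degrees: the samples straddle the true peaks

def wave(t):
    """The continuous waveform, as the converter would reconstruct it."""
    return math.sin(2 * math.pi * FREQ * t + PHASE)

# The samples the digital system actually sees and meters:
samples = [wave(n / FS) for n in range(64)]
sample_peak = max(abs(s) for s in samples)   # ~0.707, i.e. about -3dBFS

# The continuous waveform, evaluated on a 64x finer grid:
true_peak = max(abs(wave(n / (64 * FS))) for n in range(64 * 64))  # ~1.0

overshoot_db = 20 * math.log10(true_peak / sample_peak)
print(round(overshoot_db, 2))   # ~3.01dB the sample-peak meter never saw
```

This is the worst-case textbook construction, but real program material with heavy limiting routinely reconstructs above 0dBFS while every stored sample stays "legal".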

This stuff is not always obvious to hear. Your clip LED will light up when you get that screechy white-noise "deep" digital clipping, but you'd probably notice that anyway. This kind of "hidden" clipping tends to have a more subtle effect, just flat-topping the waveform peaks and creating a harsh, brittle, "digital" sound. It's not always easy to hear in the thick of a recording session, but a little on the bass, a little on the hi-hats, a little on the snare, a little on the kick, and next thing you know, you've got a "flat", "cheap", "digital"-sounding mix and you're out shopping for ribbon mics and tube preamps.
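That flat-topping is easy to reproduce numerically: hard-clip a pure sine even a little and odd harmonics appear where there were none before. A toy sketch in plain Python (the clip level and tone frequency are arbitrary, and the harmonic measurement is a simple sine correlation rather than a full FFT):

```python
import math

N = 1024      # samples in the test buffer
CYCLES = 8    # integer cycle count so the correlations come out exact

def hard_clip(sample, ceiling=0.8):
    """Flat-top the waveform: everything past the ceiling is flattened."""
    return max(-ceiling, min(ceiling, sample))

def harmonic_level(signal, k):
    """Amplitude of the k-th harmonic of the test tone (sine phase)."""
    return (2.0 / N) * sum(
        s * math.sin(2 * math.pi * k * CYCLES * n / N)
        for n, s in enumerate(signal))

clean = [math.sin(2 * math.pi * CYCLES * n / N) for n in range(N)]
clipped = [hard_clip(s) for s in clean]   # shave just the tops off

print(harmonic_level(clean, 3))    # ~0: a pure sine has no 3rd harmonic
print(harmonic_level(clipped, 3))  # clearly nonzero: harsh "digital" grit
```

No single flat-topped track here sounds grossly mangled, but that freshly-minted 3rd harmonic is exactly the brittle edge described above, and it stacks up across a mix.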
yep