|
|
|
11-03-2015, 04:24 PM
|
#121
|
Human being with feelings
Join Date: Oct 2011
Location: Dalriada
Posts: 13,367
|
All the competing explanations and concepts in this thread are arriving in my brain at different intervals, sometimes cancelling each other out.
Re propagation and travel of sound - Maybe someone could draw an analogy with one or more trains, with carriages of different sizes (representing the different frequencies) - which, on arriving near the station (the ear) - had all to observe the same speed limit?
Or something
And my favourite bit so far :
Quote:
Originally Posted by Mr. PC
So just to clarify.
Phasing doesn't move the entire signal together, but can time-shit different frequencies different amounts.
|
|
|
|
11-04-2015, 05:34 AM
|
#122
|
Human being with feelings
Join Date: Aug 2014
Posts: 11,044
|
Quote:
Originally Posted by viscofisy
And my favourite bit so far :
|
Haha!
That's the worst kind of phase distortion, it takes ages to clean out of the faders.
|
|
|
11-08-2015, 09:25 AM
|
#123
|
Human being with feelings
Join Date: Apr 2010
Location: Cloud 37
Posts: 1,071
|
And it should be noted I have no idea what I'm talking about. I'm just a composer trying to understand myself what's happening.
Now I'm developing a kind of EQ phobia, and starting to hear phantom thinness every time I high pass.
|
|
|
09-07-2016, 07:28 AM
|
#124
|
Human being with feelings
Join Date: Mar 2012
Posts: 4
|
Sorry to bump this thread, but it came up during a google search.
As an Electrical Engineer who records as a hobby and writes VSTs for fun, particularly convolution/EQ stuff, I want to clear up a few points in this thread.
Phase is relative, and it can be relative to its own harmonics/fundamental, but the mathematical transforms assume this is not the case, since thinking of it that way needlessly complicates the intuition of the system. That being said, the phase plot you see in Reaper is the amount of phase that will be summed into the signal, which does not mean the output is delayed by that amount. What it does mean is that the SIGNALS are summed, not the phase. That summing is what actually creates the filtering--the destructive and constructive effects of the phase changes. You can hear the phase change because it's relative to the original it's mixed with. There are, in fact, two signals being discretely summed.
With respect to transients and high-pass filters, transients have such a fast rise time that they'll be FAR above any sub-40 Hz high-pass you employ. You will be delaying the sub frequencies substantially, though it raises the question of why you would care if you can't hear them, unless you are only dropping them by less than 12 dB. Even then, the low-end "transients" that coincide with the higher frequencies will still not substantively interfere with the higher-end transients. This holds especially true for percussive instruments.
I still high-pass with extreme caution. Everything needs to be in context. I think we all agree on that. The ends totally justify the means with respect to production.
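A quick numeric sketch of the point about where the phase shift actually lives (a Python/numpy illustration, assuming an idealized first-order analog high-pass and a 40 Hz cutoff of my choosing - not any particular plugin):

```python
import numpy as np

fc = 40.0  # hypothetical high-pass cutoff in Hz
freqs = np.array([10.0, 40.0, 160.0, 1000.0])

# First-order analog high-pass: H(f) = j(f/fc) / (1 + j(f/fc))
w = freqs / fc
H = 1j * w / (1 + 1j * w)
phase_deg = np.degrees(np.angle(H))

for f, p in zip(freqs, phase_deg):
    print(f"{f:7.0f} Hz: {p:+6.1f} deg")
```

The shift is large down around the cutoff (+45 degrees at 40 Hz, more below it) and only a couple of degrees by 1 kHz - far below where a transient's energy sits.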
|
|
|
09-07-2016, 07:37 AM
|
#125
|
Human being with feelings
Join Date: Aug 2014
Posts: 11,044
|
Quote:
Originally Posted by rectifryer
Sorry to bump this thread, but it came up during a google search.
As an Electrical Engineer who records as a hobby and writes VSTs for fun, particularly convolution/EQ stuff, I want to clear up a few points in this thread.
Phase is relative, and it can be relative to its own harmonics/fundamental, but the mathematical transforms assume this is not the case, since thinking of it that way needlessly complicates the intuition of the system. That being said, the phase plot you see in Reaper is the amount of phase that will be summed into the signal, which does not mean the output is delayed by that amount. What it does mean is that the SIGNALS are summed, not the phase. That summing is what actually creates the filtering--the destructive and constructive effects of the phase changes. You can hear the phase change because it's relative to the original it's mixed with. There are, in fact, two signals being discretely summed.
With respect to transients and high-pass filters, transients have such a fast rise time that they'll be FAR above any sub-40 Hz high-pass you employ. You will be delaying the sub frequencies substantially, though it raises the question of why you would care if you can't hear them, unless you are only dropping them by less than 12 dB. Even then, the low-end "transients" that coincide with the higher frequencies will still not substantively interfere with the higher-end transients. This holds especially true for percussive instruments.
I still high-pass with extreme caution. Everything needs to be in context. I think we all agree on that. The ends totally justify the means with respect to production.
|
Glad you resurrected it with some technical know-how!
What about passing say, a bass drum through multiple HP filters? Say you've got one on the channel, one on the drum bus, one on the mix bus and then one on the subsequent master? That's not an entirely uncommon scenario these days.
I've been sure I could hear a smearing of transients on bass drums when put through several HP filters, and the waveforms seem to back that up.
Thoughts?
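To put rough numbers on the stacked-filter scenario (a sketch assuming Python/numpy, idealized first-order analog high-passes, and a 30 Hz cutoff I picked for illustration - not whatever the actual channel/bus EQs do): the phases of cascaded filters add, so the low-frequency group delay of four filters in series is four times that of one.

```python
import numpy as np

def hp1_phase(f, fc):
    """Phase (radians) of a first-order analog high-pass at frequency f."""
    return np.angle(1j * (f / fc) / (1 + 1j * (f / fc)))

fc = 30.0   # hypothetical cutoff on each of the four filters
f = 50.0    # look at the delay around a kick drum fundamental
df = 0.01

# Group delay = -dphi/domega, via a small central difference
gd1 = -(hp1_phase(f + df, fc) - hp1_phase(f - df, fc)) / (2 * np.pi * 2 * df)
gd4 = 4 * gd1  # phases of cascaded filters simply add

print(f"one filter: {gd1 * 1000:.2f} ms, four in series: {gd4 * 1000:.2f} ms")
```

About 1.4 ms per filter at 50 Hz, so roughly 5.6 ms for the chain - the lows of the kick arrive measurably later than its click, which is one plausible reading of the "smearing".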
|
|
|
09-07-2016, 07:47 AM
|
#126
|
Human being with feelings
Join Date: Mar 2012
Posts: 4
|
Quote:
Originally Posted by Judders
Glad you resurrected it with some technical know-how!
What about passing say, a bass drum through multiple HP filters? Say you've got one on the channel, one on the drum bus, one on the mix bus and then one on the subsequent master? That's not an entirely uncommon scenario these days.
I've been sure I could hear a smearing of transients on bass drums when put through several HP filters, and the waveforms seem to back that up.
Thoughts?
|
Can you describe the "smearing"? When passing through multiple filters, you will get harmonics from the filtering action itself. This will look like a delayed transient. I have seen this in the DAW as well as when measuring the frequency response of a room after adding a Helmholtz resonator (which operates in a manner analogous to what we're dealing with). It will typically add a sub-harmonic and a 2nd-order harmonic.
It's good you're looking at the waveforms. The eyes can see that which audibly we can only subconsciously perceive.
|
|
|
09-07-2016, 07:51 AM
|
#127
|
Human being with feelings
Join Date: Aug 2014
Posts: 11,044
|
Quote:
Originally Posted by rectifryer
Can you describe the "smearing"? When passing through multiple filters, you will get harmonics from the filtering action itself. This will look like a delayed transient. I have seen this in the DAW as well as when measuring the frequency response of a room after adding a Helmholtz resonator (which operates in a manner analogous to what we're dealing with). It will typically add a sub-harmonic and a 2nd-order harmonic.
It's good you're looking at the waveforms. The eyes can see that which audibly we can only subconsciously perceive.
|
You're right to pick up on my use of the word "smearing", as I'm probably abusing a technical term.
I mean that the attack seems to be attenuated, so it sounds a bit more pillowy. I posted some examples in a thread somewhere around here, I'll have to have a look, but I'm just about to leave the house until next week so it won't be for a while.
|
|
|
09-07-2016, 03:57 PM
|
#128
|
Human being with feelings
Join Date: Dec 2012
Posts: 7,272
|
IIRC, I deliberately avoided this thread when it was new because they tend to devolve quickly. This one actually got pretty interesting, though.
A couple things:
If we really wanted to see (or hear) the effect of the phase shift by itself, then I think we should maybe isolate that in our tests. Replace the high-pass with an all-pass set to the same cutoff and you get to analyze the effect of the "smearing" without the influence of the actual filter roll-off and/or resonant whatevers.
Now, there's some question above about whether phase rotation (all-pass filter) on a single source will make any noticeable difference and the answer to that is kind of "it depends". Depending on the particular source it can be somewhere between imperceptible and subtle-but-real.
What tends to happen is that the "phase smearing" will tend to redistribute the energy of the waveform so that while its overall average power is the same, the total peak-to-peak swing can be lower. If you look at clepsydrae's animation carefully, you can kind of see that happening. That is a pretty severely asymmetric waveform to begin with, and it actually looks like it's got a DC offset to it, but as the all-pass sweeps, you see that DC offset go away and the waveform center better around the 0 line, but it also appears as though the waveform itself is getting slightly smaller. This will have a real and noticeable effect that is more "felt" than "heard", but more importantly it will affect the headroom and change the way the signal hits dynamics processors downstream.
You can see this effect pretty easily. Drop in some asymmetric waveform as an audio file. Open Item|Properties, hit Normalize, and make note of how much gain it wants to apply. Now reset that slider to 0 and get out of there. Insert ReaEQ and set any one band to an all-pass with frequency set around the fundamental of your wave. Render that one way or another so that the effect is actually applied to the waveform. Now go back to Item|Properties, hit Normalize, and you should notice that the gain number is bigger. You're able to apply more gain before clipping because it's less likely that one side of the wave will hit the rail before the other.
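That normalize experiment can be sketched numerically (assuming Python/numpy; the all-pass here is a generic first-order section centred near the fundamental, not necessarily what ReaEQ does internally):

```python
import numpy as np

fs = 48000
f0 = 100.0
t = np.arange(fs) / fs  # one second of audio
# Deliberately asymmetric wave: harmonics with aligned (cosine) phases
x = (np.cos(2 * np.pi * f0 * t)
     + 0.5 * np.cos(2 * np.pi * 2 * f0 * t)
     + 0.33 * np.cos(2 * np.pi * 3 * f0 * t))

# Generic first-order digital all-pass centred near the fundamental
fc = 100.0
tn = np.tan(np.pi * fc / fs)
c = (tn - 1) / (tn + 1)
y = np.zeros_like(x)
x1 = y1 = 0.0
for n in range(len(x)):
    y[n] = c * x[n] + x1 - c * y1
    x1, y1 = x[n], y[n]

# Compare steady state, skipping the first 0.1 s of filter transient
s = slice(4800, None)
peak_before, peak_after = np.max(np.abs(x[s])), np.max(np.abs(y[s]))
rms_before = np.sqrt(np.mean(x[s] ** 2))
rms_after = np.sqrt(np.mean(y[s] ** 2))
print(f"peak {peak_before:.3f} -> {peak_after:.3f}, "
      f"rms {rms_before:.3f} -> {rms_after:.3f}")
```

Same average power, lower peak - which is exactly the extra normalize headroom described above.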
In fact, this is an important part of the processing chain in most radio broadcasts, and is the "secret weapon" for getting that big, intimate, smooth, larger-than-life "radio voice". Most voices - especially male voices - are pretty darn asymmetrical, and phase rotation does that thing and makes them much more symmetrical. This comes out sounding a lot like a very subtle and transparent compression, but also makes any compression and/or limiting that you apply afterwards work more consistently and possibly not quite so hard overall. This is one thing that I actually do by default - all-pass all vocals - and it tends to make them a bit easier to work with, manipulate, and sit into a mix.
Now to the actual OP from a slightly different direction: Good analog circuit designers do actually "high-pass by default". Just about every piece of gear you might ever plug through will have (at very least) a capacitor in series with both input and output in order that the circuit inside can maintain its own DC conditions without having to worry about affecting or being affected by other gear in the chain. Basically it's about both "protecting ourselves" and "being a good neighbor", but it amounts to at least two high-pass filters in the chain for each piece of gear. Of course, it's generally good practice to have these DC-blocking (properly AC-coupling) caps between stages in more complex circuits so there could be anywhere between several and tens or more in the path through any given hardware box. That's not to mention the fact that many gain stages are bandlimited by design. Granted, these components are usually chosen so that they roll off way below the audio range, but it does add up after a while.
Now Reaper's mix engine does none of that on its own. It is completely DC coupled all the way through, so that DC offsets and extremely low frequency noise caused by certain processes will get through, and add up, and get worse if we add gain, and... Many plugins are also DC coupled. Some can cause DC offset or asymmetry (which is about the same thing) and others can introduce low frequency subharmonics or even leak low frequency control-type signals into the audio stream. For these reasons, I do tend to high-pass - usually way down at 20Hz - pretty early and often.
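For what it's worth, the classic one-pole/one-zero DC blocker is about the simplest version of that early, way-down-low high-pass (a Python/numpy sketch of the textbook filter, not Reaper's or any plugin's actual code):

```python
import numpy as np

def dc_block(x, r=0.997):
    """One-pole DC blocker: y[n] = x[n] - x[n-1] + r*y[n-1].
    r = 0.997 puts the corner near 20 Hz at 44.1 kHz (fc ~ fs*(1-r)/(2*pi))."""
    y = np.zeros_like(x, dtype=float)
    x1 = y1 = 0.0
    for n in range(len(x)):
        y[n] = x[n] - x1 + r * y1
        x1, y1 = x[n], y[n]
    return y

fs = 44100
t = np.arange(fs) / fs
x = 0.5 + np.sin(2 * np.pi * 1000 * t)  # 1 kHz tone riding on a DC offset
y = dc_block(x)
print(f"mean before: {np.mean(x[fs // 2:]):.4f}, after: {np.mean(y[fs // 2:]):.6f}")
```

The offset gets stripped while the 1 kHz tone passes through essentially untouched.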
|
|
|
09-08-2016, 01:36 AM
|
#129
|
Human being with feelings
Join Date: Jun 2011
Location: Belgium
Posts: 5,246
|
Thanks, Ashcat_it. That sums it up nicely and answers some questions that had been lingering in my mind for ages. I even forgot some, but they came back.
It also made me understand the ITB - OTB discussion.
__________________
In a time of deceit telling the truth is a revolutionary act.
George Orwell
|
|
|
10-13-2016, 03:46 AM
|
#130
|
Human being with feelings
Join Date: Sep 2011
Posts: 61
|
There were some guys on the first page talking about HPF on guitars below 80-100Hz. Definitely wouldn't do that by default. Far better to use a spectrum analyser to pinpoint the exact frequency causing the boominess in your guitars and do a precise cut at that frequency. There is some warmth at the lower frequencies. You can always HPF a bit lower, say 40-50Hz and slowly work your way up.
People often say things like "humans can't hear anything below 50Hz, so just cut it". When in reality that thick bass sound which makes your chest groove and vibrate at concerts & clubs is at those really low frequencies. Decent subs can deliver those thick lows. Don't cut them by default!
|
|
|
10-13-2016, 05:00 AM
|
#131
|
Human being with feelings
Join Date: Mar 2014
Location: germany
Posts: 196
|
"it depends" ... as always.
My own cab refuses to work below 110Hz, so it doesn't need a hi-pass filter. Just a cut of a few dB at C.
Bandmate's cab goes down to 80Hz. I definitely hi-pass this one. There is an E-Bass taking care of this area, and it sounds a lot more defined with hi-passed guitar.
Other sources can make good use of a clean hi-pass as well. But I would never castrate the bass. Some say "most listeners can't reproduce it anyway" blah blah ... I can reproduce it, and so can some others. Why should I worsen the listening experience for them?
|
|
|
10-13-2016, 06:37 AM
|
#132
|
Human being with feelings
Join Date: Aug 2007
Location: Near Cambridge UK and Near Questembert, France
Posts: 22,754
|
As a bass player who also plays guitar, I can tell you you're right on the money. One of my biggest nightmares is guitarists who want to have a really fat sound, to the point where between that and the distortion or overdrive, it completely locks all the meaningful content of the bass guitar out of what should really be its assigned area in a mix. Then add in the drummer who wants a kick that goes "doof" till your fillings rattle, and the bass player might as well be playing a ukulele.
Fin de rant.
Merci
__________________
Ici on parles Franglais
|
|
|
10-13-2016, 07:44 AM
|
#133
|
Human being with feelings
Join Date: Mar 2014
Location: Louisville, KY, USA
Posts: 1,075
|
Quote:
Originally Posted by danbb
There were some guys on the first page talking about HPF on guitars below 80-100Hz. Definitely wouldn't do that by default. Far better to use a spectrum analyser to pinpoint the exact frequency causing the boominess in your guitars and do a precise cut at that frequency. There is some warmth at the lower frequencies. You can always HPF a bit lower, say 40-50Hz and slowly work your way up.
People often say things like "humans can't hear anything below 50Hz, so just cut it". When in reality that thick bass sound which makes your chest groove and vibrate at concerts & clubs is at those really low frequencies. Decent subs can deliver those thick lows. Don't cut them by default!
|
Quote:
Originally Posted by MrBongo
"it depends" ... as always.
|
And, what it depends on is: Are there other sources of bass frequencies that are more important?
If you're recording a solo guitarist then you would not want to high-pass it off-hand. Its bass content is important for that recording.
But if there is a bass guitar playing too, then the bass content of the guitar is no longer important. You can completely cut the fundamental frequency of the guitar's bass notes and the listener will still know which note was played based on the harmonics. This leaves room for the bass's fundamentals to come through clearly without being smeared by what the guitar is doing.
Ultimately, arrangement and melody/octave selection by accompanying instruments should be used first, if possible, to keep too many collisions from occurring between instruments.
If you don't like the sound of the HPF then use a low shelf instead. Consider the fixed frequency of the low shelf on a lot of live-use mixing boards. It may be 80 Hz. But the low cut may be at 80 Hz too. They will still sound different because of their slopes.
I have several PA systems and multiple subwoofers. None of my PA systems can reproduce below 45 Hz, and my other subs can't reproduce below 35 Hz. If you don't HPF your guitars then you are doing a serious disservice to the bass player, and you're going to make the kick and floor tom sound shitty too.
If you have bass instruments in the multitrack and you want them to sound clear and punchy, YOU WILL HIGH-PASS ALL OTHER INSTRUMENTS that have nothing important occurring in the bass area.
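The shelf-versus-low-cut slope difference is easy to put numbers on (a sketch assuming Python/numpy, a 2nd-order Butterworth high-pass, and a first-order shelf with a -12 dB floor - generic textbook curves, not any particular console's):

```python
import numpy as np

fc = 80.0
freqs = np.array([20.0, 40.0, 80.0, 160.0])
w = freqs / fc

# 2nd-order Butterworth high-pass magnitude
hpf_db = 20 * np.log10(w ** 2 / np.sqrt(1 + w ** 4))

# First-order low shelf with a -12 dB plateau: H(s) = (s + g*w0)/(s + w0)
g = 10 ** (-12 / 20)  # linear gain of the shelf floor (~0.251)
shelf_db = 20 * np.log10(np.sqrt(w ** 2 + g ** 2) / np.sqrt(w ** 2 + 1))

for f, a, b in zip(freqs, hpf_db, shelf_db):
    print(f"{f:5.0f} Hz: high-pass {a:6.1f} dB, shelf {b:6.1f} dB")
```

Same 80 Hz corner, but at 20 Hz the high-pass is about 24 dB down and still falling, while the shelf has levelled off near its -12 dB floor.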
|
|
|
10-13-2016, 09:37 AM
|
#134
|
Human being with feelings
Join Date: Dec 2012
Posts: 7,272
|
I mean, ideally you just wouldn't record those bass frequencies at all. Instrument choice, amplifier settings, mic selection and placement and such things will usually do better than any filter could. If you get it right at the source, you don't have to mess with it in the mix.
The lines get blurry, though, when your "source" is actually a chain of plugins. Whether it's amp sims or VSTis, sometimes we just don't have as many options to really tailor the sound to fit the mix and we have to reach for the EQs.
And yes, some of us are mixing things that other people have recorded, or for whatever reason we don't have much say in how the source is captured. At these times - especially when somebody else tracked it - I think we first have to evaluate our own position with respect to the original vision of the tracking engineer. When you apply those filters, is it because the track really needs it, or are you somehow missing the point? But of course if the tracks really just suck, we do what we must.
|
|
|
10-13-2016, 11:00 AM
|
#135
|
Human being with feelings
Join Date: May 2009
Posts: 29,260
|
Quote:
But if there is a bass guitar playing too, then the bass content of the guitar is no longer important. You can completely cut the fundamental frequency of the guitar's bass notes and the listener will still know which note was played based on the harmonics.
|
Just an overall note that it usually depends more on what is playing and when, arrangement etc. Sure, in many cases the bass should be providing it (that's why instruments cover ranges). The only point here is to use the context of the mix at hand rather than the rule. IOW, there are plenty of entire songs where the guitar part never reaches 'down there', so there is no need for an HPF; the content isn't there to begin with. For all we know they never went below 220 Hz/A. Contrary to popular belief, not all guitar songs are in E.
Secondly (which I see ashcat is also implying), if we are concerned enough to debate HPF for months, we'd be better served by understanding the source and knowing ahead of time how all of this fits together; then we don't need to do any of this, or at least very little, because we took care of it at the source as part of the recording process.
That's also a good hint as to how seasoned the musicians you are working with are. If they are well-seasoned, there is a good chance their composition, orchestration and even instrument choices were driven by things fitting sonically. Less experienced players may be a bunch of chaps just playing individual parts they are emotionally tied to, but possibly not the best choices to serve the sonics of the song.
All of the above is why we don't grab the HPF control just because. I know you know, just trying to keep some sanity for the newbs.
__________________
Music is what feelings sound like.
Last edited by karbomusic; 10-13-2016 at 11:29 AM.
|
|
|
10-15-2016, 08:21 AM
|
#136
|
Human being with feelings
Join Date: Mar 2008
Location: Switzerland
Posts: 125
|
Quote:
Originally Posted by karbomusic
I'd hate to ignore a concept simply because they don't have f'ing awesome mixes to back it up, that's sort of unintentionally lame if you think about it. Half the technical audio concepts anyone here knows about is from people who couldn't actually mix their way out of a wet paper bag.
|
Well said. That's the crap logic behind people citing "their" work mixing and producing famous bands, with everyone then assuming they know what they're talking about and are good teachers of their craft. That's just plain wrong. Read some of Owsinski's books for proof: half the people he interviews just give out tips and tricks which, taken out of context, are completely useless.
Doing is one (respectable) thing, teaching / giving advice is another.
__________________
vinnie2k, a.k.a. The Old Man
Instagram: @old.man.muzeek
Follow me on Bandcamp
|
|
|
10-15-2016, 08:30 AM
|
#137
|
Human being with feelings
Join Date: Mar 2008
Location: Switzerland
Posts: 125
|
Quote:
Originally Posted by clepsydrae
And to note, if you are concerned about phase: high pass filters do most of their phase shifting in the area that you are cutting out anyway. Low shelf filters are similar... I'm not saying that it's insignificant, it may well be (in a multi-mic situation), but a high-pass filter isn't necessarily going to "mess up your phase". The video linked above just looks at the resulting waveform changing and says "see, your phase has been all shifted", which is not a valuable way to analyze the situation. Unless you're multi-micing a kick drum (rare), or for some other reason care that the sub-bass frequencies on your multi-mic'ed instrument be perfectly in-phase, a judiciously-used high pass is a fine way to remove low-end junk, IMO.
But certainly agreed that handling of low end frequencies is a major issue in mixing, and not as simple as "high-pass everything".
|
To the fans who support the "only results count" logic: even if clepsydrae has never mixed a single record in his life, I would follow his advice. Why? Because what he says is logical, scientific, reasonable. He answers the "Why" question the other poster was referring to. He doesn't say "Hey listen to ME, I worked with U2 and the Stones". Doing what, rolling fatties? :-)
__________________
vinnie2k, a.k.a. The Old Man
Instagram: @old.man.muzeek
Follow me on Bandcamp
|
|
|
10-27-2016, 08:33 PM
|
#138
|
Human being with feelings
Join Date: Apr 2014
Location: Texas
Posts: 305
|
Quote:
Originally Posted by Judders
One of my pet peeves with some metal mixing is that tupperware drum sound
|
Tupperware beats those damned typewriter kicks a lot of death metal uses. I get your point though.
|
|
|