Old 10-16-2015, 10:16 AM   #81
karbomusic
Human being with feelings
 
karbomusic's Avatar
 
Join Date: May 2009
Posts: 29,260
Default

Quote:
As low frequencies travel slightly faster than high frequencies and as air absorbs high frequencies more readily than low ones
I'm having a difficult time working that out in my head if only considering sound in air. I'm fine with highs being attenuated with distance, because they are, but if a frequency is slowed down, the frequency, aka pitch, would change. Of course, if I were running away from the speakers really fast, that would happen via the Doppler effect, because my moving away effectively reduces the speed and increases the wavelength.
__________________
Music is what feelings sound like.
karbomusic is offline   Reply With Quote
Old 10-16-2015, 10:19 AM   #82
Judders
Human being with feelings
 
Join Date: Aug 2014
Posts: 11,044
Default

Quote:
Originally Posted by karbomusic View Post
I'm having a difficult time working that out in my head if only considering sound in air. I'm fine with highs being attenuated with distance, because they are, but if a frequency is slowed down, the frequency would change. Of course, if I were running away from the speakers really fast, that would happen via the Doppler effect.
Doppler effect! [edit: I'm pretty sure "Doppler effect" wasn't in your post when I replied...]

I'm not sure this is what you meant, but it's not slowing down the frequency in terms of Hz, but delaying certain frequencies more than others, which messes with the way our auditory systems have evolved to judge distance in sound.
Judders is offline   Reply With Quote
Old 10-16-2015, 10:27 AM   #83
karbomusic
Human being with feelings
 
karbomusic's Avatar
 
Join Date: May 2009
Posts: 29,260
Default

Quote:
Originally Posted by Judders View Post
Doppler effect!

I'm not sure this is what you meant, but it's not slowing down the frequency in terms of Hz, but delaying certain frequencies more than others, which messes with the way our auditory systems have evolved to judge distance in sound.
I can see that in a crossover, due to the phase used to do the 'crossovering' for each band, and I can understand it with drivers not being time-aligned, but I'm having a hard time making it work in air alone. Open to learning; it's just not adding up for me yet.

Speaking of Doppler, I've noticed it when removing my headphones quickly, crazy stuff.
__________________
Music is what feelings sound like.
karbomusic is offline   Reply With Quote
Old 10-16-2015, 10:30 AM   #84
insub
Human being with feelings
 
insub's Avatar
 
Join Date: Mar 2014
Location: Louisville, KY, USA
Posts: 1,075
Default

Quote:
Originally Posted by Multibomber View Post
Well... I do record a full drumset in my bedroom with 11 mics: 2 kicks, each with an Audix D6 inside it and a homemade subkick outside of it, 5 toms, each with an internal mic, 1 snare with an internal mic, and one overhead pointed straight down... and I don't know anything about phase other than I understand what comb filtering is, because I read an article about it yesterday. What kind of phase problems should I be looking for?

In addition, I use the same guitar tones, plugins, EQ, etc. for my guitars. HOWEVER, the guitars are hard-panned L and R (so they're completely separated) and they're harmonizing with each other 95% of the time. Do I get a free pass from phase issues with this approach?
Multi, you have a complex situation with such a setup (not wrong, just complex). The simplest place to understand phase with drums is the snare top and bottom mics. They are approximately 180° out of phase with each other, theoretically, because one is pointing straight down at the drum and one is pointing straight up. So you activate the phase-flip button, typically on the bottom snare mic, because the top snare mic is already pointing in the same direction as the overhead mics. This explanation is way over-simplified, because in reality the top snare skin and bottom skin don't vibrate in perfect synchronization, and the overheads are farther away, so they could land anywhere in the waveform depending on distance and per frequency. If this were not true, then the bottom snare and top snare mics would completely cancel each other out, leaving only the sound of the snares (the wires) minus the drum, which does not occur in reality.
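
To make that polarity-flip picture concrete, here's a rough toy sketch in Python (numpy assumed; the bottom mic is crudely modeled as an inverted, slightly delayed copy of the top mic, which is an assumption, not physics):

Code:
import numpy as np

fs = 48000
t = np.arange(int(0.02 * fs)) / fs
top = np.sin(2 * np.pi * 250 * t) * np.exp(-t * 120)   # toy snare hit at the top mic
bottom = -np.roll(top, 8)                              # inverted and ~0.17 ms late (assumed)

for flip in (+1, -1):                                  # -1 = the "phase flip" button
    mix = top + flip * bottom
    print(f"flip {flip:+d}: summed RMS = {np.sqrt(np.mean(mix**2)):.3f}")

The un-flipped sum mostly cancels and sounds thin; engaging the flip restores it, which is the whole point of the button.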

Kick drums lie sideways in relation to overheads. And once you use internal mics, the phase relation to the overheads is different still compared to mic'ing from outside the drum.

So this begs the question: how are you using the overheads? Are they solely to capture the cymbals & hats, or are they the main drum sound, with individual drum mics to round it out?

OH as main sound: As you mix in each individual drum mic, it will act similarly to EQ. If your tom is 180° out of phase with the OH, then as you bring the fader up you will actually reduce the volume of that tom. However, if it is perfectly in phase, then as you bring up the fader the tom will become louder, which is what we want. Any phase relation in between will have differing effects on the sound. Of course, this is not quite what we actually experience, because the distances are different, the microphones are almost always different, different frequencies lose energy at different rates as they move through the air, etc.

You can check this out by doing this:
1. Solo only the one overhead mic and your internal snare mic.
2. Make a time selection covering one snare hit. Time-sync the waveforms so they start at exactly the same time (remove the delay of the overhead).
3. Load ReaEQ on snare and activate one band with the type set to all-pass filter.
4. As you change the frequency you are rotating the phase of the snare mic. Pay most attention to the frequencies between 200-2000Hz.

What does it do?
While rotating the phase, the sound of the snare should become thinner or more full-bodied as the two drift out of or into phase with each other in the frequencies that matter (meaning the central frequency and the range surrounding it). Notice it never completely cancels, because even though you time-aligned them, every frequency is shifted in phase by a different amount. You can see this by activating the show-phase checkbox in ReaEQ. See? The phase rotation falls off as you get further from the filter frequency, and the 180° shift point is offset from the actual filter frequency. I do not know why it is offset.
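
If you want to see the same experiment outside the DAW, here's a minimal Python sketch (numpy/scipy assumed; a first-order all-pass stands in for the ReaEQ all-pass band, so treat it as an approximation):

Code:
import numpy as np
from scipy.signal import lfilter

fs = 48000
t = np.arange(int(0.05 * fs)) / fs
snare = np.sin(2 * np.pi * 200 * t) * np.exp(-t * 60)   # decaying 200 Hz burst

def allpass1(x, fc, fs):
    # First-order all-pass: flat magnitude, phase 0 at DC to -180 deg at Nyquist
    c = (np.tan(np.pi * fc / fs) - 1) / (np.tan(np.pi * fc / fs) + 1)
    return lfilter([c, 1.0], [1.0, c], x)

for fc in (50, 100, 200, 400, 800, 1600):               # "rotating" the close mic
    mix = snare + allpass1(snare, fc, fs)
    print(f"all-pass at {fc:4d} Hz -> summed RMS = {np.sqrt(np.mean(mix**2)):.3f}")

With the all-pass well below the burst's 200 Hz center the two copies land near 180° apart and the sum thins out; well above, they are nearly in phase and the sum nearly doubles, just like sweeping the band in ReaEQ.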

You could do this using the JS effect JS: Phase Rotator as well, and the result should be similar. I do not know exactly how these two methods differ, but it sounds the same to me. The Phase Rotator rotates all frequencies by the same number of degrees, which does not simulate the effect of using an EQ or HPF.

So, see how little difference this phase relation makes between an overhead several feet away and the internal mic closest to the source. Now multiply this by how much you think the phase relation of one single close-mic'd instrument matters against the multiple overheads of an entire live orchestra many feet away. I'm not trying to say that the effect is unimportant, just that overthinking the phase situation is somewhat a waste of time unless you understand what you want to do with the sound. Maybe a thinner sound from the one mic is actually desirable in the context of the overall mix. Distance changes everything.

The phase relation of your two guitars playing different riffs, or even the same riff recorded at a different time, will have a negligible effect on each other. The simple fact that each note will not be played in perfect synchronization totally throws phase concerns out. As you rotate the phase of one guitar you will hear nothing. No change at all. Some notes may become more in phase while others become more out of phase; overall, you will not be able to tell the difference. On the other hand, if you duplicate a guitar track and rotate the duplicate using the JS: Phase Rotator, the combined volume will diminish until you reach 180°, at which point the guitar will be muted. This is essentially how EQ works, except with EQ we choose which frequencies to rotate rather than rotating them all.

So, my conclusion is that concern about the phase relationships introduced by an HPF is rubbish. Even in the situation of a multi-mic'd drum kit, using a minimum-phase EQ (like ReaEQ) to HPF the snare, the effect on the other drum mics is unknown. Some frequencies that remain audible could become more in phase while others become more out of phase, depending on which other mic you are referencing, even between the overheads (left/right, close/room). Meanwhile, the portion which is filtered out will be gone and have no effect on the other tracks whatsoever.

My guess is that even the transient-alteration concern is minimal, and that if you think the transient is suffering, you can use the phase rotator to get it back. Besides, in my recordings the snare transient is more than one cycle long. So, what sounds are you recording that have an important transient shorter than one cycle? And can you even tell if that half-cycle transient is attenuated? Also, you can use a transient enhancer to increase all the transients if you wish.

These are just my observations. Perhaps a real engineer can enlighten us. We have not even touched on how your room reflections will interfere with the phase of each drum track (likely none at all for internal mics but certainly for the overheads).
__________________
Everything you need to know about samplerates and oversampling... maybe!
My Essential FREE 64bit VST Effects, ReaEQ Presets for Instruments
Windows 10 64 bit; MOTU 828 MKII, Audio Express, & 8PRE; Behringer ADA8000
insub is offline   Reply With Quote
Old 10-16-2015, 10:40 AM   #85
insub
Human being with feelings
 
insub's Avatar
 
Join Date: Mar 2014
Location: Louisville, KY, USA
Posts: 1,075
Default

Quote:
Originally Posted by Stews View Post
Do people really high pass the master?

I thought the purpose of the "high pass everything" was so the bass guitar can occupy the entire low end?
I don't know about in professional mixing or mastering, but in live PA this is an absolute YES!

In my PA system, including the 18" subs, no speaker is capable of reproducing anything below 40Hz. So if there is any signal at all in that range, you are wasting your amp's power trying to push a speaker that is incapable of reproducing it. Leaving frequencies below your system's capability in the mix thus reduces the overall volume the entire PA can deliver.

I never HPF the Master while mixing, NEVER. Let the Mastering Engineer take care of it.
__________________
Everything you need to know about samplerates and oversampling... maybe!
My Essential FREE 64bit VST Effects, ReaEQ Presets for Instruments
Windows 10 64 bit; MOTU 828 MKII, Audio Express, & 8PRE; Behringer ADA8000
insub is offline   Reply With Quote
Old 10-16-2015, 10:40 AM   #86
MrBongo
Human being with feelings
 
MrBongo's Avatar
 
Join Date: Mar 2014
Location: germany
Posts: 196
Default

i do highpass almost every track, and i do highpass the master.

currently got a kick drum that gets its sound out of 2 mics, because one of them is high-passed at ~300Hz.
others are high-passed well below the actual usable fundamental, just to cut off the rumble, and it might increase headroom a bit too.
if you get phase issues on drums or other multi-mic setups, and it bothers you, adjust the highpasses or move the tracks.

there's also a plugin on my master tracks that introduces DC offset. a 10Hz lowcut just before the final limiting stages takes care of it.
MrBongo is offline   Reply With Quote
Old 10-16-2015, 10:43 AM   #87
Judders
Human being with feelings
 
Join Date: Aug 2014
Posts: 11,044
Default

Quote:
Originally Posted by karbomusic View Post
I can see that in a crossover, due to the phase used to do the 'crossovering' for each band, and I can understand it with drivers not being time-aligned, but I'm having a hard time making it work in air alone. Open to learning; it's just not adding up for me yet.
Ah, I get'cha.

Yeah, I'm not sure about that. Will have to look it up later.
Judders is offline   Reply With Quote
Old 10-16-2015, 11:04 AM   #88
insub
Human being with feelings
 
insub's Avatar
 
Join Date: Mar 2014
Location: Louisville, KY, USA
Posts: 1,075
Default

Quote:
Originally Posted by clepsydrae View Post
And to note, if you are concerned about phase: high pass filters do most of their phase shifting in the area that you are cutting out anyway. Low shelf filters are similar... I'm not saying that it's insignificant, it may well be (in a multi-mic situation), but a high-pass filter isn't necessarily going to "mess up your phase". The video linked above just looks at the resulting waveform changing and says "see, your phase has been all shifted", which is not a valuable way to analyze the situation. Unless you're multi-micing a kick drum (rare), or for some other reason care that the sub-bass frequencies on your multi-mic'ed instrument be perfectly in-phase, a judiciously-used high pass is a fine way to remove low-end junk, IMO.

But certainly agreed that handling of low end frequencies is a major issue in mixing, and not as simple as "high-pass everything".



This post by far has been the most enlightening of this thread for me.

Notice: the phase smearing we are talking about for an HPF is only in the range of the curve (the area between the cut-off frequency and the point of 180° phase shift, or absolute attenuation). And this range is wider or narrower depending on the steepness of the filter.

If none of your transient is in that frequency range then it should be unaffected. If your transient has changed then there must have been some energy in the frequencies that are now cut off. If some of that was below 20Hz then likely no human will know the difference. They couldn't hear it anyway. And, honestly, any sound between 20-45Hz is likely never to be heard on any commercial playback system as anything but speaker distortion.

If you're concerned about the phase issues, then use a steeper filter.
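
For anyone who wants to check where the shift actually lives, here's a quick scipy sketch that prints the phase a minimum-phase Butterworth HPF applies at various frequencies, for a few slopes (the 80 Hz corner is an assumed example):

Code:
import numpy as np
from scipy.signal import butter, sosfreqz

fs = 48000
probe = np.array([20.0, 40, 80, 160, 320, 640])        # Hz, around the 80 Hz corner

for order in (1, 2, 4):                                # 6, 12, 24 dB/oct slopes
    sos = butter(order, 80, btype='highpass', fs=fs, output='sos')
    w = np.geomspace(2 * np.pi * 5 / fs, np.pi, 4096)  # dense grid, 5 Hz to Nyquist
    _, h = sosfreqz(sos, worN=w)
    # unwrap downward from Nyquist, where an HPF's phase shift is ~0
    ph = np.unwrap(np.angle(h)[::-1])[::-1]
    at = np.degrees(np.interp(2 * np.pi * probe / fs, w, ph))
    print(f"order {order}: " + "  ".join(f"{f:.0f}Hz {p:+4.0f}°" for f, p in zip(probe, at)))

On my reading of these numbers, the steeper slope actually rotates phase further at every frequency in the kept band, not less, so it's worth running this before assuming a steeper filter is automatically safer.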
__________________
Everything you need to know about samplerates and oversampling... maybe!
My Essential FREE 64bit VST Effects, ReaEQ Presets for Instruments
Windows 10 64 bit; MOTU 828 MKII, Audio Express, & 8PRE; Behringer ADA8000
insub is offline   Reply With Quote
Old 10-16-2015, 11:09 AM   #89
xpander
Human being with feelings
 
xpander's Avatar
 
Join Date: Jun 2007
Location: Terra incognita
Posts: 7,670
Default

Quote:
Originally Posted by karbomusic View Post
I can see that in a crossover due to the phase used to do the 'crossovering' for each band and I can understand it with drivers not being time aligned but having a hard time making it work in air alone.
I can't see it either. Afaik, all frequencies travel at a similar speed through the same medium (e.g. air). If we take phase and time alignment into consideration, depending on speaker design I'd go as far as saying that low frequencies may reach our ears later than high frequencies. Still not slower; it's a time delay, not a travel-speed difference.

Phase, Time and Distortion in Loudspeakers
http://sound.westhost.com/ptd.htm
xpander is offline   Reply With Quote
Old 10-16-2015, 11:12 AM   #90
clepsydrae
Human being with feelings
 
clepsydrae's Avatar
 
Join Date: Nov 2011
Posts: 3,409
Default

Quote:
HOWEVER, the guitars are hard-panned L and R (so they're completely separated) and they're harmonizing with each other 95% of the time. Do I get a free pass from phase issues with this approach?
If the hard-panned guitars are different takes, and it sounds like yours are, then yeah, no issue. If they are the same take recorded with two mics and panned hard, then you may still have issues. Hard-panning helps a lot, but the average stereo doesn't have anything near perfect separation, and as a result blending will happen on playback. (But it sounds like you're doing different takes, so, no sweat.) I find that the average boombox preserves something like 20% of stereo separation when you stand 5 or 10 feet away from it. When I mix I'll use a width-reducing plugin to simulate that, to check for phase issues and to make sure the mix is still interesting.
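
If anyone wants the gist of that width-reduction check without a plugin, a bare-bones M/S version looks like this (Python/numpy assumed; width=0.2 is my stand-in for that ~20% separation figure):

Code:
import numpy as np

def narrow_stereo(left, right, width=0.2):
    """Collapse a stereo pair toward mono to preview real-world playback blending."""
    mid = 0.5 * (left + right)
    side = 0.5 * (left - right)
    return mid + width * side, mid - width * side

# e.g. with two hard-panned takes (hypothetical arrays gtr_L, gtr_R):
# listen to narrow_stereo(gtr_L, gtr_R) for the thinness or comb
# filtering that real-world mono folding would expose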

A quirky live sound trick is to intentionally wire on-stage monitors with opposite polarity to prevent bass feedback. You hear less bass as a performer, though.

Quote:
Originally Posted by Judders View Post
Here's a quick n' dirty example I just cooked up myself.
+10 Internet points to Judders for an example!

I'm not hearing what you describe, though. I know it's annoying when people complain about posted examples, but I'd have liked to hear it without the tight Q causing the frequency bump at the corner. To my ears the increased low-freq bump is masking the high frequencies a little, which makes it sound like it has less edge, but I'm not hearing any "smearing". Maybe I'm just deaf to it.

Quote:
What you should notice is, even though the ripple around the corner frequency is raising the gain of the signal as it goes through more filters, the initial peak is reducing in amplitude.
While the initial peak is visibly reduced, I'm not sure it's relevant. In my foggy understanding, the standard notion of "transient" is more or less a myth, at least in terms of thinking of it as the "rising edge" of the waveform being something you can visually inspect. The real question is how much of each frequency band is present at a given time, and if there is low-freq content over the first few milliseconds, I don't think it matters much whether it presents as a clear picture-perfect rising line. That said, I have heard (?) that one of the rare cases where absolute phase can sometimes be detected is positive vs negative speaker excursions with extreme sub frequencies at chest-thumping volumes, so I'm not saying it's totally irrelevant, just that I'm trusting my ears on this one.

Quote:
Originally Posted by timboz View Post
Dumb question... but will using a linear-phase EQ eliminate the problem of phase shift in the EQ?
Yes. For a free plugin, I like SplineEQ.

Quote:
Originally Posted by Judders View Post
Yes, but then you have the possibility of audible pre-ringing (again though, it will only be audible in extreme cases). It's always a compromise.
Yeah, and this is something I have actually heard in a demo... here's a youtube video where linear vs minimum phase filters are demo'ed (minimum-phase being the "standard" type), and he shows how a linear phase EQ can slightly soften the edge of a kick drum with pre-ringing. IIRC he also shows how minimum phase filters in a multi-mic situation can cause phase interference and mess up (or help) a snare.

Quote:
Originally Posted by Magicbuss View Post
One place where minimum phase EQ can cause REAL world problems is during mastering. If you hipass a track that is near digital zero it can actually boost the signal into clipping due to phase distortion.
Yeah, good point, and note that any EQ, whether linear phase or not, can cause counter-intuitive increases in signal even if you are only cutting frequencies: instantaneous interference could be keeping the signal level under 0dBFS at a certain point, and you cut a frequency, and now there's a peak over 0 because the interference has been removed.
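
Here's a tiny numeric illustration of that (numpy assumed; the "cut" is idealized as simply deleting a component rather than running a real filter):

Code:
import numpy as np

t = np.arange(48000) / 48000.0
full = np.sin(2 * np.pi * 100 * t) + np.sin(2 * np.pi * 300 * t) / 3  # harmonic shaves the peaks
cut = np.sin(2 * np.pi * 100 * t)                                     # "EQ'd": 300 Hz removed

print(f"peak before the cut: {np.abs(full).max():.3f}")   # ~0.943
print(f"peak after the cut:  {np.abs(cut).max():.3f}")    # 1.000

The third harmonic was destructively aligned with the fundamental's peaks, so removing it raises the peak level even though energy was only taken away.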

Quote:
Originally Posted by Judders View Post
"phase distortion : An effect caused when phase-shift in an audio device is not a linear function of frequency. In other words, different frequencies experience different time delays. This changes the waveform of the signal and is especially injurious to transients. Most transducers produce significant phase distortion. As low frequencies travel slightly faster than high frequencies and as air absorbs high frequencies more readily than low ones, the more delay there is between low frequencies and the higher harmonics of a sound, the sound becomes progressively more smeared and is perceived as more distant."
...but I suspect that "smearing" due to phase distortion that comes from arrival time differences is not properly the same as phase alteration without arrival time differences (e.g. minimum phase EQ). The latter, it would seem to me, doesn't 'smear'.

Quote:
I'm having a difficult time working that out in my head if only considering sound in air. I'm fine with highs being attenuated with distance because they are but if a frequency is slowed down, the frequency aka pitch would change. Of course if I was running away from the speakers really fast that would happen via Doppler effect because my moving away effectively slows the speed and increases the wavelength.
Yeah it's a confusing thing; as I understand it: all things being equal, if a low-freq wave traveled at a slower rate it would drop in frequency, but I think the idea is that while the propagation in the medium is slower/faster, the frequency of the vibration itself doesn't necessarily change. I.e. sound travels through water much faster than air (4.3x), but it doesn't make the pitch of everything you hear in water higher. The vibration gets to you faster/slower, but it's not vibrating faster/slower; it's not a waveform-shaped thing traveling along at different rates, despite the way we depict it in diagrams. I think the missing variable here is that both the propagation speed and the wavelength are different depending on frequency; and thus, in air, the product of frequency and wavelength (which is just the speed of sound) is partially frequency-dependent. How/why that happens is beyond me, though.
clepsydrae is offline   Reply With Quote
Old 10-16-2015, 11:16 AM   #91
clepsydrae
Human being with feelings
 
clepsydrae's Avatar
 
Join Date: Nov 2011
Posts: 3,409
Default

Quote:
Originally Posted by xpander View Post
I can't see it either. Afaik, all frequencies travel at similar speed through the same medium (eg. air).
It's a real thing... see e.g. the wikipedia page:

Quote:
The speed of sound in an ideal gas is independent of frequency, but does vary slightly with frequency in a real gas.
clepsydrae is offline   Reply With Quote
Old 10-16-2015, 11:29 AM   #92
Magicbuss
Human being with feelings
 
Join Date: Jul 2007
Posts: 1,957
Default

To see what I meant about phase distortion from an HPF overloading your mix buss, check out this video. The relevant part is at about 9:00-11:00:

https://www.youtube.com/watch?v=lfC16ksxGRU
Magicbuss is online now   Reply With Quote
Old 10-16-2015, 11:56 AM   #93
xpander
Human being with feelings
 
xpander's Avatar
 
Join Date: Jun 2007
Location: Terra incognita
Posts: 7,670
Default

Quote:
Originally Posted by clepsydrae View Post
It's a real thing... see e.g. the wikipedia page:
Thanks for the link clepsydrae.
Quote:
The medium in which a sound wave is travelling does not always respond adiabatically, and as a result the speed of sound can vary with frequency.
I cannot see a straightforward formula for this, and won't debate whether my "same medium" is adiabatic. With no experience in such maths, I have to take it at face value. But they actually say that bass may be slower, not faster like the earlier link suggested. It does say that in real life it may be insignificant, though.

Quote:
The dependence on frequency and pressure are normally insignificant in practical applications. In dry air, the speed of sound increases by about 0.1 m/s as the frequency rises from 10 Hz to 100 Hz. For audible frequencies above 100 Hz it is relatively constant. Standard values of the speed of sound are quoted in the limit of low frequencies, where the wavelength is large compared to the mean free path.
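
Plugging that 0.1 m/s figure into a back-of-envelope calculation shows why "insignificant" is right (343 m/s nominal speed and a 10 m listening distance are my assumed numbers):

Code:
d, v, dv = 10.0, 343.0, 0.1            # metres, nominal speed, 10 Hz vs 100 Hz spread
dt = d / v - d / (v + dv)              # arrival-time difference over that distance
print(f"arrival spread: {dt * 1e6:.1f} us")            # ~8.5 microseconds
print(f"phase at 100 Hz: {dt * 100 * 360:.2f} deg")    # ~0.3 degrees

A third of a degree of phase at 100 Hz after ten metres of air is far below anything audible.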
xpander is offline   Reply With Quote
Old 10-16-2015, 12:12 PM   #94
karbomusic
Human being with feelings
 
karbomusic's Avatar
 
Join Date: May 2009
Posts: 29,260
Default

I just worked it out this way... If I take a blip sound and repeat it every two seconds, it's just a blip; in reality it is a 0.5 Hz tone, but I can't discern pitch at such slow speeds, just blips. An off-topic lightbulb might be going off about now concerning 20Hz being our hearing's cutoff point, as in the time between two waveform peaks is approaching the point where we can discern individual blips.

Back on topic... If I speed up the blip interval faster and faster, I'll hear a pitch, and the distance between blips shrinks as the rate increases. However, it doesn't matter if it takes 1ms or .001ms for the stream of blips to reach my ear, because they are still being created at the same interval. IOW they only move from point A to point B more quickly, yet the relative intervals between one another stay exactly the same, so the pitch won't change at all. That being the case, I can see how different pitches could propagate at different speeds without changing pitch, yet still change their phase relationships with one another.
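
That blip argument is easy to sanity-check numerically (Python; the two travel times are made-up numbers for a slow and a fast medium):

Code:
emit = [i * 0.010 for i in range(5)]        # blips every 10 ms at the source = 100 Hz
for travel in (0.03, 0.0003):               # arbitrary slow vs fast propagation time
    arrive = [t + travel for t in emit]
    gaps = sorted({round(b - a, 9) for a, b in zip(arrive, arrive[1:])})
    print(f"travel time {travel}s -> gaps at the ear: {gaps}")

The arrival times shift wholesale, but the 10 ms spacing, i.e. the pitch, is untouched; only if two components travel at different speeds do their relative phases drift apart.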
__________________
Music is what feelings sound like.

Last edited by karbomusic; 10-16-2015 at 12:19 PM.
karbomusic is offline   Reply With Quote
Old 10-16-2015, 12:13 PM   #95
clepsydrae
Human being with feelings
 
clepsydrae's Avatar
 
Join Date: Nov 2011
Posts: 3,409
Default

Quote:
Originally Posted by xpander View Post
Thanks for the link clepsydrae.
I cannot see a straightforward formula for this, and won't debate whether my "same medium" is adiabatic. With no experience in such maths, I have to take it at face value. But they actually say that bass may be slower, not faster like the earlier link suggested. It does say that in real life it may be insignificant, though.
Oh gotcha, I thought you were saying that the frequency dependency maybe didn't exist, as opposed to being irrelevant. I'd lean towards it being irrelevant, myself.

It's interesting that Judders' link implies that there is a psychoacoustic perception of phase smearing that is interpreted as distant; I'd be surprised if that were true and if our perceptual "distance" cues were based on anything but overall frequencies present, reverb present, etc. (Setting aside binaural perception, which does indeed use relative phase to stereo-locate in a certain frequency range.) But it'd be cool, if true.
clepsydrae is offline   Reply With Quote
Old 10-17-2015, 01:28 AM   #96
Nystagmus
Human being with feelings
 
Nystagmus's Avatar
 
Join Date: Oct 2013
Posts: 509
Default

The only stuff I ever needed to high-pass filter *sometimes* was mic'd material where there's too much rumble. But professionals who do a lot of acoustic recordings probably have a mic locker full of a variety of mics. Better mic choice helps get rid of LF rumble, because some mics have built-in filters, and some mics have a different frequency response that the engineer might be familiar with.

Usually if the mic is close enough to the audio source, there's less of an issue of rumble from other stuff. Contact mics might be different though.

But since I don't do acoustic recordings anymore, I have no need for high-pass filtering most of the time. One exception is if the synth I'm using creates DC bias in the signal path, but only badly designed freeware synths typically do that, and I quit using those years ago.

I do MIDI recordings using VSTi instruments. And I seldom use reverb.
But that brings me to one place where high-pass filtering might be useful...

High-pass filtering deep, long, full-size reverbs can sometimes help clean up the sound, but it depends upon the reverb and its settings. But even there, I just mean about 10% of HPF from within the reverb's settings.

Another place where I might use HPFing is within a synth patch to get a specific sound. But that's synthesis and not mixing.

I don't HPF tracks that don't need it. I've heard tracks that sound gutted and raspy and I think that's part of their problem--too much HPFing on every track. HPFing can do interesting things to some cymbal sounds, but I don't use cymbals much anyhow.
Nystagmus is offline   Reply With Quote
Old 10-17-2015, 01:36 AM   #97
Stews
Human being with feelings
 
Stews's Avatar
 
Join Date: Jun 2014
Posts: 1,392
Default

Does engaging the HP switch on a microphone also introduce this side effect?
Stews is offline   Reply With Quote
Old 10-17-2015, 01:40 AM   #98
clepsydrae
Human being with feelings
 
clepsydrae's Avatar
 
Join Date: Nov 2011
Posts: 3,409
Default

Quote:
Originally Posted by Stews View Post
Does engaging the HP switch on a microphone also introduce this side effect?
I have asked the same question before and was told that analog filters behave the same way as minimum-phase digital filters, so assuming that's correct, yes, HP switches would introduce phase shift as well.
clepsydrae is offline   Reply With Quote
Old 10-17-2015, 03:55 AM   #99
bazsound
Human being with feelings
 
Join Date: Jul 2015
Posts: 237
Default

Quote:
Originally Posted by timboz View Post
When running live sound FOH I tend to high-pass / low-cut almost everything but kick and bass.
While recording/mixing in the basement studio, it is only when needed.
i tend to hpf even kick and bass, depending on the kick and bass, even if it's only at 30hz. if there's stuff down there, it's going to suck up a bit of power even if the system can't produce it and is hpf'd anyway on the processing.

neither way is right or wrong. i've seen plenty of engineers get fantastic sound with very little HPF on bass instruments.
bazsound is offline   Reply With Quote
Old 10-17-2015, 04:17 AM   #100
Judders
Human being with feelings
 
Join Date: Aug 2014
Posts: 11,044
Default

Quote:
Originally Posted by Stews View Post
Does engaging the HP switch on a microphone also introduce this side effect?
As far as my knowledge goes (and that ain't far), yes; it is not a problem with the implementation, it is in the maths of the design.
Judders is offline   Reply With Quote
Old 10-30-2015, 01:48 AM   #101
Mr. PC
Human being with feelings
 
Mr. PC's Avatar
 
Join Date: Apr 2010
Location: Cloud 37
Posts: 1,071
Default

So just to clarify.

Phasing doesn't move the entire signal together, but can time-shift different frequencies different amounts. So for example the low-end might hit before the high-end, or vice versa.

Theoretically, if a square wave were phased enough, could it turn into an arpeggio?

Is there some consistent way of knowing *how* it will be phased? E.g. low first and high later for an HPF?

If each instrument is recorded independently in Mono, phasing could possibly have some effect, but is generally negligible, and possibly beneficial?
__________________
AlbertMcKay.com
SoundCloud BandCamp
ReaNote Hotkeys to make Reaper notation easy/fast
Mr. PC is offline   Reply With Quote
Old 10-30-2015, 07:43 AM   #102
Magicbuss
Human being with feelings
 
Join Date: Jul 2007
Posts: 1,957
Default

Quote:
Originally Posted by clepsydrae View Post
I have asked the same question before and was told that analog filters behave the same way as minimum-phase digital filters, so assuming that's correct, yes, HP switches would introduce phase shift as well.
Yeah, but again, if it's a vocal mic and you only have one mic on the vocal, the phase shift isn't going to do anything noticeable. Phase is always relative to something else. It won't affect anything unless you have multiple mics recording the same source.

Put a phase rotator plug on a soloed snare drum and move it around... you won't hear any difference. Now add the bottom snare in solo and mess with the phase, and it'll sound like a jet plane going by.
Magicbuss is online now   Reply With Quote
Old 10-30-2015, 09:05 AM   #103
Judders
Human being with feelings
 
Join Date: Aug 2014
Posts: 11,044
Default

Quote:
Originally Posted by Magicbuss View Post
Yeah, but again, if it's a vocal mic and you only have one mic on the vocal, the phase shift isn't going to do anything noticeable. Phase is always relative to something else. It won't affect anything unless you have multiple mics recording the same source.

Put a phase rotator plug on a soloed snare drum and move it around... you won't hear any difference. Now add the bottom snare in solo and mess with the phase, and it'll sound like a jet plane going by.
That is shifting the phase of the entire signal, not different portions of frequency by differing amounts.

I've read stuff by engineers who believe some of the distinctive character of some mics is due to this kind of phase distortion. Particularly cheap condensers.
Judders is offline   Reply With Quote
Old 10-30-2015, 10:36 AM   #104
Nystagmus
Human being with feelings
 
Nystagmus's Avatar
 
Join Date: Oct 2013
Posts: 509
Default

If you high-pass filter everything, you risk gutting yourself of many of the fundamental pitches of the instruments and just having harmonics. It might sound good on some instruments, but doing it on everything without listening first is just reckless.

It's one of those things which might have applications if you mic everything acoustically with mics that don't have a proper bass roll-off. But if the track is already prepared and recorded properly, you won't need to filter stuff. And of course, if it's electronic stuff, you can adjust the synths directly instead of filtering. Again, if you are filtering everything without listening to check whether it actually sounds good, you're just being reckless and messy, and potentially trashing every good track just because of your audio OCD.
Nystagmus is offline   Reply With Quote
Old 10-30-2015, 11:01 AM   #105
LightOfDay
Banned
 
Join Date: Jun 2015
Location: Lower Rhine Area, DE
Posts: 964
Default

Quote:
Originally Posted by Judders View Post

I've read stuff by engineers who believe some of the distinctive character of some mics is due to this kind of phase distortion. Particularly cheap condensers.
yeah, well, some engineers say that it is important whether speakers push out or pull in on a positive wave, and vice versa for a negative wave.

that's bullshit. (why don't people that don't know shit about technicalities shut the f*ck up and do what they can do better: making music?)

as said: phase is only important relative to something else. look at the phase in ReaEQ, dial in some allpass-filters, turn some knobs so that the phase meter goes nuts, and listen to it: no difference from bypassed ReaEQ, even though the phase is all over the place.

now duplicate the track with the ReaEQ and the allpass-filters, bypass ReaEQ on the second track, and play them together: now the sound is all over the place.

(allpass-filters do nothing but change the phase over the complete frequency range: the higher the frequency, the more phase-shifting. right: put 6 of them in a row, modulate their center-frequency, and what do you have? a phaser. try that with one allpass-filter.)
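
For the curious, that "modulated all-passes = phaser" recipe is only a few lines (Python/numpy assumed; 6 first-order stages, and the sweep range and rate are arbitrary choices):

Code:
import numpy as np

def phaser(x, fs, stages=6, f_lo=200.0, f_hi=2000.0, rate=0.5):
    y = np.empty_like(x)
    x1 = np.zeros(stages)        # one-sample input memory per all-pass stage
    y1 = np.zeros(stages)        # one-sample output memory per all-pass stage
    for n, xn in enumerate(x):
        # sweep the all-pass corner with a slow sine LFO
        fc = f_lo + (f_hi - f_lo) * 0.5 * (1 + np.sin(2 * np.pi * rate * n / fs))
        c = (np.tan(np.pi * fc / fs) - 1) / (np.tan(np.pi * fc / fs) + 1)
        s = xn
        for k in range(stages):  # H(z) = (c + z^-1) / (1 + c z^-1), cascaded
            out = c * s + x1[k] - c * y1[k]
            x1[k], y1[k] = s, out
            s = out
        y[n] = 0.5 * (xn + s)    # dry + rotated copy -> the moving notches
    return y

# swoosh = phaser(np.random.randn(2 * 48000), 48000)

The all-passes alone are inaudible; the notches only appear when the rotated copy is summed against the dry signal, which is exactly the point being made above.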
LightOfDay is offline   Reply With Quote
Old 10-30-2015, 11:46 AM   #106
Judders
Human being with feelings
 
Join Date: Aug 2014
Posts: 11,044
Default

Quote:
Originally Posted by LightOfDay View Post
yeah, well, some engineers say that it is important whether speakers push out or pull in on a positive wave, and vice versa for a negative wave.

that's bullshit. (why don't people that don't know shit about technicalities shut the f*ck up and do what they can do better: making music?)

as said: phase is only important relative to something else. look at the phase in ReaEQ, dial in some allpass-filters, turn some knobs so that the phase meter goes nuts, and listen to it: no difference from bypassed ReaEQ, even though the phase is all over the place.

now duplicate the track with the ReaEQ and the allpass-filters, bypass ReaEQ on the second track, and play them together: now the sound is all over the place.

(allpass-filters do nothing but change the phase over the complete frequency range: the higher the frequency, the more phase-shifting. right: put 6 of them in a row, modulate their center-frequency, and what do you have? a phaser. try that with one allpass-filter.)
There have been a number of studies that show phase distortion can be audible.

"2.2 Temporal Processes
Timbre [2] is a multidimensionally perceived tonal attribute that differentiates tones of identical pitch, loudness, and duration. It is influenced by steady state waveforms, transient characteristics (the onset especially), and slower spectral changes over a series of tones. For example, a piano and a trumpet can play the note A440 of identical frequency, sound pressure, and duration but have clearly audible differences. Although it was once believed that the human ear is “phase deaf,” in accordance to Ohm’s acoustical law [2], more recent research has shown that relative phase has subtle effects on timbre, in particular when changing phase relationships occur within a continuously sounding tone."

- http://mue.music.miami.edu/wp-conten...aisukeKoya.pdf

http://www.silcom.com/~aludwig/Phase_audibility.htm

"Phase distortion is the alteration of timing relationships between input and output, and between different frequencies that exist simultaneously in a particular sound. This should be distinguished from a phase shift that is proportional to frequency, and which does not cause phase distortion (Langford-Smith, 1960). Phase distortion occurs in almost all hearing aids due to the use of capacitors and inductors for amplifying and tailoring the frequency response in the circuitry. For example, band-split filters, commonly used in hearing aids with signal processing in two frequency bands, produce large alterations in phase at the cross-over frequency. Though phase relationships between the two ears are important for localization of sound (Batteau, 1967; Rodgers, 1981; Blauert, 1983), the significance for a listener of changes in relative phase within a complex sound at a single ear are uncertain. The ear may be unable to detect phase shifts in continuous tones (Scroggie, 1958), though phase is important to undistorted reproduction of transient sounds (Langford-Smith, 1960; Moller, 1978a; 1978b).
- http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4172235/
Judders is offline   Reply With Quote
Old 10-30-2015, 01:36 PM   #107
clepsydrae
Human being with feelings
 
clepsydrae's Avatar
 
Join Date: Nov 2011
Posts: 3,409
Default

Quote:
Originally Posted by Mr. PC View Post
So just to clarify.

Phasing doesn't move the entire signal together, but can time-shift different frequencies different amounts. So for example the low-end might hit before the high-end, or vice versa.
If we're talking about phase rotation, which changes the phase of frequencies "in place", then nothing changes in terms of when the various frequency ranges hit. It's kind of a mind bender that it's even possible, in my opinion, but it is. I think this animation does more to show the effect than anything:



Quote:
Theoretically, if a square wave were phased enough, could it turn into an arpeggio?
Phase-shifting a square wave just moves the phase, which makes it appear to move forwards and backwards, but without actually changing the location of where it "begins" (as demonstrated in the above gif with a sine-wave-like waveform).
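
Here's a short sketch of that "rotation in place" (scipy's analytic-signal hilbert() assumed; edge effects at the signal boundaries are ignored):

Code:
import numpy as np
from scipy.signal import hilbert

fs = 48000
t = np.arange(fs) / fs
x = np.sign(np.sin(2 * np.pi * 120 * t))          # square-ish wave

def rotate(sig, degrees):
    """Shift every frequency component by the same angle; magnitudes untouched."""
    return np.real(hilbert(sig) * np.exp(-1j * np.radians(degrees)))

for deg in (0, 45, 90, 180):
    y = rotate(x, deg)
    dmag = np.abs(np.abs(np.fft.rfft(x)) - np.abs(np.fft.rfft(y))).max()
    print(f"{deg:3d} deg: waveform peak {np.abs(y).max():.2f}, "
          f"max magnitude-spectrum change {dmag:.2e}")

The waveform can look radically different (the 90° version grows tall spikes) while the magnitude spectrum stays numerically identical, which is why a soloed track sounds the same after rotation.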

Judders' recent posts are referring to a situation where part of the frequency spectrum is phase shifted differently than another part. Again, no change in "arrival time" of the sound.

Quote:
If each instrument is recorded independently in Mono, phasing could possibly have some effect, but is generally negligible, and possibly beneficial?
Generally negligible, not likely to matter at all. Changing the phase on one of two unrelated signals recorded separately (e.g. a singer and guitar player) will very likely yield zero audible change, and by all received wisdom on the subject will sound identical to us. In a more contrived circumstance, e.g. separately recording steady sine tones from two different synths playing at the same frequency, it's possible that the mutual alignment of phase could be very important and adjustment might make a difference, but only over the life of that particular tone. Something like a kick drum against a bass is going to be uncorrelated enough to make phase adjustment irrelevant, and even if it were relevant you'd have to adjust each hit separately. (Again, talking about a no-bleed separate mono recording situation here: if there is any bleed then phase relation will matter.)

Quote:
Originally Posted by Magicbuss View Post
Yeah but again if its a vocal mic and you only have one mic on the vocal the phase shift isnt going to do anything noticeable. Phase is always relative to something else. It wont effect anything unless you have multiple mics recording the same source.
Agreed.
clepsydrae is offline   Reply With Quote
Old 10-30-2015, 02:16 PM   #108
clepsydrae
Human being with feelings
 
clepsydrae's Avatar
 
Join Date: Nov 2011
Posts: 3,409
Default

Quote:
Originally Posted by Judders View Post
There have been a number of studies that show phase distortion can be audible.
Fascinating stuff. Though I think we can agree that at most it's very subtle; meaning the idea that humans are "phase deaf" doesn't stem from a lack of investigating the question; tons of research (afaik) backs that up, but maybe they just haven't explored all the nuances yet.

Quote:
Although it was once believed that the human ear is “phase deaf,” in accordance to Ohm’s acoustical law [2], more recent research has shown that relative phase has subtle effects on timbre, in particular when changing phase relationships occur within a continuously sounding tone."
I looked for reference [2] and it's a textbook, without page numbers cited, but neat to know that there might be some investigation there. They do seem to note that it matters more when phase relationships change during a tone, which is of course a different thing.

Neat discussion there, too. Sounds like the going theory there is that distortion (in a speaker or in the ear) is what causes it to be potentially relevant, although it does reinforce that it's hard to produce the effect. Although the piano demo files are potentially confusing; the second is not just phase altered. Apparently the various frequency bands were randomly time adjusted, which is not only phase alteration but arrival time smearing, and doesn't seem to contribute much to the author's discussion.

(The Hawksford and Greenfield reference link is broken, but this seems to be the paper -- more about stereo perception than the issue at hand, afaict. Interesting, though.)

Quote:
The ear may be unable to detect phase shifts in continuous tones (Scroggie, 1958), though phase is important to undistorted reproduction of transient sounds
Intriguing, but it doesn't seem like their citations support the claim, or rather the key may be in their use of the term "reproduction", meaning, loudspeaker design: They refer to this paper, but the perceptible differences described therein refer to phase shift that results from actual arrival time shifts in loudspeaker reproduction: e.g. with bass frequencies arriving sooner or later than the midrange, etc, which of course is different from phase shifting in the pure sense, at least as far as my tenuous grasp of this stuff is concerned. This link is the other citation, by the same author, and just makes the same point.

As far as I'm aware, all phase shifting in minimum-phase EQs (analog and digital) happens without alteration of arrival time at all.

To me, the sum-up of the above seems to be that there is at least one paper (Lipshitz, abstract here) and also some reasoned discussion in support of the idea that changing the phase of a part of a signal can, in mostly contrived circumstances but in occasional real-world scenarios, be audible, due to distortion in the reproduction/perception, but that it's usually inaudible. Even the Lipshitz paper sums it up as: "It is stressed that none of these experiments thus far has indicated a present requirement for phase linearity in loudspeakers for the reproduction of music and speech." But interesting to know that there is some perceptible stuff in there somewhere.

Which, if correct, just brings us back to "try the high pass and see if it sounds bad". :-)

Thanks for the links.
clepsydrae is offline   Reply With Quote
Old 10-30-2015, 02:28 PM   #109
Mr. PC
Human being with feelings
 
Mr. PC's Avatar
 
Join Date: Apr 2010
Location: Cloud 37
Posts: 1,071
Default

Quote:
Originally Posted by clepsydrae View Post
Generally negligible, not likely to matter at all. Changing the phase on one of two unrelated signals recorded separately (e.g. a singer and guitar player) will very likely yield zero audible change, and by all received wisdom on the subject will sound identical to us. In a more contrived circumstance, e.g. separately recording steady sine tones from two different synths playing at the same frequency, it's possible that the mutual alignment of phase could be very important and adjustment might make a difference, but only over the life of that particular tone. Something like a kick drum against a bass is going to be uncorrelated enough to make phase adjustment irrelevant, and even if it were relevant you'd have to adjust each hit separately. (Again, talking about a no-bleed separate mono recording situation here: if there is any bleed then phase relation will matter.)

Agreed.
Ok, so in this case (completely no-bleed unrelated recordings) high-passing a signal that doesn't 'touch' the fundamental should have no negative consequence. For example, a high-pass on a flute at 80Hz (using a gentle slope to avoid that hidden boost) should be harmless.
__________________
AlbertMcKay.com
SoundCloud BandCamp
ReaNote Hotkeys to make Reaper notation easy/fast
Mr. PC is offline   Reply With Quote
Old 10-30-2015, 02:37 PM   #110
clepsydrae
Human being with feelings
 
clepsydrae's Avatar
 
Join Date: Nov 2011
Posts: 3,409
Default

Quote:
Originally Posted by Mr. PC View Post
Ok, so in this case (completely no-bleed unrelated recordings) high-passing a signal that doesn't 'touch' the fundamental should have no negative consequence. For example, a high-pass on a flute at 80Hz (using a gentle slope to avoid that hidden boost) should be harmless.
Right - unless some of the esoteric effects Judders and I and others are discussing come into play, but you can just listen to the result of the HPF and decide for yourself; there's nothing sneaky that's going to bite you later.

As far as this lowly internet commenter is aware. :-)
clepsydrae is offline   Reply With Quote
Old 10-30-2015, 03:48 PM   #111
Judders
Human being with feelings
 
Join Date: Aug 2014
Posts: 11,044
Default

Quote:
Originally Posted by clepsydrae View Post
Which, if correct, just brings us back to "try the high pass and see if it sounds bad". :-)
Oh, definitely.

But I do wonder if some of these engineers, who LightofDay thinks should "shut the f*ck up", might be on to something about microphone phase distortion having a subtle effect on timbre.

Quote:
Originally Posted by clepsydrae View Post
Thanks for the links.
You're welcome
Judders is offline   Reply With Quote
Old 10-30-2015, 03:52 PM   #112
LightOfDay
Banned
 
Join Date: Jun 2015
Location: Lower Rhine Area, DE
Posts: 964
Default

these studies show nothing contradicting what was said. they mention "relative" phase; that's a whole different thing, as explained at least 5 times in this thread.
LightOfDay is offline   Reply With Quote
Old 10-30-2015, 04:00 PM   #113
Judders
Human being with feelings
 
Join Date: Aug 2014
Posts: 11,044
Default

Quote:
Originally Posted by LightOfDay View Post
these studies show nothing contradicting what was said. they mention "relative" phase; that's a whole different thing, as explained at least 5 times in this thread.
From a single source.

Particularly the phase of the second harmonic in relation to the fundamental.

The pieces I linked to were not about multi-mic recording.
Judders is offline   Reply With Quote
Old 10-30-2015, 04:23 PM   #114
clepsydrae
Human being with feelings
 
clepsydrae's Avatar
 
Join Date: Nov 2011
Posts: 3,409
Default

Quote:
Originally Posted by Judders View Post
But I do wonder if some of these engineers, who LightofDay thinks should "shut the f*ck up", might be on to something about microphone phase distortion having a subtle effect on timbre.
Yeah -- any links for that? I presume it's phase distortion from electronics/filters/etc.? Or is there some magical alteration happening in the capsule or diaphragm itself? I recall that the delay-path stuff in capsules can be relevant to phase.

Quote:
Originally Posted by LightOfDay View Post
these studies show nothing contradicting what was said. there is mentioned "relative" phase. thats a whole different thing, as explained at least 5 times in this thread.
As Judders mentioned, one referenced study (Lipshitz) seems to describe cases where phase shift in single sources is audible. I've only seen that abstract and the discussion here but the explanation seems plausible enough to a layperson like me.

It doesn't sound like it rises to level of "something to worry about", but it's definitely interesting to note, and provocative enough to consider the ramifications of.
clepsydrae is offline   Reply With Quote
Old 10-30-2015, 04:27 PM   #115
Judders
Human being with feelings
 
Join Date: Aug 2014
Posts: 11,044
Default

Quote:
Originally Posted by clepsydrae View Post
Yeah -- any links for that?
No, sorry, just random bits of conversations.
Judders is offline   Reply With Quote
Old 11-03-2015, 11:06 AM   #116
Heb
Human being with feelings
 
Join Date: Aug 2010
Posts: 165
Default

This is what Paul Frindle, the man behind most of the Sonnox plugins, has to say about high-passing:
https://youtu.be/73PbD6omOGQ
Heb is offline   Reply With Quote
Old 11-03-2015, 12:29 PM   #117
clepsydrae
Human being with feelings
 
clepsydrae's Avatar
 
Join Date: Nov 2011
Posts: 3,409
Default

Quote:
Originally Posted by Heb View Post
This is what Paul Frindle, the man behind most of the Sonnox plugins, has to say about high-passing:
https://youtu.be/73PbD6omOGQ
Thanks for the link -- he makes the point (raised above) that high passing can cause clipping with signals near 0dBFS.

He also makes a statement at 2:00 about how high-passing at 20Hz causes a "not subtle, eh?" change in the clipped 100Hz tone. To my ears it's not only subtle, it's almost inaudible, so I'm either half deaf or he's hearing something in his monitors that isn't making it through YouTube.

I extracted the clean and HPF versions of the signal that happen just after 2:00, and did some ABX'ing. In 10 trials I got 80% accuracy with 95% confidence, so I think I can tell them apart, but it's really tough, and I have to switch directly from X to the two test files repeatedly and listen hard for the subtlest of changes.
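
For anyone curious where that confidence figure comes from, the arithmetic is just a one-sided binomial tail (assuming 8 of 10 correct and 50% guessing odds):

Code:
from math import comb

n, k = 10, 8                                   # trials, correct answers (80%)
p = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
print(f"chance of >= {k}/{n} by pure guessing: {p:.4f}")   # ~0.0547 -> ~95% confident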

Maybe it's my monitors, maybe it's my ears, maybe it's YouTube audio corruption, but from where I'm sitting it's very subtle. So I'm not taking this video as much of an argument against it, honestly, aside from the need to be careful with gainstage clipping.

You can test the files yourself using my ABX program (see sig). Files are here:
http://lacinato.com/pub/reaper/clean.wav
http://lacinato.com/pub/reaper/hpf.wav
...highest quality mp4 was downloaded from youtube, audio extracted to 16bit wav, short clips made and carefully looped a few times so the test files would be of a reasonable length for ABX'ing. No normalization or other gain changes made.
clepsydrae is offline   Reply With Quote
Old 11-03-2015, 12:51 PM   #118
Judders
Human being with feelings
 
Join Date: Aug 2014
Posts: 11,044
Default

I thought the 70Hz cut was very noticeable, but the 20Hz cut was subtle. Could well be a YouTube audio compression thing though.

I would have liked an example of more complex programme material, to see if that change in timbre is audible on the kind of stuff we might actually mix.
Judders is offline   Reply With Quote
Old 11-03-2015, 12:59 PM   #119
clepsydrae
Human being with feelings
 
clepsydrae's Avatar
 
Join Date: Nov 2011
Posts: 3,409
Default

Quote:
Originally Posted by Judders View Post
I thought the 70Hz cut was very noticeable, but the 20Hz cut was subtle. Could well be a YouTube audio compression thing though.
Agreed on all points, though if it was truly "not subtle" I'd expect to hear more than I did.

Quote:
I would have liked an example of more complex programme material, to see if that change in timbre is audible on the kind of stuff we might actually mix.
Also, his main point in the video seemed to be about the clipping, not the tonal effects of the 20Hz HPF.
clepsydrae is offline   Reply With Quote
Old 11-03-2015, 01:11 PM   #120
karbomusic
Human being with feelings
 
karbomusic's Avatar
 
Join Date: May 2009
Posts: 29,260
Default

I hope I can phrase this properly: 'significance' and 'playing it safe' are always pushing up against each other. Because of that, each person needs to decide for themselves what is significant or insignificant and how they deal with it.

On one hand, we can often deem such things insignificant and move on, freeing ourselves from being deadlocked over something that probably isn't worth worrying about. On the other hand, we can follow certain practices, aka err on the side of safety, which allows us to ignore all of this because the practice protects us from the unexpected. Both have value, and each individual needs to decide for themselves when to follow one or the other. IMHO we should never always follow either, which is why being an engineer requires both a brain and experience.

I'm always faced with one of the two choices above but never always choose only one of them. None of this means I wouldn't HP if I felt I needed to because I would, but I don't do it without reason. YMMV, 2 Cents, IMHO and all that.
__________________
Music is what feelings sound like.

Last edited by karbomusic; 11-03-2015 at 02:27 PM.
karbomusic is offline   Reply With Quote