Old 05-24-2021, 03:23 PM   #1
Jensus
Human being with feelings
 
Join Date: Apr 2021
Posts: 11
Default Ambisonic IR

Hi, I am trying to figure out the best and easiest way to create ambisonic impulse responses (IRs) for software like Ambi Verb and the Wwise Convolution Reverb.
I would like to:
1. Record an IR for a specific place in the sphere where I will pan my monophonic sound source.
2. Record an IR to be used more freely across the whole sphere.
For no. 1, should I place the speaker playing the white noise/sine sweep at the position where I want the virtual sound source, and record it there?
For no. 2, should I record several IRs around the location and combine them? Or, for example, record from a 2 m distance and use that for the entire sphere?
Does anyone have experience with this?
Jensus is offline   Reply With Quote
Old 05-25-2021, 09:55 AM   #2
Kewl
Human being with feelings
 
Join Date: Jan 2009
Location: Montreal, Canada
Posts: 131
Default

If a "true stereo" IR reverb is four IRs (2*2 matrix), a "true 1st order" IR reverb would be 16 IRs (4*4 matrix). It's been on my "to experiment list" for the past ten years...

https://www.avosound.com/en/tutorial...ono-and-stereo
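A quick numpy/scipy sketch of what that matrix means in practice (random placeholder IRs, untested against any real convolver): each of the n_in × n_out IRs convolves one input channel into one output channel, and the per-output results are summed.

```python
import numpy as np
from scipy.signal import fftconvolve

def matrix_convolve(x, irs):
    """x: (n_in, n_samples) input; irs: (n_in, n_out, ir_len) IR matrix.
    Returns (n_out, n_samples + ir_len - 1)."""
    n_in, n_out, ir_len = irs.shape
    y = np.zeros((n_out, x.shape[1] + ir_len - 1))
    for i in range(n_in):          # each input channel...
        for o in range(n_out):     # ...feeds every output channel
            y[o] += fftconvolve(x[i], irs[i, o])
    return y

# "true stereo": 2*2 = 4 IRs; "true 1st order": 4*4 = 16 IRs
b_format = np.random.randn(4, 48000)
irs = np.random.randn(4, 4, 4800) * 0.01   # placeholder 16-IR matrix
out = matrix_convolve(b_format, irs)       # 4-channel B-Format output
```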
Kewl is offline   Reply With Quote
Old 05-25-2021, 11:02 AM   #3
plush2
Human being with feelings
 
Join Date: May 2006
Location: Saskatoon, Canada
Posts: 2,063
Default

Quote:
Originally Posted by Kewl View Post
If a "true stereo" IR reverb is four IRs (2*2 matrix), a "true 1st order" IR reverb would be 16 IRs (4*4 matrix). It's been on my "to experiment list" for the past ten years...

https://www.avosound.com/en/tutorial...ono-and-stereo
Would it not simply be 16 independent IR processes for first order, rather than 16 different channels in the IR itself? Or would it actually be 12, since the W channel doesn't need to be convolved independently for each axis but rather as a matrix with the X/Y/Z signals? A stereo impulse is still only 2 channels; the "true stereo" part is convolving the left channel and right channel independently through that stereo impulse, then mixing the now-4-channel result back down to stereo. I was just thinking: when I take an impulse with my first-order mic, I don't need to do it 4 times from 4 locations to get a true ambisonic IR. Perhaps that's what you meant all along though? I'm just trying to get it straight in my mind.

Anyways, for number 1 you do want to place the speaker at the location you want the emitter at and the mic in the spot you want the listener to be located. In my understanding, if you move that emitter around the listener it will be like rotating the entire room as you can't virtually move that speaker after the fact.

However... Zylia just did a very interesting demo of synthesizing multiple ambisonic recording positions in a room. I assume the same idea could be used to synthesize multiple ambisonic IRs, especially since they used Wwise and Unity to do it. That is my answer to question 2.

It is labor and processor intensive so make sure it's going to pay off in the end.
plush2 is offline   Reply With Quote
Old 05-27-2021, 03:03 AM   #4
Jensus
Human being with feelings
 
Join Date: Apr 2021
Posts: 11
Default

My main goal is to do a 360 video and audio recording and then add sound sources with a reverb that is coherent with the space.
Now that I am getting into the complexity of this problem, my primary focus must be making an ambisonic IR with the speaker placed at the position of the panned mono source.
Ambi Verb and the Wwise Convolution Reverb let you import a 4-channel ambisonic IR recording. For panning a sound source across the entire sphere, the usual course of action seems to be to use one IR and trust that the localization information from the direct sound will overshadow the mismatch with the spatial reverb.
AudioEase offers a set of ambisonic reverbs in their 360pan suite, but I thought it would be great to enhance the realism with the correct reverb information.
I have also heard of workflows where you re-render the IR using a decoder plugin, do a soundfield rotation, and process the next mono sound source at the new position; some people also blur the localization of the direct sound if it clashes with the reverb.
Is it possible at all to create a 16 channel IR file and use it with a 360 panner in any software today?

Last edited by Jensus; 05-27-2021 at 06:25 AM.
Jensus is offline   Reply With Quote
Old 05-27-2021, 05:16 AM   #5
jm duchenne
Human being with feelings
 
jm duchenne's Avatar
 
Join Date: Feb 2006
Location: France
Posts: 861
Default

I don't know how to create the 16-channel IR file, but to process it you can use X-MCFX Volver or MConvolutionEZ from MeldaProduction, both free:
http://www.angelofarina.it/X-MCFX.htm
https://www.meldaproduction.com/MFreeFXBundle
jm duchenne is offline   Reply With Quote
Old 05-27-2021, 08:02 AM   #6
Kewl
Human being with feelings
 
Join Date: Jan 2009
Location: Montreal, Canada
Posts: 131
Default

Quote:
Originally Posted by Jensus View Post
Is it possible at all to create a 16 channel IR file and use it with a 360 panner in any software today?
A bit of work, but, yes.

For the IR capture and processing, I would use Logic Pro's Impulse Response Utility with the "Quadraphonic" preset for an A-Format mic or "Quadraphonic B-Format encoded" preset for a B-Format microphone. Both have 16 IRs.

For the convolution with the IRs, I would probably use X-Volver Essential. Signal flow:

For A-Format microphone IRs:
B-Format -> Decode to position of emitters -> A-Format IRs convolution -> B-Format encoding

B-Format microphone IRs:
B-Format -> Decode to position of emitters -> B-Format IRs convolution
Kewl is offline   Reply With Quote
Old 05-27-2021, 06:43 PM   #7
plush2
Human being with feelings
 
Join Date: May 2006
Location: Saskatoon, Canada
Posts: 2,063
Default

The problem you are going to continually run into with what you describe is that the sound source position of the IR determines the localization of the IR. The relationship between the impulse source (speaker, acetylene balloon, clapper) and the ambisonic microphone is fixed when you record that IR. You can rotate it, but you can't really move the speaker around after the fact, so to speak. You could potentially use the information learned from the IR to synthesize the properties of the room and create a reverb model (using something like CATT) that would allow you to freely position sources in that room. It's an interesting question whether there is an IR reverb that models in that fashion. Some of the shoebox options (like the one in IEM) offer something like this, and Dear VR Pro allows free positioning within certain hard-coded (not user-created) spaces.

Easiest would be to get a good ambisonic IR of the space and then tweak that to simulate your different positions, or just make several IRs with the source in the likely places you'll want to put your instruments.

As an aside, I'm loving Melda's MConvolutionEZ for its simplicity and non-crashiness (for me anyway) in comparison to X-Volver.

I use gratisvolver to create my impulses as I am not on a Mac.
plush2 is offline   Reply With Quote
Old 05-28-2021, 02:19 AM   #8
Jensus
Human being with feelings
 
Join Date: Apr 2021
Posts: 11
Default

Yes, it will take some trial and error to find a sufficient solution. Perhaps some situations can be handled with simple tools while others require more precision. Great ideas on where to dive in.
I will primarily record outdoors for the soundscapes I am working with, so it is also important that the setup is relatively easy to handle in that situation.

Just an idea.
- I have seen people summing IRs. I guess this would make the reverb less directional and cover a larger area.
https://www.openair.hosted.york.ac.uk/?page_id=483

- The Ambi Verb tutorial says you should record one IR 2 m from the emitter, e.g. a speaker on stage and the microphone 2 m from it in the audience. This could be a bit low resolution imo.
The sweep file is only 7 sec, but otherwise I am thinking of sending a white noise signal from a recorder directly into the speaker and then pressing stop, to get a broadband IR signal.
https://www.noisemakers.fr/faq/#1524...-004522eb-722b
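For what it's worth, instead of white noise I have also seen the deterministic exponential sine sweep approach recommended, since the matched inverse filter makes the deconvolution exact and pushes harmonic distortion ahead of the impulse. A rough Python sketch (the frequency range and duration are placeholders, not field-tested):

```python
import numpy as np

def exp_sweep(f1=20.0, f2=20000.0, dur=10.0, fs=48000):
    """Exponential sine sweep and its inverse filter (Farina method).
    Convolving a recording of the sweep with `inv` yields the IR."""
    t = np.arange(int(dur * fs)) / fs
    r = np.log(f2 / f1)
    sweep = np.sin(2 * np.pi * f1 * dur / r * (np.exp(t * r / dur) - 1.0))
    # inverse filter: time-reversed sweep with -6 dB/octave envelope
    inv = sweep[::-1] * np.exp(-t * r / dur)
    return sweep, inv
```

Play `sweep` through the speaker, record it with the ambisonic mic, then convolve each recorded channel with `inv`; the impulse appears at roughly the sweep's length into the result.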

- If I record 4 IRs (N, S, E, W) in the horizontal plane around the ambisonic microphone at a 2 m distance and sum them, the convolution would then be a panned mono signal with an ambisonic IR.

Would that give the reverb a rough directivity during a convolution in accordance to this logic https://www.avosound.com/en/tutorial...no-and-stereo?
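To sanity-check the summing idea with idealized numbers: if each IR were just a single first-order (WXYZ) direct arrival, summing the four directions cancels X and Y and leaves only the omni W, i.e. the directivity averages out. A toy sketch (synthetic impulses, not real measurements):

```python
import numpy as np

def bformat_direct_path(azimuth_deg, n=1024, delay=100):
    """Idealized first-order (W, X, Y, Z) IR of a single direct
    arrival from a horizontal azimuth -- illustration only."""
    az = np.radians(azimuth_deg)
    ir = np.zeros((4, n))
    ir[0, delay] = 1.0          # W: omni
    ir[1, delay] = np.cos(az)   # X: front/back
    ir[2, delay] = np.sin(az)   # Y: left/right
    return ir

# sources at N, E, S, W around the mic, averaged
summed = sum(bformat_direct_path(a) for a in (0, 90, 180, 270)) / 4
# X and Y cancel: the summed IR has lost its directivity
```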
Jensus is offline   Reply With Quote
Old 05-28-2021, 03:07 AM   #9
Kewl
Human being with feelings
 
Join Date: Jan 2009
Location: Montreal, Canada
Posts: 131
Default

Quote:
Originally Posted by Jensus View Post
If I record 4 IRs (N, S, E, W) in the horizontal plane around the ambisonic microphone at a 2 m distance and sum them, the convolution would then be a panned mono signal with an ambisonic IR.
If it's horizontal only, you can capture from three positions.
Kewl is offline   Reply With Quote
Old 05-28-2021, 05:40 AM   #10
Jensus
Human being with feelings
 
Join Date: Apr 2021
Posts: 11
Default

Ok, equally distributed from the front?
I will be looking into your description on Logic Pro's Impulse Response Utility Kewl.

FYI documentation on the Wwise convolution can be seen here:
https://www.audiokinetic.com/library...reverb_plug_in

and

https://www.audiokinetic.com/learn/videos/H50NRzZnd5k/
Jensus is offline   Reply With Quote
Old 05-28-2021, 10:37 AM   #11
Kewl
Human being with feelings
 
Join Date: Jan 2009
Location: Montreal, Canada
Posts: 131
Default

Quote:
Originally Posted by Jensus View Post
Ok, equally distributed from the front?
I will be looking into your description on Logic Pro's Impulse Response Utility Kewl.
120° from one another. It could be 0, 120, 240 or 60, 180, 300.
Kewl is offline   Reply With Quote
Old 05-29-2021, 10:29 AM   #12
olilarkin
Human being with feelings
 
Join Date: Apr 2009
Location: Berlin, Germany
Posts: 1,246
Default

Some notes I just read in the SPARTA Matrix Convolver source code:


Example 1, spatial reverberation: if you have a B-Format/Ambisonic room impulse response (RIR), you may convolve it with a monophonic input signal and the output will exhibit (much of) the spatial characteristics of the measured room. Simply load this Ambisonic RIR into the plug-in and set the number of input channels to 1. You may then decode the resulting Ambisonic output to your loudspeaker array (e.g. using sparta_ambiDEC) or to headphones (e.g. using sparta_ambiBIN). However, please note that the limitations of lower-order Ambisonics for signals (namely, colouration and poor spatial accuracy) will also be present with lower-order Ambisonic RIRs; at least, when applied in this manner. Consider referring to Example 3, for a more spatially accurate method of reproducing the spatial characteristics of rooms captured as Ambisonic RIRs.

Example 3, more advanced spatial reverberation: if you have a monophonic recording and you wish to reproduce it as if it were in your favourite concert hall, first measure a B-Format/Ambisonic room impulse response (RIR) of the hall, and then convert this Ambisonic RIR to your loudspeaker set-up using HOSIRR. Then load the resulting rendered loudspeaker array RIR into the plug-in and set the number of input channels to 1. Note it is recommended to use HOSIRR (which is a parametric renderer) to convert your B-Format/Ambisonic IRs into arbitrary loudspeaker array IRs, as the resulting convolved output will generally be more spatially accurate when compared to linear (non-parametric) Ambisonic decoding, as described by Example 1.
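Example 1 is easy to prototype outside a DAW too; a minimal sketch with a random placeholder standing in for a measured B-Format RIR (the mono signal is just convolved with each RIR channel, and the result is itself an ambisonic signal):

```python
import numpy as np
from scipy.signal import fftconvolve

def ambisonic_reverb(mono, rir):
    """mono: (n,) signal; rir: (channels, ir_len) ambisonic RIR.
    Returns an ambisonic wet signal, one convolution per channel."""
    return np.stack([fftconvolve(mono, ch) for ch in rir])

dry = np.random.randn(48000)
rir = np.random.randn(4, 24000) * 0.01   # placeholder FOA (B-Format) RIR
wet = ambisonic_reverb(dry, rir)         # shape: (4, 71999)
```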
__________________
VirtualCZ | Endless Series | iPlug2 | Linkedin | Facebook
olilarkin is offline   Reply With Quote
Old 08-17-2021, 03:48 AM   #13
glenesis
Human being with feelings
 
glenesis's Avatar
 
Join Date: Feb 2012
Location: Long Island & Rochester, NY, USA
Posts: 3
Default

Quote:
Originally Posted by Jensus View Post
The sweep file is only 7 sec, but otherwise I am thinking of sending a white noise signal from a recorder directly into the speaker and then pressing stop, to get a broadband IR signal.
Hi! I'm loving this thread.
I have not tried this yet, but Wave Arts offers a completely free and cross-platform application for making IR files using white noise to capture a broadband response. I think its manual said the number of channels it can handle is only limited by the channel count of your interface. They also co-created the excellent freeware true-stereo Convology XT convolution reverb plug-in, which I do use.
Pick your platform from the popup. MIs Tool is the name of the IR capture app...
https://wavearts.com/downloads/

In addition, you'll find the manual for Apple's Impulse Response tool contains numerous graphical, tried-and-true suggestions for multi-speaker, multi-microphone, and single mic and speaker capture methods. I bet something in there can be adapted to your needs.

https://tinyurl.com/AppleIR

I hope this is of use. Stay well!

Cheers,
Glenn in Rochester, NY, USA
glenesis is offline   Reply With Quote
Old 11-24-2021, 03:54 AM   #14
Joystick
Human being with feelings
 
Joystick's Avatar
 
Join Date: Jul 2008
Location: Athens / Greece
Posts: 602
Default

Quote:
Originally Posted by plush2 View Post
The problem you are going to continually run into in what you describe is that the sound source position of the IR determines the localization of the IR.
Another way to solve this is to control the transition of the original signal's early reflections into the reverb processor's late reflections.

The "O3A Reverb - Shaped Convolution" by Blue Ripple Sound takes this road; you can have a look at the documentation found at: https://www.blueripplesound.com/site...ide_v2.2.0.pdf

You can use that together with a shoebox early-reflection modeler like the "IEM RoomEncoder" or even Blue Ripple Sound's own "O3A Shoebox" processor.

For an early-reflections/late-reflections modeling solution based completely on the IEM suite, you can use the "IEM RoomEncoder" together with the "IEM FdnReverb".

Quote:
Originally Posted by plush2 View Post
As an aside, I'm loving the melda MconvolutionEZ.
I tried it using a mono signal encoded to HOA, then loaded an HOA IR I made for testing, and it gives erroneous output. Whatever the position of the source in the ambisonic field and whatever the behavior of the test IR, the plugin outputs the reflections in the wrong locations.

How did you make it work?

I test the convolution algorithms' directivity using a test IR file that I created at the SoundFellas Immersive audio Labs. It's a rhombicuboctahedron impulse emission 1 pap per sector per second and you can find it here: https://1drv.ms/u/s!AoFZ1MP3ewRggqY3...X33nw?e=QUgMdF.

My tests also output the wrong directivity in ReaVerb when I move the mono source below the mid horizontal plane on the ambisonic encoder/panner.

If you get correct results, please let me know how you did it, because I think something is going wrong in this plugin's engine that messes with the directivity.
__________________
Pan Athen
SoundFellas Immersive Audio Labs
www.soundfellas.com
Joystick is offline   Reply With Quote
Old 11-24-2021, 12:23 PM   #15
plush2
Human being with feelings
 
Join Date: May 2006
Location: Saskatoon, Canada
Posts: 2,063
Default

Joystick, my friend, this is a mind bender as the pdf manual description states.

Quote:
Originally Posted by Joystick View Post
Another way to solve this is to control the transition of the original signal's early reflections into the reverb processor's late reflections.

The "O3A Reverb - Shaped Convolution" by Blue Ripple Sound takes this road; you can have a look at the documentation found at: https://www.blueripplesound.com/site...ide_v2.2.0.pdf
I'm still trying to conceptualize how this is actually being accomplished. Having conversed with Richard Furse in the past I have no doubt that it does. I will need to do some serious thinking and experimentation and will post back when I feel like I have an idea how this is being done.

Regarding MConvolutionEZ...

Quote:
Originally Posted by Joystick View Post
I tried it using a mono signal encoded to HOA, then loaded an HOA IR I made for testing, and it gives erroneous output. Whatever the position of the source in the ambisonic field and whatever the behavior of the test IR, the plugin outputs the reflections in the wrong locations.

How did you make it work?
You almost gave me a heart attack. I did some acoustics consulting recently that relied fairly heavily on conclusions drawn using impulses convolved through MConvolutionEZ. Thank you for the link to your test setup. I can see what you mean by weird behaviour. I am not really set up at the moment to do HOA. I still use FOA for most of what I do as it winds up on internet streaming platforms and keeping the channel count down still works better there.

I tested using a pink noise generator run into SoundParticles SpaceController, which I set to stereo input and FOA output. I then ran a 4-channel instance of MConvolutionEZ with an FOA AmbiX impulse I made myself from a tone sweep through a Core Sound TetraMic using GratisVolver. Everything perceptually seemed to be coming from the correct locations.

I then loaded up a parallel track and fed it the same pink noise through the same panner, but with Sparta MultiConv. The perceptual results were the same and the two sources nulled perfectly. (To my great relief.)

Perhaps the problem exhibits only in higher orders?
plush2 is offline   Reply With Quote
Old 11-26-2021, 10:21 PM   #16
Rodulf
Human being with feelings
 
Join Date: May 2019
Posts: 234
Default

I feel dumb, now. What I do is test out every possible combination of plugin and fiddle around with the settings until it sounds right.
Rodulf is offline   Reply With Quote
Old 02-03-2022, 02:53 AM   #17
DrWig
Human being with feelings
 
DrWig's Avatar
 
Join Date: Feb 2007
Posts: 41
Default

Just seen this thread. This is how I did it. As mentioned, a 4 x 4 matrix is needed for 1st order, and it gets very high, very quickly, channel count wise, for higher orders. It sounded amazing on our large 3D rig, setup for Sounds in Space.

https://youtu.be/KhuW6xQhf6M?t=46m12s

Slides also available on this page:
https://www.brucewiggins.co.uk/?page_id=881

cheers

Bruce
DrWig is offline   Reply With Quote
Old 02-14-2022, 05:00 AM   #18
Joystick
Human being with feelings
 
Joystick's Avatar
 
Join Date: Jul 2008
Location: Athens / Greece
Posts: 602
Default

Quote:
Originally Posted by Rodulf View Post
I feel dumb, now. What I do is test out every possible combination of plugin and fiddle around with the settings until it sounds right.
It's not dumb, it's the creative process.

If you want to apply some extra engineering testing or a scientific experiment, you just take a break from that and do the other. ;-)

Here is my process...

When I discover something new that I want to add to my production process, I usually take one week off to really dig into it. I start by searching online, usually Wikipedia and any other info that is readily accessible. Then I have the keywords and phrases I need to search journals and libraries like the Audio Engineering Society or the Acoustical Society of America, or even search for open text on PubMed, Google Scholar, and other portals.

In the beginning, I was scared by the vast amount of mathematics, physics, life sciences, and computer science material you find in those papers. But later on, I learned how to read them "diagonally" to get what I need.

From all this effort I can finally pick, if I need to, one or two books, which I then schedule to read within the month, while the info is vivid in my mind.

Then I know how to select technologies, plugins, tools, etc. So my questions to any plugin sales department are usually directed to the engineers, hehehe! :-)

That's how I do it, and it has served me well for the last 20 years or so :-P

Hope I gave you some insight.
__________________
Pan Athen
SoundFellas Immersive Audio Labs
www.soundfellas.com
Joystick is offline   Reply With Quote
Old 02-14-2022, 06:28 AM   #19
Joystick
Human being with feelings
 
Joystick's Avatar
 
Join Date: Jul 2008
Location: Athens / Greece
Posts: 602
Default

Quote:
Originally Posted by DrWig View Post
Just seen this thread. This is how I did it. As mentioned, a 4 x 4 matrix is needed for 1st order, and it gets very high, very quickly, channel count wise, for higher orders. It sounded amazing on our large 3D rig, setup for Sounds in Space.
Hi Bruce,

What a nice presentation, thanks for sharing this, I really enjoyed watching it.

I approach IR production using a similar philosophy and it sounds great.

This is something that we are making for our Echotopia soundscape designer application and it sounds very realistic 3D-wise.
__________________
Pan Athen
SoundFellas Immersive Audio Labs
www.soundfellas.com
Joystick is offline   Reply With Quote
Old 06-14-2022, 06:35 AM   #20
Joystick
Human being with feelings
 
Joystick's Avatar
 
Join Date: Jul 2008
Location: Athens / Greece
Posts: 602
Default

Quote:
Originally Posted by plush2 View Post
Regarding MConvolutionEZ... You almost gave me a heart attack. I did some acoustics consulting recently that relied fairly heavily on conclusions drawn using impulses convolved through MConvolutionEZ. Thank you for the link to your test setup. I can see what you mean by weird behaviour. I am not really set up at the moment to do HOA. I still use FOA for most of what I do as it winds up on internet streaming platforms and keeping the channel count down still works better there.

I tested using a pink noise generator run into SoundParticles SpaceController, which I set to stereo input and FOA output. I then ran a 4-channel instance of MConvolutionEZ with an FOA AmbiX impulse I made myself from a tone sweep through a Core Sound TetraMic using GratisVolver. Everything perceptually seemed to be coming from the correct locations.

I then loaded up a parallel track and fed it the same pink noise through the same panner, but with Sparta MultiConv. The perceptual results were the same and the two sources nulled perfectly. (To my great relief.)

Perhaps the problem exhibits only in higher orders?
We did some more tests in my lab and I found erroneous results, which I sent to MeldaProduction. There is definitely an issue when using HOA. It can probably also produce erroneous results when the numbers of inputs/outputs differ through the pipeline, due to some extra processing the plugins carry out because their design is channel-based rather than scene-based.

The only way to check this is to use the test files I posted here: https://forum.soundfellas.com/viewtopic.php?t=51, and in order to be sure, you have to check it for every different routing configuration you use with the convolvers.

The free convolver seems to produce wrong results, while the paid convolver seems to produce correct results when the input/plugin/output channel counts match, there is a signal present in all channels declared throughout the chain, and the kernel topology is "Mono to Stereo", which is strange but it works.

I reported all my findings to MeldaProduction and am waiting for their reply, or hopefully a fix, for both the EZ and MB versions of their convolver.

Btw, their convolvers seem to be the best in terms of performance and resource management. I get great results measuring the real-time CPU load, so if they fix the ambisonics behavior they will probably become my favorite tool for the job.

Reaper's own convolution plugin also handles HOA erroneously. I will post my findings here when I have concluded my research.
__________________
Pan Athen
SoundFellas Immersive Audio Labs
www.soundfellas.com
Joystick is offline   Reply With Quote
Old 06-15-2022, 08:53 PM   #21
plush2
Human being with feelings
 
Join Date: May 2006
Location: Saskatoon, Canada
Posts: 2,063
Default

Quote:
Originally Posted by Joystick View Post
I reported all my findings to MeldaProduction and am waiting for their reply, or hopefully a fix, for both the EZ and MB versions of their convolver.

Btw, their convolvers seem to be the best in terms of performance and resource management. I get great results measuring the real-time CPU load, so if they fix the ambisonics behavior they will probably become my favorite tool for the job.
I will keep my eye on the beta releases from Melda then.
plush2 is offline   Reply With Quote
Old 10-08-2022, 08:54 AM   #22
Sikblast
Human being with feelings
 
Join Date: Jul 2022
Posts: 4
Default

Quote:
Originally Posted by plush2 View Post
The problem you are going to continually run into in what you describe is that the sound source position of the IR determines the localization of the IR. The relationship between the impulse (speaker, acetylene balloon, clapper) and the ambisonic microphone is fixed when you record that IR. You can rotate it but you can't really move the speaker around after the fact so to speak. You could potentially use the information learned from the IR to synthesize the properties of the room and create a reverb model (using something like CATT) that would allow you to freely position sources in that room. It's an interesting question, is there an IR reverb that models in that fashion. Some of the shoebox options (like the one in IEM) offer something like this? Dear VR Pro allows for free position within certain hard-coded (not user created) spaces.

Easiest would be to get a good ambisonic IR of the space and then tweak that to simulate your different positions, or just make several IRs with the source in the likely places you'll want to put your instruments.

As an aside, I'm loving the melda MconvolutionEZ for its simplicity and non-crashiness (for me anyways) in comparison to xvolvler.

I use gratisvolver to create my impulses as I am not on a Mac.
I tried to use CATT GratisVolver but it won't deconvolve a usable IR on my side. Do you have any advice on using it correctly?

I tried the "wet sweep deconvolve" mode with a 4-channel sweep (B-format) as the wet sweep, and tried mono, stereo, and 4-channel sweeps for the dry sweep, the same length as the wet one.

The .wav files it exports are not usable impulse responses.
Sikblast is offline   Reply With Quote
Old 10-09-2022, 06:07 PM   #23
plush2
Human being with feelings
 
Join Date: May 2006
Location: Saskatoon, Canada
Posts: 2,063
Default

What seems to be the problem with the impulse output? I just tried it and the result seems to work fine for me. It does need to be trimmed, as the actual IR winds up in the middle (timewise) of an output file the same length as the dry source sweep and wet recorded sweep. The results are FuMa as opposed to AmbiX.
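In case it helps, at first order the FuMa-to-AmbiX conversion is just a channel reorder plus a W gain change, and the mid-file impulse can be trimmed automatically; a rough sketch (the -60 dB threshold is an arbitrary choice):

```python
import numpy as np

def fuma_to_ambix_foa(ir):
    """First-order FuMa (W, X, Y, Z) -> AmbiX (ACN/SN3D: W, Y, Z, X).
    FuMa's W sits 3 dB low, so it gets scaled by sqrt(2)."""
    w, x, y, z = ir
    return np.stack([w * np.sqrt(2.0), y, z, x])

def trim_leading_silence(ir, threshold_db=-60.0):
    """Cut everything before the first sample that rises above the
    threshold (relative to the file's peak) on any channel."""
    env = np.max(np.abs(ir), axis=0)
    thr = env.max() * 10.0 ** (threshold_db / 20.0)
    start = int(np.argmax(env >= thr))
    return ir[:, start:]
```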
plush2 is offline   Reply With Quote
Old 10-10-2022, 11:18 AM   #24
Sikblast
Human being with feelings
 
Join Date: Jul 2022
Posts: 4
Default

Quote:
Originally Posted by plush2 View Post
What seems to be the problem with the impulse output? I just tried it and the result seems to work fine for me. It does need to be trimmed, as the actual IR winds up in the middle (timewise) of an output file the same length as the dry source sweep and wet recorded sweep. The results are FuMa as opposed to AmbiX.
I don't get what I usually obtain with ReaVerb (the .wav IR is usually something like a clap/click). What I get with GratisVolver is something like a sweep playing in the middle (timewise), but there is no impulse at the beginning of the file.

I tried to deconvolve with the "Wet sweep deconvolve" mode using a 4-channel inverted dry sweep .wav and a FuMa wet sweep (non-inverted) .wav file. Am I doing something wrong? The two files are exactly the same length (10-second files).

Thank you for your help!
Sikblast is offline   Reply With Quote
Old 10-10-2022, 08:17 PM   #25
plush2
Human being with feelings
 
Join Date: May 2006
Location: Saskatoon, Canada
Posts: 2,063
Default

Play the dry sweep file (Low to High) through the room/gear that you wish to get an impulse from. Record this ambisonically (FUMA).

Set GratisVolver to "Wet sweep deconvolve".

Put the Inv dry sweep file into GratisVolver as the "Inverted dry sweep WAV-file".

Load your recorded sweep into the "1, 2 or 4-channel wet sweep WAV-file" dialog, then set your output file.

Now you should get the desired impulse (I hope).
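As a toy end-to-end check of those steps: a sweep "played" through a simulated room (a delay plus one reflection), then deconvolved by convolving with the inverse sweep. For a linear sweep the inverse is simply the time-reversed sweep; note how the impulse lands mid-file, at (sweep length - 1) plus the room delay:

```python
import numpy as np
from scipy.signal import chirp, fftconvolve

fs = 8000
t = np.arange(fs) / fs
sweep = chirp(t, f0=50, f1=3500, t1=1.0)  # 1-second linear sweep
inv = sweep[::-1]                         # inverse of a *linear* sweep

# simulated "room": direct path at 100 samples, reflection at 700
room = np.zeros(2000)
room[100], room[700] = 1.0, 0.3
wet = fftconvolve(sweep, room)            # the "recorded" wet sweep

ir = fftconvolve(wet, inv)                # deconvolved impulse response
peak = int(np.argmax(np.abs(ir)))         # ~ len(sweep) - 1 + 100
```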
plush2 is offline   Reply With Quote
Old 11-21-2022, 12:59 PM   #26
Sikblast
Human being with feelings
 
Join Date: Jul 2022
Posts: 4
Default

It is working great, thank you plush2
Sikblast is offline   Reply With Quote
Old Yesterday, 11:16 AM   #27
Voyage Audio
Human being with feelings
 
Voyage Audio's Avatar
 
Join Date: Feb 2020
Location: San Diego, CA
Posts: 19
Default

We are working on this article discussing how to use an ambisonic microphone to create spatial audio impulse responses of an acoustic space. Would love some feedback and, of course, happy to answer any questions.
https://voyage.audio/impulse-respons...h-spatial-mic/
__________________
Download The Free Spatial Mic Reaper Session: https://voyage.audio/listen-to-spatial-mic/
Voyage Audio is offline   Reply With Quote