Old 04-05-2022, 11:46 AM   #241
Joystick
Human being with feelings
 
Joystick's Avatar
 
Join Date: Jul 2008
Location: Athens / Greece
Posts: 625
Default

Quote:
Originally Posted by Dodecahedron View Post
hello!
I don't know if this idea of an independent working group is still on the table, but if it is and you are still looking for people to join, I would be very happy to contribute what I can to such a group.
Of course it's on the table. I'm also gathering people to support open standards, and open-minded standards, heh :-P

I'm "Panos Kouvelis#2201" on Discord, or PM me on this forum and I'll give you the details. Cheers!
__________________
Pan Athen
SoundFellas Immersive Audio Labs
www.soundfellas.com
Joystick is offline   Reply With Quote
Old 04-05-2022, 04:00 PM   #242
Dodecahedron
Human being with feelings
 
Join Date: Apr 2022
Posts: 33
Default

Quote:
Originally Posted by junh1024 View Post
# REAPER ADM proposal 3.4


Very interesting!
A couple of questions:
One of the biggest problems with OBA (at least for music, movies and broadcasting; it's different for things like sound installations) is that, even though there is a somewhat universally accepted interchange format (ADM), different renderers by different vendors may interpret the same metadata in different ways. This is why the EBU has created the EAR. One may argue that this doesn't solve the problem, as the compatibility issues of course remain, but at least there is now a reference renderer for ADM to check against, and, as far as I understood, the workflow envisioned by the EBU is that content creators mix their projects through this renderer. If I understood your proposal correctly, an instance of RSP on the master would act as a monitoring plugin, which would mean that we'd be adding yet another renderer to the equation. This might not be a problem in some situations, but if I were to mix a project in that environment I would probably be a bit nervous about not being able to hear the mix through an "official" renderer. Is there a way to either directly implement EAR (it's open source, after all), or at least interface with an external renderer, maybe over OSC, or by conversion to SADM? There is currently no implementation of EAR that would allow us to do that (the Python one won't cut it), but that could be worked on.
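For offline checking, at least, the Python implementation should already be usable today. A minimal sketch (assuming the `ear` package from PyPI, which ships the `ear-render` command line tool; the file names are made up):

Code:
import subprocess

# Render an ADM BWF through the EBU reference renderer to a BS.2051
# layout ("4+5+0" is 5.1+4H). Offline only - this is not real-time.
def reference_render(adm_bwf, out_wav, layout="4+5+0"):
    subprocess.run(["ear-render", "-s", layout, adm_bwf, out_wav],
                   check=True)

reference_render("mix_adm.wav", "mix_ref_514.wav")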
Regarding the squashing of objects: I'm not entirely sure what the status quo of this is, but there has been some work on standardizing this for different profile conversions. I remember reading a paper by someone at the IRT some time ago that dealt with creating mathematical models for finding perceptually optimized solutions for squashing any ADM file down to whatever is required for a given target format. I'd have to look for that paper again; it was almost certainly in German, though, and I remember that some of the things the author proposed were pretty radical. In any case, it might be a good idea to talk to the people who are developing and standardizing the ADM, as there will, at some point, almost certainly be standardized conversion paths from the EBU production profile to the Dolby profile, MPEG-H profiles of varying complexities, etc.

Last edited by Dodecahedron; 04-05-2022 at 04:06 PM.
Dodecahedron is offline   Reply With Quote
Old 04-05-2022, 09:53 PM   #243
junh1024
Human being with feelings
 
Join Date: Feb 2014
Posts: 240
Default

Quote:
Originally Posted by Dodecahedron View Post

One of the biggest problems with OBA (at least for music, movies and broadcasting; it's different for things like sound installations) is that, even though there is a somewhat universally accepted interchange format (ADM), different renderers by different vendors may interpret the same metadata in different ways. This is why the EBU has created the EAR. One may argue that this doesn't solve the problem, as the compatibility issues of course remain, but at least there is now a reference renderer for ADM to check against, and, as far as I understood, the workflow envisioned by the EBU is that content creators mix their projects through this renderer. If I understood your proposal correctly, an instance of RSP on the master would act as a monitoring plugin, which would mean that we'd be adding yet another renderer to the equation. This might not be a problem in some situations, but if I were to mix a project in that environment I would probably be a bit nervous about not being able to hear the mix through an "official" renderer. Is there a way to either directly implement EAR (it's open source, after all), or at least interface with an external renderer, maybe over OSC, or by conversion to SADM? There is currently no implementation of EAR that would allow us to do that (the Python one won't cut it), but that could be worked on.
I think it would be best to leave RSP as the renderer since it has some unique features like being able to place 32 speakers absolutely anywhere, with variable contribution.

Quote:
Originally Posted by Dodecahedron View Post

Regarding the squashing of objects: I'm not entirely sure what the status quo of this is, but there has been some work on standardizing this for different profile conversions. I remember reading a paper by someone at the IRT some time ago, that dealt with creating mathematical models for finding perceptually optimiced solutions for squashing any ADM file down to whatever is required for a given target format. I'd have to look for that paper again, it was almost certainly in German, though, but I remember that some of the things the author proposed were pretty radical. In any case, it might be a good idea to talk to the people who are developing and standardizing the ADM, as there will, at some point, almost certainly be standardized conversion paths from EBU production profile to Dolby profile, MPEG-H profiles of varying complexities etc.
You can get away with a lot with perceptual squashing, even rendering to a fixed 9.1.6, but I assume the ADM will be edited further, hence my gentle approach to squashing. You can link the paper if you like.
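To show what I mean by gentle vs. aggressive squashing, here's a toy sketch (my own illustration only: no perceptual model, and not any standardized conversion), which just groups objects whose average directions sit within an angular threshold so they could be mixed together:

Code:
import numpy as np

# Toy object "squashing": greedily group objects whose mean directions
# (unit vectors) are within an angular threshold of a group's centroid.
def merge_objects(positions, threshold_deg=30.0):
    groups = []
    for i, p in enumerate(positions):
        for g in groups:
            c = np.mean([positions[j] for j in g], axis=0)
            c /= np.linalg.norm(c)
            if np.degrees(np.arccos(np.clip(p @ c, -1, 1))) < threshold_deg:
                g.append(i)
                break
        else:
            groups.append([i])
    return groups

# Four objects: two near front-left, one front-right, one overhead.
pos = [np.array(v, float) for v in
       [(0.7, 0.7, 0), (0.6, 0.8, 0), (0.7, -0.7, 0), (0, 0, 1)]]
pos = [v / np.linalg.norm(v) for v in pos]
print(merge_objects(pos))  # [[0, 1], [2], [3]]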

RSP uses a cubic (Cartesian) coordinate paradigm, and EAR accepts Cartesian, which gets converted to spherical, but last time I tried it the result wasn't great IMO, so I could file an issue later, I guess.
__________________
REAPER 2D/3D Surround suite: Instructions & Download | Discussion | Donate
junh1024 is offline   Reply With Quote
Old 04-06-2022, 03:26 AM   #244
Joystick
Human being with feelings
 
Joystick's Avatar
 
Join Date: Jul 2008
Location: Athens / Greece
Posts: 625
Default

Quote:
Originally Posted by Dodecahedron View Post
I'd have to look for that paper again, it was almost certainly in German.
Who writes a paper only in German? Hehehe.

Can you find the paper? Sounds interesting. I have some people who know German and could explain it to me.
__________________
Pan Athen
SoundFellas Immersive Audio Labs
www.soundfellas.com
Joystick is offline   Reply With Quote
Old 04-06-2022, 03:37 AM   #245
Dodecahedron
Human being with feelings
 
Join Date: Apr 2022
Posts: 33
Default

Quote:
Originally Posted by junh1024 View Post
I think it would be best to leave RSP as the renderer since it has some unique features like being able to place 32 speakers absolutely anywhere, with variable contribution.


You can get away with a lot with perceptual squashing, even rendering to a fixed 9.1.6, but I assume the ADM will be edited further, hence my gentle approach to squashing. You can link the paper if you like.

RSP uses a cubic (Cartesian) coordinate paradigm, and EAR accepts Cartesian, which gets converted to spherical, but last time I tried it the result wasn't great IMO, so I could file an issue later, I guess.
OK, fair enough. But it might still be a good idea to have at least the option of interfacing with an external renderer. Or should all the people who need to QC with an "official" renderer just produce their stuff with EPS, or whatever plugins may come along in the future?

Regarding squashing: yes, that might be a job for an external application anyway. My point was just that we should keep in mind that there will hopefully be standardized profile conversions, and we shouldn't have to invent the same thing twice, unless we want to have the option to deal with some custom workflow.
Yes, I know, the coordinate conversion is not ideal. Filing an issue might be a good idea, although it has been talked about in the context of another issue, so the developers should already know. In any case, this whole thing kind of breaks the EBU's recommendations anyway: the broadcast production profile supports both coordinate systems (so does EAR, of course), and they recommend converting coordinates only when you absolutely have to, and avoiding converting twice, as these conversions are never lossless. So maybe it would be a good idea if EPS supported both coordinate systems in the first place.
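Just to make it concrete why round-tripping is a problem: the naive math itself is trivial (a sketch using the BS.2076 conventions as I understand them: azimuth 0 at the front, positive to the left, in degrees; X right, Y front, Z up), but the profiles don't use this naive mapping, and even here a cube corner lands at distance > 1, which then has to be warped or clamped somehow:

Code:
import math

def cart_to_polar(x, y, z):
    azimuth = -math.degrees(math.atan2(x, y))
    elevation = math.degrees(math.atan2(z, math.hypot(x, y)))
    distance = math.sqrt(x*x + y*y + z*z)
    return azimuth, elevation, distance

def polar_to_cart(az, el, d):
    az, el = math.radians(az), math.radians(el)
    return (-d * math.sin(az) * math.cos(el),  # X (right)
            d * math.cos(az) * math.cos(el),   # Y (front)
            d * math.sin(el))                  # Z (up)

print(cart_to_polar(1.0, 1.0, 0.0))  # cube corner: (-45.0, 0.0, 1.414...)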
Dodecahedron is offline   Reply With Quote
Old 04-07-2022, 12:29 PM   #246
Dodecahedron
Human being with feelings
 
Join Date: Apr 2022
Posts: 33
Default

Quote:
Originally Posted by Joystick View Post
Who writes a paper only in German? Hehehe.

Can you find the paper? Sounds interesting. I have some people who know German and could explain it to me.
Just found it again:
https://curdt.home.hdm-stuttgart.de/PDF/Zimmermann.pdf
It's a master's thesis by Michael Zimmermann called "Untersuchung zur Optimierung der automatisierten Anpassung und Konvertierung von NGA Inhalten", which translates to something like "study on the optimisation of the automated adaptation and conversion of NGA content". There is an English abstract. I'm just now scrolling through the pages again, and what it seems to boil down to is this: he first uses VBAP to pre-render objects to a specific target speaker setup, which introduces a couple of errors, for instance a localisation error. Then he weighs that against psychoacoustic criteria to get a (perhaps) more meaningful, perceptually weighted localisation error. There were no listening tests conducted here, and I would personally expect there to be some surprises, as no psychoacoustic model is ever perfect, but it's still interesting.
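In case anyone wonders how you would even quantify a "localisation error" here: a common approach (my own sketch, not taken from the thesis) is to compare the Gerzon energy vector of the rendered speaker gains against the intended direction:

Code:
import numpy as np

# Gerzon energy vector: energy-weighted mean of the speaker directions.
def energy_vector(gains, speaker_dirs):
    e = gains ** 2
    v = (e[:, None] * speaker_dirs).sum(axis=0) / e.sum()
    return v / np.linalg.norm(v)

def localisation_error_deg(target_dir, gains, speaker_dirs):
    ev = energy_vector(gains, speaker_dirs)
    return np.degrees(np.arccos(np.clip(target_dir @ ev, -1.0, 1.0)))

# Two front speakers at -30/+30 deg azimuth (x = sin(az), y = cos(az));
# an object intended at +10 deg, panned mostly toward the +30 speaker.
def direction(az_deg):
    a = np.radians(az_deg)
    return np.array([np.sin(a), np.cos(a), 0.0])

speakers = np.array([direction(-30.0), direction(30.0)])
print(localisation_error_deg(direction(10.0),
                             np.array([0.5, 0.86]), speakers))  # ~6 deg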
Dodecahedron is offline   Reply With Quote
Old 04-12-2022, 07:50 AM   #247
matt_f
Human being with feelings
 
matt_f's Avatar
 
Join Date: Nov 2018
Posts: 29
Default

Quote:
Originally Posted by BartR View Post
OK, although Dolby said that EAR ADM has nothing to do with Atmos. Right when I thought there was a solution.
As you mention, it's not currently possible to get ADM media generated by the EAR Production Suite into some other tools, due to the different ADM profiles they use. Interoperability between tools and ADM profiles is a focus of the EBU (https://tech.ebu.ch/audio), so this situation should hopefully improve.
The EAR Production Suite is quite lenient with the ADM it takes in: it will try to import it regardless of which profile it is authored to. However, it exports conforming to the EBU Production Profile (https://tech.ebu.ch/docs/tech/tech3392.pdf). Therefore, for the time being, any down-chain tools also need to support the Production Profile in order to use EAR Production Suite exports.
matt_f is offline   Reply With Quote
Old 04-12-2022, 07:50 AM   #248
matt_f
Human being with feelings
 
matt_f's Avatar
 
Join Date: Nov 2018
Posts: 29
Default

Quote:
Originally Posted by junh1024 View Post
A new track is created, with the ADM on it, which sends beds & objects to new component tracks. New mono wavs are NOT created on disk (unlike EAR import).
(Rationale: This is fastest & most efficient.)
This is what you would expect, but actually I've found that for large channel-count assets (e.g. 40+ channels, which wouldn't be uncommon for ADM media) it is far more performant to write the audio for individual objects out to file (and use those individual files for takes on tracks) than it is to put the original high-channel-count file in a take on every track (and just use the channels you need from it). This is why the EAR Production Suite uses the former method. It could be that the poorer performance with the latter method is due to lots of low-level seek operations when multiple tracks are using the same asset, since this would become most apparent when it's a high-channel-count asset and there are lots of simultaneous instances of it. Perhaps it can be optimised in REAPER itself.
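If anyone wants to pre-split assets themselves, a rough sketch of the "write each object out to its own file" approach (assuming the third-party soundfile library; the input file name is made up), streamed in blocks so a 40+ channel BWF doesn't have to fit in memory:

Code:
import soundfile as sf

def split_to_mono(src_path, out_pattern="object_{:02d}.wav",
                  blocksize=65536):
    with sf.SoundFile(src_path) as src:
        outs = [sf.SoundFile(out_pattern.format(ch), "w",
                             samplerate=src.samplerate, channels=1,
                             subtype=src.subtype)
                for ch in range(src.channels)]
        while True:
            block = src.read(blocksize)   # frames x channels array
            if len(block) == 0:
                break
            for ch, out in enumerate(outs):
                out.write(block[:, ch])
        for out in outs:
            out.close()

split_to_mono("adm_import_48ch.wav")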
matt_f is offline   Reply With Quote
Old 04-12-2022, 07:51 AM   #249
matt_f
Human being with feelings
 
matt_f's Avatar
 
Join Date: Nov 2018
Posts: 29
Default

Quote:
Originally Posted by Dodecahedron View Post
One of the biggest problems with OBA (at least for music, movies and broadcasting; it's different for things like sound installations) is that, even though there is a somewhat universally accepted interchange format (ADM), different renderers by different vendors may interpret the same metadata in different ways. This is why the EBU has created the EAR.
I just wanted to reiterate your comment, as it makes a very good and important point. To ensure we have an accurate representation of the user experience when authoring ADM, we need to make sure our output is standards-compliant (ITU-R BS.2127 in this case), which includes monitoring. The EAR provides this (https://github.com/ebu/libear), which is why the EAR Production Suite was built around it. I'd certainly encourage any other ADM authoring solutions to use EAR so that we can maintain a consistent experience throughout production and through to consumption, regardless of pipeline/tools.
matt_f is offline   Reply With Quote
Old 04-12-2022, 07:54 AM   #250
matt_f
Human being with feelings
 
matt_f's Avatar
 
Join Date: Nov 2018
Posts: 29
Default

Quote:
Originally Posted by junh1024 View Post
RSP uses a cubic (Cartesian) coordinate paradigm, and EAR accepts Cartesian, which gets converted to spherical, but last time I tried it the result wasn't great IMO, so I could file an issue later, I guess.
The coordinate conversion provided by the EAR Production Suite is there primarily to support ADM conforming to the Dolby profile. The conversion from that coordinate space to spherical isn't a simple Cartesian-to-spherical conversion, because the Dolby coordinate space isn't a simple cube. This might be why the conversion doesn't appear to create a perfect representation in spherical coordinates: it is likely correct, but looks a little unexpected due to the unusual coordinate conversion.
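As a concrete illustration (my numbers, based on my understanding of the allocentric mapping): naive math puts the front-right cube corner at -45 degrees, whereas in the Dolby-style space that corner corresponds to the right front speaker, conventionally at around -30 degrees, so a correct converter has to warp the space rather than just apply trigonometry:

Code:
import math

# Naive Cartesian -> azimuth for the front-right cube corner (X=1, Y=1).
# An allocentric-aware converter would pull this corner toward the right
# front speaker position (around -30 deg) instead - hence the "warp".
x, y = 1.0, 1.0
print(-math.degrees(math.atan2(x, y)))  # -45.0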
matt_f is offline   Reply With Quote
Old 04-14-2022, 06:37 AM   #251
Dodecahedron
Human being with feelings
 
Join Date: Apr 2022
Posts: 33
Default

Quote:
Originally Posted by matt_f View Post
As you mention, it's not currently possible to get ADM media generated by the EAR Production Suite into some other tools, due to the different ADM profiles they use. Interoperability between tools and ADM profiles is a focus of the EBU (https://tech.ebu.ch/audio), so this situation should hopefully improve.
The EAR Production Suite is quite lenient with the ADM it takes in: it will try to import it regardless of which profile it is authored to. However, it exports conforming to the EBU Production Profile (https://tech.ebu.ch/docs/tech/tech3392.pdf). Therefore, for the time being, any down-chain tools also need to support the Production Profile in order to use EAR Production Suite exports.
That's very interesting. The site you linked to mentions a "squeezer". It sounds to me like that would be the thing that reduces ADM metadata to fit with different profiles, right? Would that be completely automatic, or still require some user interaction?
Dodecahedron is offline   Reply With Quote
Old 04-14-2022, 07:05 AM   #252
Dodecahedron
Human being with feelings
 
Join Date: Apr 2022
Posts: 33
Default

Quote:
Originally Posted by matt_f View Post
I just wanted to reiterate your comment, as it makes a very good and important point. To ensure we have an accurate representation of the user experience when authoring ADM, we need to make sure our output is standards-compliant (ITU-R BS.2127 in this case), which includes monitoring. The EAR provides this (https://github.com/ebu/libear), which is why the EAR Production Suite was built around it. I'd certainly encourage any other ADM authoring solutions to use EAR so that we can maintain a consistent experience throughout production and through to consumption, regardless of pipeline/tools.
Thank you! I have a question about the EAR workflow, though: even if everyone implemented EAR as their renderer for production, we would still have to deliver our final production in a consumer format like Atmos or MPEG-H, so at some point we would still be dealing with different renderers. How would we deal with this from a mixing point of view? Could we just do whatever we want within the capabilities of EAR and expect whatever tool is used to convert to a vendor-specific profile to make adjustments to the metadata, so that perceptual differences ideally become negligible?
Dodecahedron is offline   Reply With Quote
Old 04-18-2022, 04:23 AM   #253
krabbencutter
Human being with feelings
 
Join Date: Feb 2012
Posts: 29
Default

Whatever a future Reaper x Spatial Audio implementation might look like, I really hope it can do panned sends for objects. Last time I tried EAR this wasn't possible, afaik. I could fake it by sending the mono object to another track and then using ReaSurroundPan to send it to the multichannel reverb with the correct positioning. But this would only work for static objects and needed a lot of unnecessary routing. And because ReaSurroundPan does not have an automatable "rotation" parameter, I was not able to link the EAR panner and ReaSurroundPan together. Another workaround was to render out multichannel stems for the object groups and then send those to the reverb, which had the drawback of not being realtime.
krabbencutter is offline   Reply With Quote
Old 04-18-2022, 04:56 AM   #254
Dodecahedron
Human being with feelings
 
Join Date: Apr 2022
Posts: 33
Default

Quote:
Originally Posted by krabbencutter View Post
Whatever a future Reaper x Spatial Audio implementation might look like, I really hope it can do panned sends for objects. Last time I tried EAR this wasn't possible, afaik. I could fake it by sending the mono object to another track and then using ReaSurroundPan to send it to the multichannel reverb with the correct positioning. But this would only work for static objects and needed a lot of unnecessary routing. And because ReaSurroundPan does not have an automatable "rotation" parameter, I was not able to link the EAR panner and ReaSurroundPan together. Another workaround was to render out multichannel stems for the object groups and then send those to the reverb, which had the drawback of not being realtime.
I agree! Being able to have some sort of linking between plugins, so that we can communicate object positions to multichannel reverbs, would be great. I don't know how hard it would be to implement this, but there is a protocol for the transmission of ADM metadata over OSC:
https://github.com/immersive-audio-live/ADM-OSC
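A minimal sketch of what sending a position over ADM-OSC could look like (assuming the python-osc package; the addresses are how I read the spec linked above, and the port number is my assumption, so double-check against the repo):

Code:
from pythonosc.udp_client import SimpleUDPClient

# Send one object's polar position to a renderer/reverb listening on UDP.
client = SimpleUDPClient("127.0.0.1", 4001)
obj = 1                                             # object number
client.send_message(f"/adm/obj/{obj}/azim", 30.0)   # azimuth, degrees
client.send_message(f"/adm/obj/{obj}/elev", 15.0)   # elevation, degrees
client.send_message(f"/adm/obj/{obj}/dist", 1.0)    # normalized distance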

Although, tbh, this whole issue of objects + reverb goes a lot deeper than that. Most surround reverbs, even if they respond to the panning of the input, only support standard surround and 3d configurations. When routing them to a bed track and playing them back on a different surround setup (for instance a 7.x.4 reverb on a 5.x.4 system), the reverb would fold down, with the rears and the sides being combined, whereas the renderer would still pan the object, unless it is locked to the nearest speaker.
Dodecahedron is offline   Reply With Quote
Old 04-19-2022, 08:15 AM   #255
matt_f
Human being with feelings
 
matt_f's Avatar
 
Join Date: Nov 2018
Posts: 29
Default

On the reverb stuff, it's a tricky one. I don't think panned sends are the solution though (not in the long term, anyway). With panning, you essentially bake in the positional data, in which case you might as well bake the object into the multichannel reverb output too.

Quote:
Originally Posted by Dodecahedron View Post
That's very interesting. The site you linked to mentions a "squeezer". It sounds to me like that would be the thing that reduces ADM metadata to fit with different profiles, right? Would that be completely automatic, or still require some user interaction?
I think it has the potential to be, perhaps through a combination of different transformations to mould the ADM to fit a particular profile. It's only recently been added to their workplan, so I imagine it would be something fairly manual initially. Ultimately, though, I think we're going to need something that can just sit in the background in a media production pipeline and get it done without any manual intervention, but that might be some way away.

Quote:
Originally Posted by Dodecahedron View Post
Thank you! I have a question about the EAR workflow, though: even if everyone implemented EAR as their renderer for production, we would still have to deliver our final production in a consumer format like Atmos or MPEG-H, so at some point we would still be dealing with different renderers. How would we deal with this from a mixing point of view? Could we just do whatever we want within the capabilities of EAR and expect whatever tool is used to convert to a vendor-specific profile to make adjustments to the metadata, so that perceptual differences ideally become negligible?
That's a good point. In short, yes. The ideal situation is one in which we can author ADM and hear it through a single reference renderer and assume conversion to any consumer format would very closely resemble that. It would be undesirable to rely on anything codec-specific as that affects the reusability of ADM - your ADM production becomes codec-specific because you can only guarantee the end-user experience with that particular setup. With ADM we should be able to create a single production which we can chuck through any delivery/emission pipeline without worrying about the end-user experience - it should be just as we intended during production, and the only way we can ensure that without having to QC every output is if the conversion processes aim to produce an output which closely resembles reference rendering.
matt_f is offline   Reply With Quote
Old 04-19-2022, 06:38 PM   #256
potscrubber
Human being with feelings
 
Join Date: Nov 2008
Posts: 19
Default

As I was a contributor to this thread at one point, I'm just back to report that I've purchased Nuendo to get going with Dolby Atmos* for now. I will keep an eye on Reaper and hope it at least develops external renderer capability at some point.

*I do know the field is bigger than Dolby Atmos
potscrubber is offline   Reply With Quote
Old 04-21-2022, 02:15 AM   #257
jm duchenne
Human being with feelings
 
jm duchenne's Avatar
 
Join Date: Feb 2006
Location: France
Posts: 914
Default

About reverb integration in OBA, with support for both MPEG-H and Dolby Atmos, I think you could find this very interesting:
https://www.youtube.com/watch?v=N0jdlafmgyQ
https://fiedler-audio.com/spacelab-interstellar/

I am currently testing the trial version, and it is impressive...
__________________
Acousmodules: multichannel / spatial audio plugins http://acousmodules.free.fr
jm duchenne is offline   Reply With Quote
Old 04-21-2022, 01:25 PM   #258
Dodecahedron
Human being with feelings
 
Join Date: Apr 2022
Posts: 33
Default

Quote:
Originally Posted by jm duchenne View Post
About reverb integration in OBA, with support for both MPEG-H and Dolby Atmos, I think you could find this very interesting:
https://www.youtube.com/watch?v=N0jdlafmgyQ
https://fiedler-audio.com/spacelab-interstellar/

I am currently testing the trial version, and it is impressive...
Thanks for sharing! This is indeed interesting.
What kind of setup are you testing it on? It seems like it can output an absolutely insane number of channels.
Dodecahedron is offline   Reply With Quote
Old 04-21-2022, 01:43 PM   #259
cyrano
Human being with feelings
 
cyrano's Avatar
 
Join Date: Jun 2011
Location: Belgium
Posts: 5,246
Default

Thx, JM. Somehow I missed that plugin and it might be everything I'm looking for. Price (600€) is a bit steep, but if it's everything I need, I'll bite the bullet.
__________________
In a time of deceit telling the truth is a revolutionary act.
George Orwell
cyrano is offline   Reply With Quote
Old 04-22-2022, 05:08 AM   #260
jm duchenne
Human being with feelings
 
jm duchenne's Avatar
 
Join Date: Feb 2006
Location: France
Posts: 914
Default

Quote:
Originally Posted by Dodecahedron View Post
Thanks for sharing! This is indeed interesting. What kind of setup are you testing it on? It seems like it can output an absolutely insane number of channels.
I am currently testing it with a 52-channel setup, and except for some channel mismatch with the VST3 version it works great.
And Thomas Fiedler is very responsive, which adds another good point to his quality software.
Alas, even if the price is justified, it is too expensive for me, so I will only enjoy it during the 14-day trial ;-)
__________________
Acousmodules: multichannel / spatial audio plugins http://acousmodules.free.fr
jm duchenne is offline   Reply With Quote
Old 06-06-2022, 01:07 AM   #261
BartR
Human being with feelings
 
BartR's Avatar
 
Join Date: Oct 2014
Location: Belgium
Posts: 1,612
Default

Atmos is now in cars: Mercedes and Volvo have started to commercialize this system. Yes, it's arguable how well they can optimize such an environment, but it's a fact.

I can't wait for the moment when Reaper's panner will be compatible with the Dolby Renderer. I mean, the moment when, on exporting the project, the appropriate WAV is generated together with the required metadata for the Renderer.

At the moment the only alternative is:
to track in Reaper
to import into DaVinci Resolve Studio
to export the required Dolby files from DaVinci Resolve Studio

... and then we can upload to the market.

But working in Fairlight is cumbersome (compared to Reaper). I would prefer to do everything within Reaper and then just finalize with the Dolby Renderer, OR just import into DaVinci to encode it ...
__________________
Reaper: always the most up-to-date.
O.S.: Windows 11 Pro
ReaPack (with bilingual Tutorials): https://bit.ly/ReaPack_Repository
BartR is offline   Reply With Quote
Old 06-06-2022, 04:35 AM   #262
Joystick
Human being with feelings
 
Joystick's Avatar
 
Join Date: Jul 2008
Location: Athens / Greece
Posts: 625
Default

Quote:
Originally Posted by BartR View Post
But to work in Fairlight is cumbersome (when compared to Reaper).
I agree. I tried DaVinci, but its audio system is very basic compared to audio workstations, which is normal as it is a video editing application. For video editing, though, it's great.

It also lacks any support for easily working with Ambisonics, which I use in my productions.

So, at the moment, for any clients that need Atmos I use Nuendo to compose the Atmos masters.
__________________
Pan Athen
SoundFellas Immersive Audio Labs
www.soundfellas.com
Joystick is offline   Reply With Quote
Old 06-06-2022, 04:30 PM   #263
sguyader
Human being with feelings
 
Join Date: Dec 2020
Posts: 175
Default

For those on MacOS, have you tried this?
https://www.youtube.com/channel/UCsh...0mNYF6hgLeEz-A

Patrice Lazareff seems to be mixing regularly in Dolby Atmos with Reaper, sending Reaper's audio through Dolby Audio Bridge into Dolby Atmos Renderer, with success.
sguyader is online now   Reply With Quote
Old 06-07-2022, 01:01 AM   #264
BartR
Human being with feelings
 
BartR's Avatar
 
Join Date: Oct 2014
Location: Belgium
Posts: 1,612
Default

Quote:
Originally Posted by sguyader View Post
For those on MacOS, have you tried this?
https://www.youtube.com/channel/UCsh...0mNYF6hgLeEz-A

Patrice Lazareff seems to be mixing regularly in Dolby Atmos with Reaper, sending Reaper's audio through Dolby Audio Bridge into Dolby Atmos Renderer, with success.
I know that for those who have a Mac it's not an issue (I know the channel; I'm subscribed to it). It's possible and it works. But it's still quite cumbersome.

Who's totally fucked up are the ones on Windows.
__________________
Reaper: always the most up-to-date.
O.S.: Windows 11 Pro
ReaPack (with bilingual Tutorials): https://bit.ly/ReaPack_Repository
BartR is offline   Reply With Quote
Old 06-07-2022, 10:28 AM   #265
plush2
Human being with feelings
 
Join Date: May 2006
Location: Saskatoon, Canada
Posts: 2,110
Default

Quote:
Originally Posted by BartR View Post
Who's totally fucked up are the ones on Windows.
Truly, when it comes to Dolby this has always been the case.

If support comes it will come long after the new technology can be effectively leveraged for profit.
plush2 is online now   Reply With Quote
Old 06-07-2022, 04:31 PM   #266
sguyader
Human being with feelings
 
Join Date: Dec 2020
Posts: 175
Default

Well that's a pity for Windows users.
sguyader is online now   Reply With Quote
Old 06-07-2022, 10:04 PM   #267
BartR
Human being with feelings
 
BartR's Avatar
 
Join Date: Oct 2014
Location: Belgium
Posts: 1,612
Default

Quote:
Originally Posted by sguyader View Post
Well that's a pity for Windows users.
The only way is:

With Reaper (when really ready), export the required format that can be digested by the Dolby Renderer.
Get the Dolby Renderer (for Mac it costs around 300 euro, for Windows 995), import the required file set, then export the encoded deliverable required for the digital market.
__________________
Reaper: always the most up-to-date.
O.S.: Windows 11 Pro
ReaPack (with bilingual Tutorials): https://bit.ly/ReaPack_Repository
BartR is offline   Reply With Quote
Old 06-24-2022, 01:43 AM   #268
ednolbed
Human being with feelings
 
Join Date: Nov 2021
Posts: 7
Default RSP single instance

You can use RSP on either a Master bus, a folder track or a bus, as a single instance.

Via routing, you can route any track or folder to an object in the RSP. You can fully define an object, you can automate everything.

That way you don't need an instance of RSP on every track; you just need one, on the bus that receives everything.

Wouldn't that be the way to go in terms of object-based mixing in Reaper (including Atmos)?

All Reaper should then need to do is write to an ADM file, right? Just like DPP export was implemented a couple of years back.

I have made many spatial mixes using either ReaSurround or ReaSurroundPan in this way, as a single instance that receives all the tracks via routing.
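On the "write to an ADM file" part, here is a rough sketch of the metadata side: turning one automation point into a BS.2076 audioBlockFormat element (the IDs and time format follow the examples in the spec as I understand them; a real export also needs the surrounding audioProgramme/audioContent/audioObject/audioPackFormat structure and the BW64 axml chunk):

Code:
import xml.etree.ElementTree as ET

# One ADM block: "this object sits at (az, el, dist) for this time span".
def block_format(channel_fmt_id, index, rtime, duration, az, el, dist):
    bf = ET.Element("audioBlockFormat",
                    audioBlockFormatID=f"AB_{channel_fmt_id}_{index:08X}",
                    rtime=rtime, duration=duration)
    for coord, val in (("azimuth", az), ("elevation", el),
                       ("distance", dist)):
        ET.SubElement(bf, "position", coordinate=coord).text = f"{val:.2f}"
    return bf

bf = block_format("00031001", 1, "00:00:00.00000", "00:00:01.00000",
                  30.0, 0.0, 1.0)
print(ET.tostring(bf, encoding="unicode"))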
ednolbed is offline   Reply With Quote
Old 06-27-2022, 01:27 AM   #269
BartR
Human being with feelings
 
BartR's Avatar
 
Join Date: Oct 2014
Location: Belgium
Posts: 1,612
Default

Quote:
Originally Posted by ednolbed View Post
You can use RSP on either a Master bus, a folder track or a bus, as a single instance.

Via routing, you can route any track or folder to an object in the RSP. You can fully define an object, you can automate everything.

That way you don't need an instance of RSP on every track; you just need one, on the bus that receives everything.

Wouldn't that be the way to go in terms of object-based mixing in Reaper (including Atmos)?

All Reaper should then need to do is write to an ADM file, right? Just like DPP export was implemented a couple of years back.

I have made many spatial mixes using either ReaSurround or ReaSurroundPan in this way, as a single instance that receives all the tracks via routing.
The missing part is the metadata that should be generated to be read by the renderer. And it must be compatible with it, or it's useless.
I hoped that the current native RSP could have this... but still not yet...
__________________
Reaper: always the most up-to-date.
O.S.: Windows 11 Pro
ReaPack (with bilingual Tutorials): https://bit.ly/ReaPack_Repository

Last edited by BartR; 06-27-2022 at 11:04 AM.
BartR is offline   Reply With Quote
Old 06-29-2022, 12:59 AM   #270
musicbynumbers
Human being with feelings
 
musicbynumbers's Avatar
 
Join Date: Jun 2009
Location: South, UK
Posts: 14,214
Default

As another person with a Dolby Atmos 7.1.4 setup, I'm looking forward to this! Although I'm happy with a bed for music, there's the annoying issue of the bed having fewer than 12 channels without objects; it would be good to have full support for getting around that, and obviously for cinema work too.

Subscribed!
__________________
subproject FRs click here
note: don't search for my pseudonym on the web. The "musicbynumbers" you find is not me or the name I use for my own music.
musicbynumbers is offline   Reply With Quote
Old 07-02-2022, 03:02 PM   #271
BartR
Human being with feelings
 
BartR's Avatar
 
Join Date: Oct 2014
Location: Belgium
Posts: 1,612
Default

Quote:
Originally Posted by musicbynumbers View Post
As another person with a Dolby Atmos 7.1.4 setup, I'm looking forward to this! Although I'm happy with a bed for music, there's the annoying issue of the bed having fewer than 12 channels without objects; it would be good to have full support for getting around that, and obviously for cinema work too.

Subscribed!
The bed system is not object-based, so I wouldn't consider it real Atmos. It's more like an extended 7.1. It's not scalable.

Atmos is object-based and, as a consequence, scalable: on any speaker configuration it tries to represent the location of the object in space as best as possible. This is achieved through metadata that, together with the sound, make up the object.
__________________
Reaper: always the most up-to-date.
O.S.: Windows 11 Pro
ReaPack (with bilingual Tutorials): https://bit.ly/ReaPack_Repository
BartR is offline   Reply With Quote
Old 07-02-2022, 03:44 PM   #272
musicbynumbers
Human being with feelings
 
musicbynumbers's Avatar
 
Join Date: Jun 2009
Location: South, UK
Posts: 14,214
Default

Quote:
Originally Posted by BartR View Post
The bed system is not object-based, so I wouldn't consider it real Atmos. It's more like an extended 7.1. It's not scalable.

Atmos is object-based and, as a consequence, scalable: on any speaker configuration it tries to represent the location of the object in space as best as possible. This is achieved through metadata that, together with the sound, make up the object.
Indeed I'm aware of that.

I'd like full support so I don't have to be limited to only 2 overhead channels for just doing my own music in Atmos. Not so much for object panning.

In my day job, FMOD and Wwise handle any Atmos stuff I do, but if I ever have the willpower to do linear media again, it sure would be nice to have full Atmos support (although I'm still not a fan of objects and the limit on their number, unless that's changed).
__________________
subproject FRs click here
note: don't search for my pseudonym on the web. The "musicbynumbers" you find is not me or the name I use for my own music.
musicbynumbers is offline   Reply With Quote
Old 07-02-2022, 07:47 PM   #273
plush2
Human being with feelings
 
Join Date: May 2006
Location: Saskatoon, Canada
Posts: 2,110
Default

Quote:
Originally Posted by BartR View Post
The bed system is not object-based, so I wouldn't consider it real Atmos. It's more like an extended 7.1. It's not scalable.

Atmos is object-based and, as a consequence, scalable: on any speaker configuration it tries to represent the location of the object in space as best as possible. This is achieved through metadata that, together with the sound, make up the object.
Quote:
Originally Posted by musicbynumbers View Post
Indeed I'm aware of that.

I'd like full support so I don't have to be limited to only 2 overhead channels for just doing my own music in Atmos. Not so much for object panning.

In my day job, FMOD and Wwise handle any Atmos stuff I do, but if I ever have the willpower to do linear media again, it sure would be nice to have full Atmos support (although I'm still not a fan of objects and the limit on their number, unless that's changed).
I'm a little confused; I understood that the bed was still an essential component in creating an Atmos mix. Is the intention to abandon the X.1 bed configurations and move toward doing entire mixes using objects only?
plush2 is online now   Reply With Quote
Old 07-03-2022, 02:10 AM   #274
BartR
Human being with feelings
 
BartR's Avatar
 
Join Date: Oct 2014
Location: Belgium
Posts: 1,612
Default

Quote:
Originally Posted by plush2 View Post
I'm a little confused; I understood that the bed was still an essential component in creating an Atmos mix. Is the intention to abandon the X.1 bed configurations and move toward doing entire mixes using objects only?
The bed component is essential for keeping compatibility with other, non-Atmos systems (per Dolby's statements in their own documentation, which I studied years ago).

But Atmos itself is a scalable solution, and the bed is not scalable at all, while the objects are. That's why they were created.

Must they be used every time, in every situation? It depends on the project.

What is certain: if your goal is to keep the scalability AND to rebuild the soundstage, positioning the instruments where they should be (like you do in Ambisonics ... since Atmos is a kind of Ambisonics as well, in a manner of speaking), then Atmos has to be used. And with this I mean: objects.

They don't serve only for the classic "swoosh" in the movies, sometimes replicated in some songs. They enable you to rebuild the soundstage, giving depth and height, scalable up and down.
__________________
Reaper: always the most up-to-date.
O.S.: Windows 11 Pro
ReaPack (with bilingual Tutorials): https://bit.ly/ReaPack_Repository

Last edited by BartR; 07-03-2022 at 09:04 AM.
BartR is offline   Reply With Quote
Old 07-04-2022, 07:55 AM   #275
plush2
Human being with feelings
 
Join Date: May 2006
Location: Saskatoon, Canada
Posts: 2,110
Default

Thanks BartR, I appreciate the answer. More toys (the ceiling speakers and objects) are always a good thing when storytelling is the goal. I'm not certain it would hold together super well as an immersive array, but that's not really the goal, at least in the theatre context.
plush2 is online now   Reply With Quote
Old 07-06-2022, 05:53 AM   #276
ednolbed
Human being with feelings
 
Join Date: Nov 2021
Posts: 7
Default

Quote:
Originally Posted by BartR View Post
the missing part are MetaData that should be generated to be read from the renderer. And they must be compatible with it, or it's useless.
I hoped that the current native RSP could have this ... but still not-yet ...
I guess it's not as simple as converting the automation data into the appropriate metadata? As far as I understand, the metadata is mainly spatial position, and channel-agnostic.

I also heard that L-Acoustics provides (or is working on) a platform-agnostic standard for object-based audio. A standard, so to speak... That would be something!
ednolbed is offline   Reply With Quote
Old 07-06-2022, 07:34 AM   #277
BartR
Human being with feelings
 
BartR's Avatar
 
Join Date: Oct 2014
Location: Belgium
Posts: 1,612
Default

Quote:
Originally Posted by ednolbed View Post
I guess it's not as simple as converting the automation data into the appropriate metadata? As far as I understand, the metadata is mainly spatial position, and channel-agnostic.

I also heard that L-Acoustics provides (or is working on) a platform-agnostic standard for object-based audio. A standard, so to speak... That would be something!
I never meant it's an easy task. Simply: it's mandatory to have it.
L-Acoustics has L-ISA Studio, which could be great, but it's only for Apple.
__________________
Reaper: always the most up-to-date.
O.S.: Windows 11 Pro
ReaPack (with bilingual Tutorials): https://bit.ly/ReaPack_Repository
BartR is offline   Reply With Quote
Old 07-28-2022, 03:34 AM   #278
MP SRIKAR
Human being with feelings
 
MP SRIKAR's Avatar
 
Join Date: Aug 2020
Location: India
Posts: 2
Default How much longer do we have to wait to finally have Dolby Atmos in Reaper?

How much longer do we have to wait to finally have Dolby Atmos in Reaper?

I have to use Dante Via as a virtual audio cable and pipe the hardware output of Reaper as 9.1.6 into DaVinci Resolve Studio 18 for Dolby Atmos monitoring and transcoding. This is a static workflow; the dynamic pans from RSP would work better as native Dolby Atmos objects, and being restricted to 16 Dante channels is not great. Unfortunately, I don't have a MacBook Pro to connect DAPS with Reaper. I see other DAWs are quickly getting Dolby Atmos, including the binaural monitoring modes (off, near, mid & far), but of course I love Reaper and want to stick with it. So please give me some good news for a Windows 10 PC user.

Last edited by MP SRIKAR; 07-28-2022 at 04:00 AM.
MP SRIKAR is offline   Reply With Quote
Old 08-09-2022, 11:24 AM   #279
CreativeNorthMedia
Human being with feelings
 
Join Date: Jul 2022
Posts: 14
Default Came here to voice support and confusion

I'm confused: does the "native integration" just mean natively generating LTC? That can be generated with a plugin for DAWs that don't natively integrate; is that what "bridged mode" means?

https://learning.dolby.com/hc/en-us/...Atmos-Renderer
CreativeNorthMedia is offline   Reply With Quote
Old 08-10-2022, 10:26 PM   #280
TobyAM
Human being with feelings
 
Join Date: Feb 2017
Location: Hollywood, CA
Posts: 125
Default

Quote:
Originally Posted by MP SRIKAR View Post
How much longer do we have to wait to finally have Dolby Atmos in Reaper?
That's a neat trick! I'm in the same boat. Every day I like Reaper more, which makes it increasingly sad that I can't mix Atmos objects with it.
TobyAM is offline   Reply With Quote