Old 11-17-2018, 02:27 AM   #1
mschnell
Join Date: Jun 2013
Location: Krefeld, Germany
Posts: 8,062
Reaper for Live, on stage, and embedded use.


Due to its stability, efficiency and versatility, Reaper is exceptionally well suited as the basis for an "embedded" application. This means that while the system is at work, the Reaper GUI is of no concern (and may not be visible at all); instead, other control elements, provided by some hardware or shown on a computer screen, govern what the Reaper project does.

Several classes of such projects can be done in that way, e.g.:
  • a "Live Instrument" setup, taking signals from instruments played by a musician and using plugins to generate or modify the sound; usually no pre-recorded songs are involved.
  • "DJ" or "Live Looping" performances, using pre-recorded songs and/or material that is recorded and played back on the fly.
  • "Live Mixing" with multi-channel input and output, using plugins for sound modification. Usually a control surface device is added for decent usability of the system.
  • "deeply embedded" applications, in which Reaper is used as a slave application controlled by external software that provides the visible functionality while relying on Reaper for audio processing in the background.
Of course, combinations of these classes may be required in certain situations.

With "Live Instrument" setups, a number of common applications exist. Each of those requires a dedicated concept and might be done in different ways, using different add-on tools to be instantiated within Reaper.

"DJ" or "Live Looping" covers a wide range of applications, each requiring an individually tuned workflow. To allow for "background tracks", this sometimes might be combined with an instrument setup.

With "Live Mixing", Reaper replaces the audio processing of a digital mixer. This applies to stage performances as well as to studio situations, reusing the hardware and software already available for media production. Here it mostly makes sense to install remote A/D-D/A converters connected via digital links such as AES50 or Dante. An advantage over using a hardware mixer is that a huge number of audio plugins are available for sound processing, and new ones can even be created as Reaper JSFX plugins.

Regarding "deeply embedded" use, a huge range of applications is conceivable. Some examples known to work:
  • a "Theater Software" system controlling light, sound and other effects, combined with a Reaper instance responsible for the audio part, such as mixing microphones and firing sound clips.
  • an automatic "Song Contest" system that records performances on a stage, started and stopped by the push of a button, and that, when stopped, immediately and automatically publishes a rough mix of the performance on a web page.
  • analyzing audio streams captured in real time by dedicated sound-processing JSFXes that provide non-audio output via MIDI (or repurpose an audio channel for control signals), with this data then triggering software that drives some "machinery".


As most of the applications mentioned here need decently low-latency performance, they are more demanding than typical "DAW" use of Reaper regarding the required hardware and software setup. Hence some comments on this issue.

Latency (the delay between some input to the system and the resulting output) is not automatically introduced by insufficient computing power; it is set by the configuration of the project. If you set the latency too low, the result is audio dropouts and crackles. To prevent this, you need to increase the processing latency (usually defined by the count and size of the audio blocks the driver introduces), reduce the CPU demand by engaging fewer or more efficient plugins, or use a more powerful computer.
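To illustrate how the driver's block settings translate into latency, here is a small back-of-the-envelope sketch. The function name and the example values (2 blocks of 256 samples at 48 kHz, a common ASIO-style configuration) are illustrative assumptions, not Reaper settings:

```python
# Hypothetical illustration: nominal latency contributed by the audio
# driver's ring of blocks. Names and values are examples only.

def buffer_latency_ms(block_size: int, num_blocks: int, sample_rate: int) -> float:
    """Latency (in milliseconds) from num_blocks buffers of block_size samples."""
    return 1000.0 * block_size * num_blocks / sample_rate

# Example: 2 blocks of 256 samples at 48 kHz.
one_way = buffer_latency_ms(block_size=256, num_blocks=2, sample_rate=48_000)
print(f"{one_way:.2f} ms")  # about 10.67 ms
```

Halving the block size to 128 samples halves this contribution, at the cost of more frequent processing deadlines and thus a higher risk of dropouts.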

Regarding "Live Instrument" or "Live Mixing" projects, the latency needs to be low enough to be automatically compensated by the brain. A good analogy is the time a sound wave needs to travel from the loudspeaker to the ear (at roughly 343 m/s). This acoustic delay always adds to the latency introduced by the computer system.
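The analogy can be made concrete by converting a processing latency into the loudspeaker distance that would cause the same acoustic delay. The 343 m/s figure assumes room temperature; the speed of sound varies with temperature:

```python
# Convert a processing latency into the equivalent loudspeaker distance.
# 343 m/s is an assumed room-temperature speed of sound.

SPEED_OF_SOUND_M_PER_S = 343.0

def equivalent_distance_m(latency_ms: float) -> float:
    """Distance whose acoustic travel time equals the given latency."""
    return SPEED_OF_SOUND_M_PER_S * latency_ms / 1000.0

print(f"{equivalent_distance_m(5.0):.1f} m")  # 5 ms corresponds to about 1.7 m
```

In other words, a 5 ms processing latency is comparable to standing less than two meters further away from the speaker, which most players compensate without noticing.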

Besides the latency that needs to be deliberately configured so the system runs in a perfectly stable way, some minimum latency is introduced by the audio A/D-D/A hardware and its drivers. To build a live system, audio equipment should be used that is specified for such a purpose and features an appropriately low inherent latency. Unfortunately, many manufacturers don't bother to publish such a specification, so it might be useful to do a "round-trip-delay" measurement before any purchase decision.
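The offline part of such a round-trip-delay measurement can be sketched as follows: play a known click through the output, record it back via a loopback cable, and locate the click in the recording. The actual audio I/O is assumed to happen elsewhere (e.g. via the audio interface); here the recording is simulated, and all names are hypothetical:

```python
# Sketch: locate a known test click in a loopback recording by brute-force
# cross-correlation. The "recording" below is simulated; a real measurement
# would capture it through the audio interface with a physical loopback cable.

def find_delay_samples(reference: list[float], recording: list[float]) -> int:
    """Offset (in samples) at which reference best matches recording."""
    best_offset, best_score = 0, float("-inf")
    for offset in range(len(recording) - len(reference) + 1):
        window = recording[offset:offset + len(reference)]
        score = sum(r * s for r, s in zip(reference, window))
        if score > best_score:
            best_offset, best_score = offset, score
    return best_offset

SAMPLE_RATE = 48_000
click = [1.0, 0.5, -0.5, -0.25]              # short reference test signal
delay = 480                                   # simulate a 10 ms round trip
recorded = [0.0] * delay + click + [0.0] * 100

samples = find_delay_samples(click, recorded)
print(f"{1000.0 * samples / SAMPLE_RATE:.1f} ms")  # 10.0 ms
```

A real measurement would include the interface's inherent A/D-D/A delay on top of the configured block latency, which is exactly the number the manufacturers often leave unspecified.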

Usually the average CPU workload is not much of a problem; crackles and dropouts result from peak demand that occurs only now and then. Not only the demand of the audio system itself needs to be considered: other tasks the computer might be busy with eat CPU cycles as well. That is why, when building a live system, any available "realtime tweaks" for the OS should be applied, e.g. preventing the start of unnecessary services that the OS enables by default.

Happily, the CPU demand of Reaper itself is known to be especially low, making Reaper a good choice for live applications. Reaper assigns an OS thread to each track, so multi-core CPUs are exploited by projects that feature multiple tracks.

Reaper records the individual "PDC" latency of each instantiated plugin and uses these values to determine the resulting latency of the complete project (detecting the longest path and compensating the others). Here you might want to prefer plugins with lower latency over others with similar functionality. With some plugins, the PDC can be selected via parameters; usually, lower PDC for the same functionality comes at the cost of higher CPU demand. A good example is ReaVerb: a smaller FFT window setting gives lower PDC at higher CPU demand, while activating "ZL" (zero latency) gives PDC = 0 at the highest CPU demand. Hence, for live usage of ReaVerb, "ZL" should be activated for zero PDC, and "LL" should be activated as well, since it allows the plugin to use multiple threads and thereby distribute the load over multiple CPU cores.
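The "longest path" compensation described above can be sketched conceptually. This is not Reaper's actual code, only an illustration of the idea: the project latency is the latency of the heaviest chain, and every other track gets a make-up delay so all paths line up. Track names and PDC values are invented for the example:

```python
# Conceptual sketch of plugin-delay compensation (PDC), not Reaper internals:
# project latency = longest chain; shorter chains get a make-up delay.

def compensate(chains: dict[str, list[int]]) -> tuple[int, dict[str, int]]:
    """chains maps a track name to the PDC (in samples) of each plugin on it.
    Returns the overall project latency and the make-up delay per track."""
    totals = {track: sum(pdcs) for track, pdcs in chains.items()}
    project_latency = max(totals.values())
    makeup = {track: project_latency - total for track, total in totals.items()}
    return project_latency, makeup

latency, makeup = compensate({
    "vocals": [64, 0],       # e.g. a compressor reporting 64 samples of PDC
    "guitar": [256, 128],    # heavier processing, the longest path
    "keys":   [0],           # zero-latency chain
})
print(latency, makeup)  # 384 {'vocals': 320, 'guitar': 0, 'keys': 384}
```

This also shows why swapping one high-PDC plugin on the longest path for a lower-latency alternative reduces the latency of the whole project, while optimizing plugins on already-compensated tracks gains nothing.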

As WiFi is technically not well suited for realtime or streaming purposes, and the stability of a WiFi link can be affected by other WiFi devices in the surroundings, it is not recommended to use WiFi for streaming audio or MIDI in an "on stage" setup.

Mac, PC or Linux? Regarding the software infrastructure of the live setup, it does not seem to make much difference whether it is based on OSX or on Windows, as long as it features enough CPU, RAM and disk resources. A decent-quality PC system might be slightly less expensive, while a Mac might feel a bit safer. The best price/performance ratio and the best safety can be achieved with a Linux box, but setting up such a system might be much more demanding (see the "Linux" subforum for details).

.. continued in next post ... Please answer in a new thread, as I might need to add more pages to this thread.

Last edited by mschnell; 04-28-2019 at 01:09 AM.