01-24-2020, 08:51 AM   #1
chip mcdonald
Human being with feelings
 
Join Date: May 2006
Location: NA - North Augusta South Carolina
Posts: 4,294
Reaper and the A.I. Paradigm Shift

I think real A.I. is eventually going to blindside the audio software industry.

It can already do things I know some people don't believe are possible. The annoying thing is that I kind of follow some of the development with GANs and other aspects of this hyper-fast-evolving field, and I know there are different groups scattered across the globe doing things with audio that could immediately change the world of music. They just don't know enough about what we do to realize how relatively easy it would be to make an implementation or demo, and how useful it would be.


For the edification of the peanut gallery, let me spell out what I'm *not* talking about:

- I'm not talking about convolution impulse responses.
- I'm not talking about matching an e.q. curve via IIR/Fourier analysis.
- I'm not talking about pitch shifting.
- I'm not talking about iterative, algorithm-based MIDI-note "composition".


My dilettante's awareness of programming makes me think any of these groups could use Reaper right now as a development platform because of its scripting capability, but having spoken to someone who is something of a repository for the field, again: they're simply not aware of what we do.
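To be clear about what "development platform" means here (this is just a toy, and the plugin name is made up, standing in for the hypothetical A.I. effect): a ReaScript written in Python can already walk a project and drop a named FX on every track, which is most of the plumbing a researcher would need to prototype against.

[CODE]
# Toy ReaScript (Python), run from inside REAPER via Actions -> ReaScript.
# REAPER's embedded interpreter provides the RPR_* functions directly.
# "AI Style Morpher" is a made-up plugin name used purely for illustration;
# substitute any FX you actually have installed.

PLUGIN_NAME = "AI Style Morpher"   # hypothetical A.I. style-transfer FX

for i in range(RPR_GetNumTracks()):
    track = RPR_GetTrack(0, i)     # 0 = current project
    # instantiate=1: add the FX to the chain only if it isn't already there
    fx_slot = RPR_TrackFX_AddByName(track, PLUGIN_NAME, False, 1)
    RPR_ShowConsoleMsg("track %d -> fx slot %d\n" % (i, fx_slot))
[/CODE]

The point isn't this particular script; it's that the scripting surface to prototype against already exists.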

I think at some point within 5 years we'll have (and I mean literally..):

1) a plugin that will alter the *analog* musical input to stylistically match anything. Play a bassline, and you get an output that has Paul McCartney, James Jamerson or Geddy Lee fills. Lead guitar with SRV vibrato, Van Halen legato. Vocals - Sinatra, Chris Cornell phrasing, Freddie Mercury or Jeff Buckley vibrato, etc.

You have your project laid out, and on each instrument you put an A.I. style-morpher on it: what comes out is a convincing polyglot of your choices.


2) a tonal/spectral plugin that transforms any input into any output.

Any analog vocal can be made to sound timbrally like Robert Plant, k.d. lang, or Aretha Franklin. Any guitar can be made to sound exactly like any recorded example. Any mix can replicate any Famous Engineer's work from Any Famous Studio.

Not from simple match e.q. or convolution.

The above will make the existing plugin industry wither. You'll have people making ersatz recreations of Famous Music that are effectively identical, creating facsimiles out of gibberish. For the rest of us who are actually musicians it will be an amazingly liberating time, while simultaneously possibly destroying "the music business" as we know it.

As specialized processing gets this closer to real time, you'll have practice amps that will take any kind of nonsense input and make it sound like a reasonable clone of any sound AND style. It will fix not only intonation but also timing, dynamics and phrasing on the fly. A "band" will be able to perform onstage and sound fantastically unlike anything the humans on stage are actually "performing".

It will change everything, and it will be over within a few years, leaving "music" in the same state we find painting to be in now.




I think we'll see the first example of this come from someone tinkering with a DAW on Linux, but maybe in Reaper.

A scripted plugin that lets you choose a directory full of example .wav files as the target output, trains itself on them, and then yields an A.I. transform preset, something like the rough sketch below. I think this could be done today by a number of A.I. researchers within Reaper relatively easily.
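For concreteness, a very rough sketch of the offline "train on a folder of .wavs" half, outside the DAW entirely. Everything in it is an assumption on my part: the folder name, the use of PyTorch/torchaudio, and the toy autoencoder over log-magnitude spectrogram frames standing in for whatever model a real researcher would actually use. All it does is learn to reconstruct the target material's frames and save the weights as a "preset"; the hard part, using that to transform *new* audio toward the target, is exactly the research those groups are doing.

[CODE]
# Hypothetical offline trainer: point it at a folder of example .wav files
# and it writes out a "transform preset" (here, just saved model weights).
# Assumes PyTorch and torchaudio are installed; the tiny autoencoder is a
# stand-in for a real style-transfer model.
import glob
import torch
import torchaudio

N_FFT, HOP = 1024, 256
FRAME_BINS = N_FFT // 2 + 1

def spectrogram_frames(path):
    wav, sr = torchaudio.load(path)              # (channels, samples)
    mono = wav.mean(dim=0)                       # collapse to mono
    spec = torch.stft(mono, n_fft=N_FFT, hop_length=HOP,
                      window=torch.hann_window(N_FFT), return_complex=True)
    return torch.log1p(spec.abs().T)             # (frames, bins)

# Gather training frames from every .wav in the chosen directory
# ("target_examples" is a made-up folder name).
frames = torch.cat([spectrogram_frames(p)
                    for p in glob.glob("target_examples/*.wav")])

model = torch.nn.Sequential(                     # toy "style" model
    torch.nn.Linear(FRAME_BINS, 128), torch.nn.ReLU(),
    torch.nn.Linear(128, FRAME_BINS))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(20):
    perm = torch.randperm(frames.shape[0])
    for start in range(0, frames.shape[0], 64):
        batch = frames[perm[start:start + 64]]
        loss = torch.nn.functional.mse_loss(model(batch), batch)
        opt.zero_grad()
        loss.backward()
        opt.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}")

torch.save(model.state_dict(), "style_transform_preset.pt")
[/CODE]

The other half, loading that preset inside a plugin or script and using it to push incoming audio toward the target, is where the interesting work is.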


$.10
__________________
]]] guitar lessons - www.chipmcdonald.com [[[
WEAR A FRAKKING MASK!!!!