Old 01-24-2020, 08:51 AM   #1
chip mcdonald
Human being with feelings
 
 
Join Date: May 2006
Location: NA - North Augusta South Carolina
Posts: 3,928
Reaper and the A.I. Paradigm Shift

I think real A.I. is eventually going to blindside the audio software industry.

It can already do things I know some people don't believe are possible. The annoying thing is that I loosely follow some of the development with GANs and other corners of this hyper-fast-evolving field, and I know there are different groups scattered across the globe doing things with audio that could immediately change the world of music. They just don't know enough about what we do to realize how relatively easy it would be to build an implementation or demo, and how useful it would be.


I'd like to write for the edification of the peanut gallery:

- I'm not talking about convolution impulse responses.
- I'm not talking about matching an e.q. curve via IIR/Fourier analysis.
- I'm not talking about pitch shifting.
- I'm not talking about iterative-algorithm-based MIDI-note "composition".


My dilettante's awareness of programming makes me think any of these groups could use Reaper right now as a development platform because of its scripting capability, but having again spoken to someone who is something of a repository for the field, they're not aware of our field.

I think at some point within 5 years we'll have (and I mean literally..):

1) a plugin that will alter the *analog* musical input to stylistically match anything. Play a bassline, and you get an output that has Paul McCartney, James Jamerson or Geddy Lee fills. Lead guitar with SRV vibrato, Van Halen legato. Vocals - Sinatra or Chris Cornell phrasing, Freddie Mercury or Jeff Buckley vibrato, etc.

You have your project laid out, and on each instrument you put an A.I. style-morpher: what comes out is a convincing pastiche of your choices.


2) Tonal/spectral plugin that transforms any input to any output.

Any analog vocal can be made to sound timbrally like Robert Plant, k.d. lang or Aretha Franklin. Any guitar can be made to sound exactly like any recorded example. Any mix can replicate any Famous Engineer's work from Any Famous Studio.

Not from simple match e.q. or convolution.

The above will make the existing plugin industry wither. You'll have people making ersatz recreations of Famous Music that are effectively identical, creating facsimiles out of gibberish. For the rest of us who are actually musicians it will be an amazingly liberating time, while simultaneously possibly destroying "the music business" as we know it.

As specialized processing gets closer to real time, you'll have practice amps that will take any kind of nonsense input and make it sound like a reasonable clone of any sound AND style. They will fix not only intonation but timing, dynamics and phrasing on the fly. A "band" can perform onstage and sound fantastic - and nothing like what the humans on stage are "performing".

It will change everything, and be over in a few years, leaving "music" in the state we now find painting pictures to be in.




I think we'll see the first example of this from someone tinkering with a DAW on Linux - but maybe in Reaper.

A scripted plugin that lets you choose a directory full of example .wav files for an output, trains itself on them, and then yields an .ai transform preset. I think a number of A.I. researchers could do this today within Reaper relatively easily.
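To make the train-on-a-directory workflow concrete, here's a crude sketch in plain Python (one of the languages ReaScript supports). To be clear, this is a dilettante's toy, not the neural net I'm describing: it just averages the magnitude spectra of a folder of example .wav files into one spectral "fingerprint" and saves that as a preset. The mono/16-bit assumption, the function names and the .npy preset format are all made up for illustration.

```python
import glob
import os
import wave

import numpy as np


def average_spectrum(wav_dir, n_fft=2048):
    """Average magnitude spectrum over every mono 16-bit .wav in wav_dir.

    Toy 'training': no neural net, just a mean windowed FFT fingerprint.
    """
    specs = []
    for path in glob.glob(os.path.join(wav_dir, "*.wav")):
        with wave.open(path, "rb") as w:
            frames = w.readframes(w.getnframes())
        samples = np.frombuffer(frames, dtype=np.int16).astype(np.float64)
        # Chop into n_fft-sized windows and average their spectra.
        n_win = len(samples) // n_fft
        if n_win == 0:
            continue  # file shorter than one window
        windows = samples[: n_win * n_fft].reshape(n_win, n_fft)
        mags = np.abs(np.fft.rfft(windows * np.hanning(n_fft), axis=1))
        specs.append(mags.mean(axis=0))
    if not specs:
        raise ValueError("no usable .wav files in " + wav_dir)
    return np.mean(specs, axis=0)


def train_preset(wav_dir, preset_path):
    """Save the learned fingerprint as a (hypothetical) preset file."""
    np.save(preset_path, average_spectrum(wav_dir))

# usage: train_preset("examples/", "style_preset.npy")
```

An actual A.I. version would replace average_spectrum() with a trained generative model, but the point-it-at-a-directory workflow would look much the same from inside the DAW.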


$.10
__________________
]]]>-guitar lessons - www.chipmcdonald.com-<[[[
Experiencing Guitar: Essays from Teaching by Chip McDonald https://www.amazon.com/dp/1521877823..._QZJxAbA4GVDC1
Old 01-24-2020, 12:09 PM   #2
poetnprophet
Human being with feelings
 
 
Join Date: Jan 2018
Posts: 1,247

That may all be inevitable, but the real question is: what are you going to do about it?

If everyone's fake stuff becomes real, are you going to jump on board?

Or stand up for the real musicians?

I think we can learn from the rise of deepfake videos right now - look at what Facebook is doing to counter them, for example. It may be the start of a new "norm", but that doesn't make it acceptable.
__________________
https://www.kdubbproductions.com/
https://www.youtube.com/channel/UCpC...2dGA3qUWBKrXQQ
i7 8700k,4.9Ghz,Win10,Reaper 5,Motu 828es,MJE Hulk 990,GAP Pre73/EQ81
Old 01-24-2020, 01:12 PM   #3
Jorgen
Human being with feelings
 
 
Join Date: Feb 2009
Location: Stockholm, Sweden
Posts: 5,298

You may very well be right on all points. The creative side of it all would most probably come from somebody who uses the megafab tech in a wrong but innovative way. Even if music, and perhaps lyrics, more or less create themselves, creativity will find a way.
Old 01-24-2020, 01:15 PM   #4
zeekat
Human being with feelings
 
 
Join Date: May 2009
Location: Polandia
Posts: 2,546

Quote:
Originally Posted by chip mcdonald
As specialized processing gets closer to real time, you'll have practice amps that will take any kind of nonsense input and make it sound like a reasonable clone of any sound AND style. They will fix not only intonation but timing, dynamics and phrasing on the fly. A "band" can perform onstage and sound fantastic - and nothing like what the humans on stage are "performing".
Seems like good ol' lip-synching/miming does exactly that without any additional R&D budget. Perfectly acceptable to a large part of the public, too. Same with "studio" music making: you don't need any AI neural-network spacebrains, just buy legally available libraries of loops, drum patterns or chord progressions to fake proficiency and you're good. That ship has already sailed, I guess. The niche of weirdos peeking at what musicians' fingers are doing is small but energetic though - witness the neo-funk and tech-metal scenes of today.
__________________
AM bient and rund funk
my soundclouds+youtubings
Old 01-24-2020, 06:36 PM   #5
DarrenH
Human being with feelings
 
Join Date: Mar 2014
Posts: 203

Like every new tech, there will be a place for AI-generated music. But, just like now, there's a lot of room for everything. We all tend to seek out the music we're interested in, and there's no shortage of great new music.

Last edited by DarrenH; 01-26-2020 at 09:39 AM.
Old 01-25-2020, 03:34 AM   #6
adXok
Human being with feelings
 
 
Join Date: Jul 2006
Posts: 1,358

chip mcdonald,
I think you are over-hyping the things AI could do.
Yes, it will definitely be used in statistics, finance, law and politics... those professions are gone (for good)! Even coding will be obliterated to a great extent.

For creative professions - not so much, if anything at all.
I would love to "sing" something into the mic and hear through the speakers an "AI Freddie Mercury" doing some crazy live performance as the result.
Will it happen?! I highly doubt it!

Just look at games. AI is total BS in those. Another big subject.
Of course, with the upcoming 5G we won't see great improvement with AI in games (they will be mostly MMO), and I would like to be proven wrong! This is a different subject, though.

AI, as with most technologies in computing, will be used mainly by government warmongers/politicians and agencies like the CIA, NSA, KGB, MOSSAD, Scotland Yard, MI6, something Chinese, something Russian.

You know what - they already are using it!

And to be honest, Reaper and most other DAWs, already have more features than most of us will ever use, even without AI.
__________________
♦ YouTube → .: Pashkuli Keyboard :.
♦ Gmail → pashkuli.keyboard@gmail.com

Last edited by adXok; 01-25-2020 at 04:23 AM.
Old 01-25-2020, 04:19 AM   #7
sai'ke
Human being with feelings
 
 
Join Date: Aug 2009
Location: NL
Posts: 788

I think you underestimate how cherry-picked the samples that people typically show are.

While some of the advances are indeed exciting, as far as I know we're quite far from having actual controllable performances. As with everything, getting a few impressive cherry-picked samples is "easy" (actually extremely hard), but getting fine control of the process is even more difficult.

I do find some of the WaveNet stuff pretty exciting, but if you've ever played with the code they've released, you start to realize that it's not nearly as "ready" as they make it seem, and that you need massive amounts of data to train these networks. Google is in a bit of a special position here with the massive libraries (*cough* YouTube) they have access to. That's the problem with most generative NN stuff: you need massive amounts of (often curated) data to train these things.
__________________
[Tracker Plugin: Thread|Github|Reapack] | [Routing Plugin: Thread|Reapack] | [Filther: Thread|Github|Reapack] | [More JSFX: Thread|Reapack]
Old 01-25-2020, 04:29 AM   #8
Mordi
Human being with feelings
 
 
Join Date: May 2014
Location: Norway
Posts: 751

Doesn't AI training need thousands of inputs to create something realistic? There is a limit to how many tracks a band has produced, so the data might be too scarce to produce something convincing.

I think AI-based stuff is pretty rad, and I hope to see someone do something groundbreaking with audio.
Old 01-25-2020, 05:06 AM   #9
adXok
Human being with feelings
 
 
Join Date: Jul 2006
Posts: 1,358

Quote:
Originally Posted by Mordi
Doesn't AI training need thousands of inputs to be able to create something realistic?
Indeed, and that is why 5G comes into the picture.
Not sure about the storage, though. Maybe there is a solution in data-storage plants full of NVMe drives all over.

There is no bad tech. People are bad (having bad intentions), sometimes without even realising it.
__________________
♦ YouTube → .: Pashkuli Keyboard :.
♦ Gmail → pashkuli.keyboard@gmail.com
Old 01-25-2020, 07:42 AM   #10
martifingers
Human being with feelings
 
Join Date: May 2011
Posts: 1,892

Another point is that AI depends on discerning patterns. Now this can work surprisingly well (listen to this: https://futurism.com/a-new-ai-can-wr...human-composer or possibly this https://www.youtube.com/watch?reload=9&v=1k_3sIP8XUU ). But real creativity lies in breaking conventions, or at least bending and combining them, as you can see from Beethoven to the Beatles. I don't think we are anywhere near being able to program (is that even the right word?) the algorithm necessary to bring about revolutions in that sense.

AI is also pretty hopeless at sustaining creativity - try reading an AI novel, for instance. Poems, maybe, but the surrealists were producing equivalent cut-up verses in the 1920s, so no advance there.

As to emotionality, again I am really sceptical - OK, they may analyse an Aretha Franklin recording and track some dynamics etc., but applied across a whole album I would guess people would feel emotionally cheated. Unless, of course, the whole thing was edited and curated by a human being to find the most soulful takes.

I could be totally wrong - after all, I was hugely sceptical that they could ever get a reliable autofocus system for photography! (I am dreadfully old, BTW.)
Old 01-26-2020, 03:20 PM   #11
fred garvin
Human being with feelings
 
Join Date: May 2018
Posts: 518

Quote:
Originally Posted by chip mcdonald
I think real A.I. is eventually going to blindside the audio software industry. ...

...1) a plugin that will alter the *analog* musical input to stylistically match anything. Play a bassline, and you get an output that has Paul McCartney, James Jamerson or Geddy Lee fills. Lead guitar with SRV vibrato, Van Halen legato. Vocals - Sinatra or Chris Cornell phrasing, Freddie Mercury or Jeff Buckley vibrato, etc.

You have your project laid out, and on each instrument you put an A.I. style-morpher: what comes out is a convincing pastiche of your choices.
...

2) Tonal/spectral plugin that transforms any input to any output.

Any analog vocal can be made to sound timbrally like Robert Plant, k.d. lang or Aretha Franklin. Any guitar can be made to sound exactly like any recorded example. Any mix can replicate any Famous Engineer's work from Any Famous Studio.
...

Already here, innit? Not exactly as you propose, but better/worse/easier already. BIAB can already do everything but write lyrics and sing for you. There are a host of VSTs for chord-progression/melody creation. Melodyne etc. will change various aspects of your vocal track, including the characteristics we associate with male/female voices, or sing for you, if rather unconvincingly. Amp and other equipment sims are everywhere, sounding real good, in your choice of software or hardware. Toontrack etc. for drop-in mix and instrument solutions. And if that's all too much, just go buy a sample pack in your chosen genre and hey, you're a "music producer". And yeah, next year it'll all be more and better.

Most popular acts of the last decade? I'd guess Taylor Swift and Ed Sheeran: a guy and a girl with a guitar, playing songs they wrote themselves that have emotional content people relate to. No button to push for that, and I don't think there will be soon, if ever.

I think your ideas are actually exciting... for musicians. No one else knows the difference between, for instance, Lee and Jamerson, or cares. But yeah, good stuff, food for thought. I look forward to being able to make my vox sound like Cornell Mercury.
Old 01-27-2020, 05:52 AM   #12
martifingers
Human being with feelings
 
Join Date: May 2011
Posts: 1,892

Good point about Taylor Swift / Ed Sheeran, and also about the fact that good stuff gets lost. It truly does, even (if my experience is typical) in genres where you think you are "current".

Here's another reference that just appeared today:
https://www.theguardian.com/commenti...on-write-books
Old 01-27-2020, 06:30 AM   #13
synkrotron
Human being with feelings
 
 
Join Date: May 2015
Location: Warrington, UK
Posts: 1,122

I could do with an A.I. track and album naming tool...
__________________
Bandcamp // YouTube // SoundCloud
Old 02-13-2020, 03:01 AM   #14
dimentorium
Human being with feelings
 
Join Date: Jan 2020
Posts: 21

I would not be too scared by A.I. in music, or look at it too negatively. Of course there will be misuse, and probably generated music made purely for money. In the end, most popular musicians have their own patterns, and if an algorithm can resynthesize them, then it will be done in order to make money.

But in the end there will also be enough people who want to listen to music composed by musicians who actually write something.

Overall, from a developer's point of view, I am starting to use Reaper in fun machine-learning projects, but my approach is more that of a musician who is bothered by some of the tasks I have to do.

Here is a link to a prototype of supportive AI/ML usage in Reaper:
https://forum.cockos.com/showthread....13#post2244413

I am working on automatically classifying synthesizer presets, so the user can find promising sounds a bit faster - focusing on the creative rather than the boring part. That is something an algorithm can do quite well, and it saves me time when looking for new sounds.
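As a toy illustration of the kind of classification I mean - not my actual implementation, and the centroid threshold is an assumption invented for the example - this tags a short rendered preset clip as "bright" or "dark" from its spectral centroid:

```python
import numpy as np


def spectral_centroid(samples, sample_rate):
    """Frequency 'center of mass' of a mono float signal."""
    mags = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    return float(np.sum(freqs * mags) / np.sum(mags))


def classify_preset(samples, sample_rate, threshold_hz=1500.0):
    """Crude one-feature tag; threshold_hz is a made-up illustrative value."""
    if spectral_centroid(samples, sample_rate) > threshold_hz:
        return "bright"
    return "dark"
```

A real version would use many more features and learned labels rather than one hand-picked threshold, but even a crude tag like this narrows down a large preset library.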

Last edited by dimentorium; 02-13-2020 at 03:26 AM.