Ten billion profiles/captures = one amp

Re: the static capture/profile vs a modeler vs. a dynamic profile

  • Yes, that simply makes the capture as good as a top quality amp sim.
    Votes: 11 (78.6%)
  • No, capturing is superior to even the best amp modeling to date so that would be a Game Changer!
    Votes: 3 (21.4%)

  Total voters: 14
I can actually kind of get on board with that distinction. It falls in line with my understanding of the differences between the terms at least.
And where is the borderline for you? Decision vs. approximation, or insufficient vs. incomplete?
 
Response by NAM designer?

So what pedal is this Parametric OD supposed to be capturing? What pedal has Drive, Tone, and Volume knobs with Tight and Clipping toggle switches?

And the blog says he's had the knob-capturing capability for over a year. Why hasn't he shared it in all this time, and why is there no open-source code? Does he want to commercialize this aspect of development and license it to amp modeling companies or something?
 
So what pedal is this Parametric OD supposed to be capturing? What pedal has Drive, Tone, and Volume knobs with Tight and Clipping toggle switches?

And the blog says he's had the knob-capturing capability for over a year. Why hasn't he shared it in all this time, and why is there no open-source code? Does he want to commercialize this aspect of development and license it to amp modeling companies or something?

Well ... that escalated pretty f%cking quickly ..... ;) :)

I must admit that when I read Steve's blog .... (a) it explained the problem (b) he was using or going to use the linked solutions methodology to overcome those problems and (c) he has the process down to an hour (?)

To my limited brain the three just don't gel ... to me, however it is described or "marketed", it can only mean some seriously major interpolation, as referenced in the wiki page he attached ... and he himself and the wiki page are very clear on the problems with this approach.

What I didn't post was that for him to claim he has had this process built in for over a year ... and never "promoted" it until now ... and it's somehow an "open source" project?!?!? .... that struck me as, at best, disingenuous and as having some sort of ulterior motive ... but I'm a cynical bastard ... maybe a hardware product is about to launch?

And seriously ..... what was the point of the video he attached in his blog ..... it showed / proved nothing ..... a classic Looney Tunes episode would have been much more informative ;)

And putting the "guessing" issue aside - which is my preferred word - the solutions wiki he provided referred to the importance of the quality of the statistical modelling and criteria setting etc...... these two thing however don't exist unless you have good base raw data to build them.

Statistical modelling is, by its own definition, a [more involved] "guess".

ML / AI is [probably] awesome ...... but it can't know something it doesn't know ... it is not sentient or conscious or innately-learningly-observant from a totally blank and empty starting point ...... it *must* have [very good] data, information, rules, criteria etc. ..... otherwise you hit go and get a 404 message ... or worse for us .... a gain and EQ stack that sounds and responds nothing like the real Amp it is "guessing" <- see what I did there :)
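To put my "guessing" into something concrete ... here's a toy sketch in Python [entirely my own illustration - nothing to do with Steve's actual code or the NAM internals] of what naively interpolating between two captured Drive settings does to even the simplest non-linear gain stage:

```python
import numpy as np

# Toy "amp": a single tanh waveshaper, where "drive" is the knob.
# Purely illustrative - a real amp is vastly more complex than this.
def gain_stage(x, drive):
    return np.tanh(drive * x)

x = np.linspace(-1.0, 1.0, 1000)       # test signal

low   = gain_stage(x, drive=2.0)       # "capture" at a low Drive setting
high  = gain_stage(x, drive=10.0)      # "capture" at a high Drive setting

guess = 0.5 * (low + high)             # naive interpolation between the two captures
truth = gain_stage(x, drive=6.0)       # what the "amp" actually does halfway

print("RMS error of the halfway guess:", np.sqrt(np.mean((guess - truth) ** 2)))
```

The halfway "guess" is nowhere near the real halfway setting ... and that's one knob on a single tanh stage, not a full Gain and EQ Stack with everything interacting.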

On a more serious but directly related note ... why is AI analysis of CT and MRI scans so good, and better than most human analysis? ..... because it is told exactly and precisely the tissue densities, structures, minuscule aberrations and micro-tissue anomalies etc. .... to look for ... things even most of the best trained humans can't reliably see / find .... it uses all this data to make the "best guess" - and a bloody good one at that ... but even then it's far from perfect.

No embedded ML / AI data / rules / parameters / criteria etc.... = no guessing / zip / nada / nothing.

And like all good scientists ... and the knobs [no pun intended] like the Jordan Petersons of the world ..... they will never use an "uneducated" simple word or term like "guessing" when instead they can say "a highly calibrated iterative estimation based on known data path interactions", i.e. higher-level, more complex guessing.

Me ..... I like "guessing".

Oh ... and my obligatory comment .... why bother using ML/AI with limited real data to do this when you can model the real Amp's Gain and EQ Stack to an extremely high level of precision and actual reality, and merge that with the Profile being made .... you know .... like ...... oh well .... I'll leave that for another post.
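Very roughly though - and this is purely my own back-of-the-envelope sketch, not how NAM, ToneX or anybody else actually does it, and all the names below are made up - I mean something in the spirit of:

```python
import numpy as np
from scipy.signal import butter, lfilter

def captured_nonlinearity(x):
    # Stand-in for the static Capture / Profile of the amp's gain stage.
    return np.tanh(5.0 * x)

def modelled_tone_control(x, treble_knob, fs=48000):
    # Stand-in for a circuit-derived EQ: here just a variable low-pass whose
    # cutoff comes from the knob position (0..1), not from training data.
    # A real version would solve the actual tone-stack network from component values.
    cutoff_hz = 1000.0 + treble_knob * 9000.0
    b, a = butter(1, cutoff_hz / (fs / 2))
    return lfilter(b, a, x)

def hybrid_amp(x, treble_knob):
    # The Capture handles the non-linear part; the modelled stack handles the knob.
    return modelled_tone_control(captured_nonlinearity(x), treble_knob)

y = hybrid_amp(0.1 * np.random.randn(48000), treble_knob=0.7)
```

The point being that the knob behaviour comes from a circuit model rather than from guessed data ... the rest can wait for that other post.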

Ben
 
Response by NAM designer?


Nothing to do with this forum/thread if that's what you were wondering - just coincidental timing

And the blog says he's had the knob-capturing capability for over a year. Why hasn't he shared it in all this time, and why is there no open-source code? Does he want to commercialize this aspect of development and license it to amp modeling companies or something?

Some part of that framework was already in NAM - it was actually just removed in the last commit. Literally not a single person I know of used it (or used it successfully, at least), as it definitely required some real ML knowledge to understand and apply. I don't know his process (nor could I share it if I did), but it didn't fall into the category of the really simplified stuff for the GUI / colab and typical end users, and it added extra workload to support and regression tests to worry about.

Actually, I think people tend not to realize just how deep this stuff gets because they're used to such basic, user-friendly implementations like ToneX. There's much more complex math happening than "measure point A and point B and interpolate with a standard function".
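For a flavour of what "deeper than interpolation" can look like - and to be clear, this is just the textbook approach from the parametric-modelling literature, not Steve's code and not what was in the NAM repo - the knob positions get fed into the network alongside the audio, so the model learns the whole control surface instead of blending between snapshots:

```python
import torch
import torch.nn as nn

# Illustrative only: condition a small recurrent model on the knob settings
# by concatenating them with every audio sample.
class KnobConditionedLSTM(nn.Module):
    def __init__(self, n_knobs: int, hidden: int = 32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1 + n_knobs, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, audio, knobs):
        # audio: (batch, samples, 1); knobs: (batch, n_knobs), each in 0..1
        k = knobs.unsqueeze(1).expand(-1, audio.shape[1], -1)
        h, _ = self.lstm(torch.cat([audio, k], dim=-1))
        return self.head(h)

model = KnobConditionedLSTM(n_knobs=3)   # e.g. Drive, Tone, Volume
out = model(torch.randn(1, 4800, 1), torch.tensor([[0.5, 0.7, 1.0]]))
```

Actually doing that well across the whole knob space is presumably where the real work (and the data collection) is.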

Someone in this thread asked, and yes - NAM models are deterministic
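Deterministic in the sense that the same input always produces the same output - there's no randomness at inference. A trivial, self-contained illustration (a stand-in toy model, not an actual NAM model):

```python
import torch
from torch import nn

torch.manual_seed(0)
net = nn.Sequential(nn.Linear(1, 8), nn.Tanh(), nn.Linear(8, 1))  # stand-in model
x = torch.randn(4800, 1)

with torch.no_grad():
    y1, y2 = net(x), net(x)

assert torch.equal(y1, y2)   # same input in, identical output out - no randomness at inference
```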
 
What I didn't post was that for him to claim he has had this process built in for over a year ... and never "promoted" it until now ... and it's somehow an "open source" project?!?!? .... that struck me as, at best, disingenuous and as having some sort of ulterior motive ... but I'm a cynical bastard ... maybe a hardware product is about to launch?

No - to be clear - NAM, as in the trainer and core out there now, is open source, but that doesn't mean everything Steve does must also be open source. It's totally up to him whether he chooses to open source or distribute this code at some point, but there's no obligation to do so just because NAM is. I don't see anything disingenuous about it. Previous commits with the parametric pieces can still be pulled - anyone is free to try and learn to do it better or their own way, and build their own integrations for it too, provided they have the skill set.

I must admit that when I read Steve's blog .... (a) it explained the problem (b) he was using or going to use the linked solutions methodology to overcome those problems and (c) he has the process down to an hour (?)

And seriously ..... what was the point of the video he attached in his blog ..... it showed / proved nothing ..... a classic Looney Tunes episode would have been much more informative ;)

I don't think it was meant to prove anything - it was a demo of the plugin. Just try to remember this is one guy working on this stuff in his spare time - he's not a company and isn't selling anything. If the process works it works, and if not then that's fine too. For now, enjoy the free plugin I guess
 
Curious about a comparison ..... my understanding is that with Tonex .... the absolute highest quality Capture ... done and rendered with the fastest compatible GPU [and good CPU] still takes around ~8 to ~9 minutes - give or take

Is this roughly the same for the highest quality / 500 or 1000 epoch NAM Capture and render?

Ben
 
Curious about a comparison ..... my understanding is that with Tonex .... the absolute highest quality Capture ... done and rendered with the fastest compatible GPU [and good CPU] still takes around ~8 to ~9 minutes - give or take

Is this roughly the same for the highest quality / 500 or 1000 epoch NAM Capture and render?

Ben

Hehe - there's not a good way to answer this, because quality depends on both the model and the epochs, and there's really no "best" as you can configure it however you want. But the best model provided by the colab/GUI is the standard model, and I can run around 50 epochs/minute on that with my 4070 Ti. I often run much longer though (1300 epochs), just because I can; the ESR will continue to drop to a point, and it won't make the model any heavier. The lite/feather weights are more comparable to ToneX I think, and they go quicker too.
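For anyone wondering what the ESR number the trainer reports actually is - it's the error-to-signal ratio, which (ignoring any pre-emphasis filtering) boils down to:

```python
import numpy as np

def esr(target: np.ndarray, prediction: np.ndarray) -> float:
    """Error-to-signal ratio: energy of the error relative to the energy
    of the target signal. Lower is better; 0 means a perfect match."""
    return float(np.sum((target - prediction) ** 2) / np.sum(target ** 2))
```

So running more epochs keeps nudging that number down until it plateaus, without changing the size or CPU cost of the final model.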
 
Curious about a comparison ..... my understanding is that with Tonex .... the absolute highest quality Capture ... done and rendered with the fastest compatible GPU [and good CPU] still takes around ~8 to ~9 minutes - give or take
Absolutely not. I have an AMD 5950X with an NVIDIA 4090, and ToneX still takes about 15-20 minutes to train a capture.

My Blender renders were dramatically improved when I got the 4090. But my ToneX and NAM captures didn't benefit very much.
 
Absolutely not. I have an AMD 5950X with an NVIDIA 4090, and ToneX still takes about 15-20 minutes to train a capture.

My Blender renders were dramatically improved when I got the 4090. But my ToneX and NAM captures didn't benefit very much.

Oh .... ok .... I got my 8/9-minute time from Karlis in the Amalgamaudio thread at TOP. I remember him posting that he had just spent a small fortune on a high-end PC and the fastest GPU card he could find in stock, and he said he was "disappointed" that it was still taking 8/9 mins per best ToneX Capture?!?!? I've never done one so I can't speak with any authority ... so maybe he wasn't telling the full story?

Ben
 
Oh .... ok .... I got my 8/9-minute time from Karlis in the Amalgamaudio thread at TOP. I remember him posting that he had just spent a small fortune on a high-end PC and the fastest GPU card he could find in stock, and he said he was "disappointed" that it was still taking 8/9 mins per best ToneX Capture?!?!? I've never done one so I can't speak with any authority ... so maybe he wasn't telling the full story?

Ben
I don't know, but I've never had anything like 8 minutes. I think 15 minutes is the fastest I've had. I was quite disappointed when I spaffed a load of money on a top-notch NVIDIA card (the 4090 seems to be the fastest card you can get from them, with the most CUDA cores) and it only took about 8-10 minutes off the 3060 Ti training times I had.

But I mostly bought it for my 3D work anyway, so..... c'est la fuck it.
 