Ten billion profiles/captures = one amp

Re: the static capture/profile vs a modeler vs. a dynamic profile

  • Yes, that simply makes the capture as good as a top-quality amp sim.
    Votes: 11 (78.6%)
  • No, capturing is superior to even the best amp modeling to date, so that would be a Game Changer!
    Votes: 3 (21.4%)

  Total voters: 14
 
dynamic
/daɪˈnæmɪk/
adjective
1 [more dynamic; most dynamic]
  a : always active or changing
  b : having or showing a lot of energy
2 technical : of or relating to energy, motion, or physical force
Right, I get the definition.
I think the manufacturers who have adopted that term are suggesting that their IR gives you, the user, the ability to be the energy that 'dynamically' alters the sound characteristics, by giving you controls for mic position and so on.
Their pitch is that their one IR is less static than the competitors' because it is actually an interpolation of many IRs, and the changes you make manually are the 'dynamic' component.
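That "interpolation of many IRs" idea can be sketched in a few lines. This is a toy illustration only, not any manufacturer's actual algorithm; the function and variable names here are made up:

```python
# Toy sketch of "dynamic IR" interpolation (illustrative only): as the user
# moves a virtual mic-position control, crossfade between two pre-captured
# impulse responses instead of switching abruptly between them.
import numpy as np

def blend_irs(ir_close, ir_far, position):
    """Linearly blend two IRs; position = 0.0 is the close mic, 1.0 the far mic."""
    n = max(len(ir_close), len(ir_far))
    a = np.pad(ir_close, (0, n - len(ir_close)))  # zero-pad to a common length
    b = np.pad(ir_far, (0, n - len(ir_far)))
    return (1.0 - position) * a + position * b

# Two toy impulse responses of different lengths
close_mic = np.array([1.0, 0.5, 0.25])
far_mic = np.array([0.6, 0.4, 0.3, 0.1])
halfway = blend_irs(close_mic, far_mic, 0.5)  # "mic" halfway between the two
```

Real products presumably do something more sophisticated than a straight crossfade, but the basic trick is the same: the "dynamic" control just picks a point between static captures.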

So in marketing speak:
Static is a photograph of someone playing a Marshall amp.
Dynamic is a video of someone playing a Marshall amp.

And in jaded consumer speak:
A real amp is Eddie Van Halen on stage and me in the front row watching him unleash Eruption for the first time.
Model an amp/speaker that can deliver that experience and I'm on the waitlist with deposit money ready to send.
 
I’ve captured simple 1-3 knob amps (Pro Jr, Tweed Champ, Tweed Deluxe) in NAM. At 20-100 settings, I didn’t approach the full number of possible settings.

While not every iteration was captured, I think I got close to capturing the experience of the amps.
 
Response by NAM designer?

 
This sounds interesting *I think*? And I'm writing this as best I can while not being a KPA LP fan-boi :)

I read the link and it is unclear if (a) the new process requires manual knob moving (?) and/or (b) NAM "guessing" at different knob values (?) and/or (c) some combination of (a) and (b) and/or (d) some interpolation between multiple Captures (?)

I'm guessing (d)

I doubt it's (a), as my rough calculations show the number of knob and capture combinations would be in the hundreds of thousands or more.

If it's (b), NAM still faces the problem of guessing not just how one knob will realistically react, but how every knob will interact with every other knob at every setting.

And even if it's (d), you still run into the same problems as (a), but with even less raw accuracy and more interpolation.

I *may* be missing something (?) - not for the first time.

Two things come to mind ..... (i) no matter how good your AI is, guessing is still guessing .... if it doesn't know exactly what each knob really does across its full range, and how each knob interacts with each and every other knob across all of their full ranges ... then, yep, it's still just highly refined guess-work ..... and (ii) the simple genius of the KPA LP approach ... yes .. I know I'm biased :)

Why bother getting AI to guess stuff ... when you can have a profile know / match a real amp and EQ stack as part of the profiling, and then embed that real amp model into the profile itself.

I would pre-guess that a Tonex/NAM approach of getting the AI to guess accurately what should be happening is basically never [for a long time] going to be better than a Capture/Profile process that knows what real thing it is capturing, knows how that real thing responds, and embeds that real model into the actual profile.

Just my 2c.

Ben
 
Last edited:
Yeah, his explanation was pretty clear in describing the problem with reducing the points of reference that are used ... he describes that as delivering bad results. Then he says he has accomplished it without having to capture those points of reference, and explains that amazing feat ... by saying the blog post isn't big enough to describe it.
OK, I'll take him at his word because I know fuckall about it, but the first part of what he told me has me thinking the second part is wrong ... because he just told me it would be!
 
Yeah, his explanation was pretty clear in describing the problem with reducing the points of reference that are used ... he describes that as delivering bad results. Then he says he has accomplished it without having to capture those points of reference, and explains that amazing feat ... by saying the blog post isn't big enough to describe it.
OK, I'll take him at his word because I know fuckall about it, but the first part of what he told me has me thinking the second part is wrong ... because he just told me it would be!

^^ .... and even less for me ! :) ^^

Yep ..... this post / blog is really weird ..... he correctly points out the problem .... then seems to say he has solved it by doing the very thing he says is the problem ..... not knowing anything here, I can only assume it's some form of limited multi-capturing combined with AI-based interpolation between the capture points ..... not to be rude .... but otherwise known as "complex guessing" ;)

Oh ... and the inclusion of the video seemed to have nothing to do with what he is claiming (?)

I *really* like Steve and his approach and attitude .... a lot ..... this post however shed no light on anything other than his pretty contradictory explanation / possible approach.

But as always ... let's see what comes and judge it on what it does and does not do.

Ben
 
Machine learning is not guessing.

To my understanding

If you have the actual real data for what a gain knob at 5 does .... and ..... the actual real data for what a gain knob at 10 does .... but no data for all the micro-points in between ... you can only fill the unknown gap by using some sort of stimulus signal so the human / ML / AI process can "best estimate" what the response at those micro-points will be.

Now ... it might be bang-on ... or be totally sh%t ... but in lay terms, it is doing a very complex guessing-estimating process.
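To make the "fill the unknown gap" point concrete, here is a deliberately naive sketch (my illustration, not NAM's actual method, and the curves are stand-ins): given only measured input-to-output transfer curves at Gain = 5 and Gain = 10, the simplest possible estimate for an in-between setting is a weighted blend of the two.

```python
# Naive gap-filling between two captured settings (illustrative only).
import numpy as np

def estimate_curve(curve_at_5, curve_at_10, gain):
    """Linearly interpolate measured transfer curves for 5 <= gain <= 10."""
    w = (gain - 5.0) / 5.0  # weight: 0 at gain = 5, 1 at gain = 10
    return (1.0 - w) * curve_at_5 + w * curve_at_10

inputs = np.linspace(-1.0, 1.0, 101)
curve5 = np.tanh(1.0 * inputs)   # stand-in for the capture at Gain = 5
curve10 = np.tanh(4.0 * inputs)  # stand-in for the capture at Gain = 10
curve7 = estimate_curve(curve5, curve10, 7.0)  # the "best estimate" at Gain = 7
```

Whether that blend lands bang-on or totally off depends entirely on whether the real amp actually moves in a straight line between those two settings, which a nonlinear circuit generally does not.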

Ben
 
Response by NAM designer?

And stuff is getting real :cool:
 
in lay terms, it is doing a very complex guessing-estimating process.
Nope. Pattern recognition is not guessing, and you are oversimplifying it. Machine learning and neural network concepts are no more focused on "guesses" than you are when you look at five things that are blue and one thing that is green, and you're able to tell the difference and cluster the objects together in your mind based on colour.
 
Nope. Pattern recognition is not guessing, and you are oversimplifying it. Machine learning and neural network concepts are no more focused on "guesses" than you are when you look at five things that are blue and one thing that is green, and you're able to tell the difference and cluster the objects together in your mind based on colour.

I was no doubt over-simplifying it ...... but I *stubbornly* think I'm broadly right ....... pride before a fall and all that stuff :)

The difference in your analogy is that I *am* looking at 5 things that are blue and one that is green .. I recognise them ..... so my mind has some data to process and can cluster the items however it wants to.

In my example above, the ML / AI has no idea what the data between Gain on 5 and Gain on 10 is ..... it has to be given some sort of broad logic / roadmap / instructions / sonic stimuli etc. ..... in order to meaningfully process not just what data is there, but how that data is ordered.

If it is given absolutely no data, instruction, or stimulus of any kind regarding settings 6, 7, 8 and 9 ... the only possible / probable best-case scenario is that it bases its "best guesses" for the data points at 6, 7, 8 and 9 on extrapolating what it knows about Gain on 5 and Gain on 10 to "fill in the blanks" ... this kind of training could result in perfection, or chaos, or totally bizarro results.

Maybe my methodology explanation is totally wrong ;) (?) .... but I do have a degree in Theoretical Quantum Physics *

I have re-read Steve's post a few times now .... maybe in his mind and in his analysis he has "unscrambled the egg" ..... but there is nothing in his post that conveys that in any coherent or explanatory manner.

Ben

* - not
 
I know the NAM product sounds good, so if the guy who mastered that says he has made further progress, I'm all in until my ears say otherwise - but that explanation went off the rails right when I thought I was going to learn something.
 
Hmmmmm ....... interesting

In Steve's blog ... he refers to the Proteus knob-capture process as "a way" of demonstrating / doing this .... I just went to their website and watched the whole video they put up a year ago, at about the same time Steve said he put this ability into NAM 0.3 - see video below.

They captured only 5 different gain-knob settings ... nothing else ..... then got the ML / AI to interpolate between them ........ long story short ...... the result is "pretty crap" when comparing the real gain-knob sounds at those 5 settings to the ML/AI results, as is evidenced in their video ..... not to mention how it might "sound" in between those settings ..... which he did not demonstrate.

I also read through the "solutions" wiki Steve referred to in his post - actually surprised that I followed the very broad gist of it more than I thought I would .... it requires a lot of statistical modelling, criteria setting and, by definition, lots of raw base data / statistical information that is as accurate as possible etc.....

To be crystal clear .... I am *not* saying this can't / won't be done really well at some point ..... and clearly Steve has the static capture stuff pretty much spot on.

My best guess as to how he will "do it" is some form of limited multi-knob capturing plus detailed ML/AI interpolation .... I can't see or think of any other way to give the ML/AI the data / information it needs to estimate/extrapolate the outcomes being sought.

And f.w.i.w .... based on my distant maths memory .... Gain, Bass, Mid, Treble, Presence, M-Vol = 6 knobs .... even with just 5 data points on each .... that works out to 5^6 = 15,625 data points / captures to accurately measure their real interactivity ........ do 10 increments over each of the 6 knobs and then it's 10^6 = 1,000,000 data points / captures to accurately measure their real interactivity .... I love big numbers :)
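The grid arithmetic is just exponentiation: with k measured positions on each of n independent knobs, the number of distinct combinations is k to the power n. A quick check:

```python
# Count the capture grid: k positions per knob across n knobs = k ** n settings.
def capture_count(n_knobs, positions_per_knob):
    return positions_per_knob ** n_knobs

five_point_grid = capture_count(6, 5)    # 5 positions on each of 6 knobs
ten_point_grid = capture_count(6, 10)    # 10 positions on each of 6 knobs
```

Five points per knob on a six-knob amp is 15,625 combinations; ten points per knob pushes it to 1,000,000.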

As he points out himself, and as the wiki link also notes ...... fewer data points means more work for the ML/AI, which leads to worse / less realistic outcomes.

But whatever approach he uses, or will use, it will hopefully be a "really lot better" than the approach he referred to in his post [and the related video ].

Could be interesting times in the next year or two or 15 :) ;)

Ben

 
I think that multiple knob-position captures plus interpolation will become a bit more common once the actual capturing process becomes less time-consuming - but I also think it'll sort of never be able to capture everything if you're using more complex nested gain and tone stacks (unless you decide to spend weeks or months creating accurate captures). The way some tone stacks interact with each other and with the gain structure will just involve too many variables, so you may have to recapture your treble knob depending on where you placed the gain or mid controls or whatever.
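The "recapture your treble knob depending on where you placed the gain" point can be shown with a toy two-stage model (purely illustrative, not how any real amp or capture process works): when a tone control sits after a clipping stage, its effect depends on the gain setting, so the two knobs cannot be measured in isolation.

```python
# Toy demo of knob interaction (illustrative only): a "treble" boost applied
# after a clipping gain stage acts on harmonics the clipping created, so its
# audible effect changes with the gain setting.
import numpy as np

def toy_amp(x, gain, treble):
    clipped = np.tanh(gain * x)                     # nonlinear gain stage
    bright = np.diff(clipped, prepend=clipped[0])   # crude high-frequency emphasis
    return clipped + treble * bright

x = np.sin(np.linspace(0.0, 20.0, 200))
low_gain = toy_amp(x, 1.0, 0.5)
high_gain = toy_amp(x, 8.0, 0.5)
# Same treble setting, very different treble contribution: clipping at high
# gain generates harmonics that the treble control then boosts.
```

This is the separability problem in miniature: capturing each knob independently assumes the stages don't interact, and in a real amp they do.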

I think they should rather come up with something like Kemper's Liquid Profiling and expand on it. So that you could dive into the tone controls and change their frequency, bandwidth and individual location. Or so that you could even add more than a single tone stack, allowing you to place one pre- and one post-amp (with an option letting you decide which parameters you'd actually like to see in an idealized tone-stack view).
 