Ten billion profiles/captures = one amp

Re: a static capture/profile vs. a modeler vs. a dynamic profile

  • Yes, that simply makes the capture as good as a top quality amp sim.

    Votes: 11 78.6%
  • No, capturing is superior to even the best amp modeling to date so that would be a Game Changer!

    Votes: 3 21.4%

  • Total voters
    14
If you have an amp with 10 knobs (no other switches, channels, etc.) and you ‘capture’ every permutation of the knob settings, you can capture the ‘whole’ amp.
Then you have a control ‘model’ that lets you adjust 10 virtual knobs to dial in the amp sound you like,
and you have some kind of machine learning to interpolate the missing data between the captured states of the amp.

Do you then simply have a top quality amp simulation? Do you have something better?
 
you have some kind of machine learning to interpolate the missing data between the captured states of the amp.
This is how Neural DSP does their “models” and plug-ins.
Do you then simply have a top quality amp simulation? Do you have something better?
You have something in-between a profiler and modeler.

The modeler will technically have an infinite number of knob combinations because it is modeling actual electronic components.

The interpolation method will have less and is an inferior technology (at the moment) IMO.
 
I agree, it does seem to be the inferior technology for now. BUT what if what’s there is great for 99% of the people anyway? Also, I can’t believe I’m saying anything vaguely positive about NDSP. :D
 
So if the choices were:
*it would still be inferior to a modeler
*the modeler is technically more accurate but we won’t hear the difference
*just buy a QC and go play your guitar, Ringo
 
I hear the difference between the QC and Fractal and prefer Fractal personally. Some QC models sound great! Others sound quite bad. The Uber on the QC sounds like angry bees.

If I were choosing solely on amp models, it would be Fractal, no doubt. I wouldn't necessarily call the QC superior. But maybe NDSP just isn't taking advantage of the tech.
 
If you try to subdivide the data between each of the 10 digits on the knob just a bit more before leaning on the inaccurate interpolation to fill in the gaps, so you capture knob positions 1, 1.25, 1.5, 1.75, 2, etc., you go from 10 billion combinations (10^10) to roughly 4.8 quadrillion (37^10)!

I’m wondering how accurate our hearing is, since even that isn’t a drop in the bucket of ‘infinity’.
At some point being better is just a pedantic circle jerk!
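The knob-count arithmetic above is easy to sanity-check. A minimal sketch, assuming the hypothetical 10-knob amp with positions sampled either at whole numbers or at quarter steps:

```python
# Combinatorics of capturing every knob setting of a hypothetical 10-knob amp.
knobs = 10

# 10 integer positions (1..10) per knob:
coarse = 10 ** knobs  # the "10 billion" in the thread title

# Quarter-step positions 1, 1.25, 1.5, ..., 10 give 37 positions per knob:
positions = int((10 - 1) / 0.25) + 1  # 37
fine = positions ** knobs

print(f"coarse grid: {coarse:,} captures")
print(f"quarter-step grid: {fine:,} captures")  # ~4.8 quadrillion
```

Each extra subdivision multiplies the total exponentially, which is why the numbers run away so fast.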
 
Word on the street is, this is exactly what NeuralDSP plugins are.
I don't buy it simply because of the logistics.

For example, let's consider a relatively simple amp with six knobs: Gain, Bass, Mid, Treble, Presence, Master Volume. Now let's say we use a fairly coarse resolution and sample each knob at only 10 positions (1, 2, 3, ..., 10).

That's 1M possible combinations. 1 million!!!

Now let's assume that this so-called machine learning can learn the response in a mere 120 seconds, which is extremely optimistic. That means it will take 120M seconds to learn all possible combinations of the controls.

120M seconds = 2M minutes = 33,333 hours = 1,388 days = 3.8 years.

Add a Depth knob and the time increases to 38 years.
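The estimate above can be reproduced in a couple of lines, assuming the same six knobs, 10 positions each, and 120 seconds per capture:

```python
# Back-of-envelope check on the capture-time estimate above.
combinations = 10 ** 6          # six knobs, 10 positions each = 1,000,000
seconds = combinations * 120    # 120 s of training per combination
years = seconds / 3600 / 24 / 365
print(f"{combinations:,} combinations -> {years:.1f} years")  # ~3.8 years

# Adding a seventh knob (e.g. Depth) multiplies everything by 10:
print(f"with one more knob: {years * 10:.0f} years")  # ~38 years
```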
 
I don't buy it simply because of the logistics.

Was going to state the same: the math just doesn't add up. IIRC Doug Castro once mentioned that their approach to models/plugins is a mixture of component modeling and profiling/capturing.

I also have serious doubts about the Quad Cortex using any sort of neural networks for captures.
 
There are a variety of reasons I'm opposed to all this profiling/ML/AI/etc. stuff:

- To fully sample an amp takes years/decades, so in practice you only get a handful of snapshots.

- The data is opaque. You can't edit the data as there are no parametric relationships. It's just a bunch of data with no insight into what any of it means.

- Fundamental understanding of how an amp works is lost. Someone can make/sell a product that has samples without any understanding of why tube amps sound the way they do. The more people lean on this technology the more this knowledge will be lost to time.

- You can't make virtual amps. You can't design an amp completely in the virtual domain. You can only sample what already exists. So you can't make a virtual amp that does things that real tube amps can't do (i.e. FAS Modern).

- Guitar tone will never evolve. If we relegate ourselves to simply copying existing products we'll never evolve beyond that. We should be asking why did tube amps become the gold standard of guitar tone? Why did solid-state never gain widespread acceptance? What is it about tube amps that is pleasing? What can we improve upon? I have spent almost two decades now trying to answer those questions and I have some theories.
 
I also have serious doubts about the Quad Cortex using any sort of neural networks for captures.
Biting my tongue. Let's just say that this industry has more P.T. Barnums than it does P.T. Farnsworths.
 
I don't buy it simply because of the logistics.

I reckon it's more like they have low-gain, mid-gain, and high-gain variations, and they interpolate between them. But I really don't know.
 
I'm no tech graduate, obviously, but wouldn't it be possible to take 5-10 captures of one amp and then triangulate from there?
 
For some amps this may work OK, but for others, where the controls are truly interactive and/or very non-linear, your finished model probably still won't react the way the real amp does.

Still better than a single capture though.
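For what it's worth, the "triangulate from a few captures" idea can be sketched as a simple linear blend between the neighbouring captures along one knob. This is purely illustrative: the function name and the reduction of each capture to a fixed-length vector (e.g. an impulse response or a set of network weights) are assumptions, and as noted above it only behaves plausibly where the amp's response varies smoothly with the knob.

```python
import numpy as np

def interpolate_capture(captures, knob_positions, target):
    """Linearly blend the two captures whose knob settings bracket `target`."""
    knob_positions = np.asarray(knob_positions, dtype=float)
    idx = np.searchsorted(knob_positions, target)
    idx = int(np.clip(idx, 1, len(knob_positions) - 1))
    lo, hi = knob_positions[idx - 1], knob_positions[idx]
    t = (target - lo) / (hi - lo)  # 0 at the lower capture, 1 at the upper
    return (1 - t) * captures[idx - 1] + t * captures[idx]

# Toy data: captures taken at gain = 1, 5, 10; ask for gain = 7.
caps = np.array([[0.1, 0.2], [0.5, 0.6], [1.0, 1.2]])
print(interpolate_capture(caps, [1, 5, 10], 7.0))
```

Interactive controls are exactly where this falls down: a blend along one knob can't know that, say, the tone stack's response changes shape as the gain comes up.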
 
The Uber on the QC sounds like angry bees.
My mate said the same thing about the Uber model, so he brought his QC to my studio and we compared the model to my real amp. Sounded and behaved basically the same. They’re just fussy amps to dial in and sometimes need weird settings. Same goes for the Rectifier and a few other models that get slated a bit; I think they just need to be dialled in carefully.

I think overall traditional modelling has a lot of advantages. There are so many permutations and interactions that can make a set of captures redundant as you are only capturing under a specific state. To do capturing well you have to have a good understanding of the analog circuits and behaviours still to know exactly what you’re capturing.

On top of the time needed to model an amp, how can you ensure valves and components are behaving the same the entire time? Are you checking wall voltages and bias? If something changes during the process, do you restart from the beginning?

It seems NDSP split their preamp and power amp models, which must cut a lot of permutations down. Guessing they have multiple machines working on it and probably several amps on the go at once.

Even still, I think there are some trade-offs where machine learning simply doesn’t make sense: it takes more time and effort and is less useful in the long run. The ideal seems to be to isolate parts of the circuit where it makes most sense and choose the most appropriate tech for whatever you’re doing.

Despite my belief that, if possible, algorithmic is better, sampling does have some merit too. You aren’t approximating the circuit; it IS the circuit. I generally prefer algorithm-based synths over sample-based ones and algorithmic reverbs over IR-based ones, and generally don’t really listen to much sample-based music (although I do work a lot on it!). There has been a ton of cool stuff made from sampling and it has artistic and technical merit to explore. Even if I don’t always dig it, it has evolved and led to some cool things.

Isn’t Bogren Digital also doing something similar to NDSP with their amp models? And some smaller company made a NAM-based sim with a full control set.
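On the preamp/power-amp split: capturing the stages separately turns a product of permutations into a sum, which is where the big savings would come from. A rough sketch, assuming a hypothetical 6-knob amp with 4 preamp knobs and 2 power-amp knobs:

```python
# Why splitting preamp and power amp cuts permutations: captures add
# instead of multiplying. Hypothetical 6-knob amp, 10 positions per knob.
positions = 10
joint = positions ** 6                     # capture the whole amp at once
split = positions ** 4 + positions ** 2    # capture each stage separately
print(f"{joint:,} vs {split:,}")
```

Recombining the stages faithfully at playback is the hard part, of course; the point here is only the count.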
 