What's The Difference Between A "Tone Capture" And An "Amp Sim" ?

Incorrect. There are no "underlying models" in the NAM/ToneX process.
Perhaps you do not fully understand the relationship between physical processes and attempts to simulate (aka "model") those processes mathematically. Regardless of the type of core algorithms used in adaptive processes, those core algorithms are correctly characterized as models. Furthermore, the result of a successful adaptation using a computer is necessarily a collection of mathematical operations. IOW, a model.
NAM/ToneX take 2 audio pairs and create a Neural Network "model" from scratch,
"From scratch" is doing a whole lot more work in that sentence than you realize.
 
This looks like the reason NDSP amp sims sound so much better than IK Multimedia amp captures.

They're using Skynet® technology.

 
Neural network based capture tech is a form of modelling when looked at from the comp-sci engineering and DSP perspective:

These things are definitely producing models. It’s a learned function approximation — a black-box system trained to replicate the input-output behavior of an amp or pedal. It is a model, in the truest mathematical and scientific sense.

Theoretically, it doesn't do it from scratch. It is going to be based on training data at some level. If there weren't a ton of Marshalls inside the training data, then the model would produce bad Marshall models.

But it isn't the typically understood meaning of "modelling" when it comes to digital amplification. Most people who use the term "modelling" are referring to component modelling at the schematic level - i.e. an amp model actually has a network of resistors that have been modelled, it actually has a model of a tube, it actually has models of a power transformer and an output transformer.

So there is the scientific versus the layman's use of the word. Both are valid descriptors.

But it is really a distinction between functional modelling, versus structural modelling.
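To make that distinction concrete, here's a toy sketch in plain numpy (all component values and the polynomial fit are illustrative assumptions, not anyone's shipping code). The structural model solves an actual circuit equation for a diode clipper; the functional model just fits the measured input/output curve without knowing any circuit exists.

```python
import numpy as np

# Structural model: solve the diode clipper's circuit equation
#   v_in = v_out + 2 * R * Is * sinh(v_out / Vt)
# (series resistor into antiparallel diodes) with Newton iterations.
# Component values are illustrative only.
def structural_clipper(v_in, R=2.2e3, Is=1e-12, Vt=0.02585, iters=50):
    v = np.zeros_like(v_in)
    for _ in range(iters):
        f = v + 2 * R * Is * np.sinh(v / Vt) - v_in
        df = 1 + 2 * R * Is / Vt * np.cosh(v / Vt)
        v -= f / df
    return v

# Functional model: fit the same input->output behaviour with a polynomial,
# knowing nothing about resistors or diodes -- just matching the curve.
x = np.linspace(-1.0, 1.0, 201)
y = structural_clipper(x)            # stand-in for "measured" amp data
coeffs = np.polyfit(x, y, deg=9)     # the "learned" black-box model
functional_clipper = np.poly1d(coeffs)

err = np.max(np.abs(functional_clipper(x) - y))
print(f"max fit error on training range: {err:.4f}")
```

Both reproduce the same clipping on the training range; only the structural one lets you meaningfully change a "component."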
 
This looks like the reason NDSP amp sims sound so much better than IK Multimedia amp captures.
I doubt it.

It is primarily going to come down to the resolution, the tuning, and the choice of the particular NN framework.

NAM uses WaveNet.
I believe ToneX uses an LSTM.

They aren't the same kind of process.
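For the curious, that architectural difference can be sketched in a few lines of numpy (purely illustrative toy code; the sizes and random weights are my assumptions, not NAM's or ToneX's actual implementations). An LSTM is recurrent, carrying hidden state from sample to sample; a WaveNet-style model is feedforward, stacking dilated causal convolutions so the receptive field grows exponentially.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# --- LSTM-style: recurrent; hidden state carries memory sample to sample ---
H = 8                                           # hidden size (arbitrary)
W = rng.standard_normal((4 * H, H + 1)) * 0.1   # gate weights see [hidden, sample]
proj = rng.standard_normal(H) * 0.1             # hidden state -> output sample

def lstm_process(x):
    h, c = np.zeros(H), np.zeros(H)
    out = np.empty_like(x)
    for t, sample in enumerate(x):
        z = W @ np.concatenate([h, [sample]])
        i = sigmoid(z[:H]); f = sigmoid(z[H:2*H])
        g = np.tanh(z[2*H:3*H]); o = sigmoid(z[3*H:])
        c = f * c + i * g
        h = o * np.tanh(c)
        out[t] = proj @ h
    return out

# --- WaveNet-style: feedforward; dilated causal convolutions widen context ---
def dilated_causal_conv(x, kernel, dilation=1):
    y = np.zeros_like(x)
    for k, w in enumerate(kernel):              # y[t] += w * x[t - k*dilation]
        shift = k * dilation
        if shift >= len(x):
            continue
        y[shift:] += w * (x if shift == 0 else x[:-shift])
    return y

x = rng.standard_normal(64)
y_lstm = lstm_process(x)

y_wavenet = x
for dil in (1, 2, 4, 8):                        # receptive field doubles each layer
    y_wavenet = np.tanh(dilated_causal_conv(y_wavenet, [0.5, 0.3, 0.2], dil))
```

Either way it's still just a learned mapping from input samples to output samples; the difference is in how the context window is built.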
 
If it's all modeling, then can't it be split into two basic paths of creation?
1.) Schematic simulators. I may be wrong, but I believe that Fractal/L6/etc. are built around proprietary versions of circuit design software similar to SPICE, an advanced circuit simulator used across high-tech industry for pre-manufacturing design. This approach yields more flexibility in customization after creation.
2.) The capture/profile approach (NDSP, ToneX, NAM) applies algorithms to a recorded response (multiple frequency-response sweeps, if you will) of a source (amp, cab, pedal, etc.). Some are more AI-driven than others. You need multiple captures/profiles to cover the full spectrum of sounds of the device you're "copying".

I have my asbestos boxers on, so flame away. I'm not an expert, I just like starting wars and watching the pedants show off their social skills from behind a screen.
 
This looks like the reason NDSP amp sims sound so much better than IK Multimedia amp captures.

They're using Skynet® technology.

So there's a research paper from NeuralDSP explaining this thing. It's technical, but the basic idea is to record the training data for an amp at a randomized set of knob combinations. Because e.g. 10 samples per knob would quickly blow up to 10^(number of knobs) combinations, they instead record a subset that is enough to cover the knob behavior.

Now NDSP knows enough of how the amp sounds, and how its controls create that sound. They can then feed that info to the neural network.

It's basically a more complex form of capturing that results in a model which, instead of knowing one sound at one setting (like a single capture), knows all the possible permutations and the control positions that make them.

The model still has no concept of what an amp is, it just knows that if it's configured like X, then input A should result in output B.
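That "configured like X, input A gives output B" idea can be sketched as a black-box function of both the audio and the knob vector (toy numpy code with random stand-in weights; the knob names and sizes are my assumptions, and the real NDSP architecture is certainly different):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "conditioned" black box: the model sees the input sample *and* the knob
# vector, so one model covers all control settings. The random weights stand
# in for whatever training on randomized knob combinations would learn.
N_KNOBS = 3            # e.g. gain, bass, treble (illustrative)
HIDDEN = 16
W1 = rng.standard_normal((HIDDEN, 1 + N_KNOBS)) * 0.5
W2 = rng.standard_normal(HIDDEN) * 0.5

def conditioned_model(sample, knobs):
    """Output B = f(input A, configuration X) -- no concept of 'an amp'."""
    features = np.concatenate([[sample], knobs])
    return float(W2 @ np.tanh(W1 @ features))

# A plain single-setting capture is the same idea with the knobs frozen:
def single_capture(sample, frozen=np.array([0.7, 0.5, 0.5])):
    return conditioned_model(sample, frozen)

# Same input sample, two gain settings -> two different outputs:
print(conditioned_model(0.3, np.array([0.1, 0.5, 0.5])),
      conditioned_model(0.3, np.array([0.9, 0.5, 0.5])))
```

The model never "knows" what the knobs do physically; the knob vector is just more input data it learned to map through.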

Correct me if I'm wrong.

But this has not led to Neural DSP putting out more models, or overhauling existing ones other than a few Fender models. Unfortunately there also seem to be zero comparisons between the old vs new Fender models, as the plugin thing overshadowed these new models completely.
 
So there's a research paper from NeuralDSP explaining this thing. It's technical, but the basic idea is to record the training data for an amp at a randomized set of knob combinations. Because e.g 10 samples for each knob would quickly result in 10^number of knobs, instead they record a subset that is enough to cover the knob behavior.
Yeah, I mean that part was intuitive for me (and quite enlightening) watching that robot turn all the knobs on the amp. It starts to make sense how NDSP can get their amp sims to sound the way they do: saturated, full, with all the quirks and even some of the weak points of the real thing.

This method allows you to interact with the (simulated) amp controls and experience a very similar result to the real amp in its feel and behavior.

This, versus a "capture" like what IK Multimedia is doing with the ToneX stuff, where it sounds/feels like nothing more than some kind of EQ overlay: simply an audio filter that your guitar signal is running through.
 