NAM: Neural Amp Modeler

Sounds good.

Assuming A is the NAM, I feel the difference is similar to what I feel from the studio captures, but less so.




Has NAM improved with full rig captures?
NAM only had an issue with full-rig captures for a few months after it blew up. Steve came up with a fix shortly after (the fit_cab flag, which is now on by default in the trainer and even on Tone3000).
 
Oh, yeah - definitely easy to get confused by all those decimals; happened to me as well way too many times to keep track of lol.
Ha yeah, I got so used to the decimals being in the same kind of ballpark. I usually accept around 0.1 for high-gain tones; if it's higher it usually sounds a bit different, but for a lot of gain it can be hard to get much lower.
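For anyone wondering what those decimals actually measure: ESR is an error-to-signal ratio between the reamped target and the model's prediction. Here's a minimal sketch of the basic, unweighted form (the trainer's reported number may apply pre-emphasis filtering on top of this, so treat it as the textbook version only):

```python
def esr(target, prediction):
    """Basic error-to-signal ratio: residual energy divided by target energy.

    The NAM trainer may apply extra weighting/pre-emphasis before computing
    its reported metric; this is just the plain form.
    """
    residual = sum((t - p) ** 2 for t, p in zip(target, prediction))
    energy = sum(t ** 2 for t in target)
    return residual / energy

# a perfect match scores 0.0; predicting pure silence scores 1.0
assert esr([1.0, -0.5, 0.25], [1.0, -0.5, 0.25]) == 0.0
assert esr([1.0, -0.5], [0.0, 0.0]) == 1.0
```

So 0.01 means the residual carries about 1% of the target's energy, which is why the thread treats anything under that as a good capture.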


The xSTD will probably get you a hair under 0.01, and if you set the "gated" flag to "true" in either one of those layers in the xSTD, it should bump it down even lower, though it comes with a slightly increased CPU cost at runtime.
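For anyone building custom architectures outside the Tone3000 UI: in the open-source trainer, the gated option sits on each WaveNet layer array. Here's a hypothetical layer entry showing where the flag goes; the key names are assumed from the trainer's JSON config and may differ between versions, so double-check against your install:

```python
# hypothetical per-layer entry for a custom NAM WaveNet architecture;
# key names assumed from the open-source trainer's config (verify per version)
layer_array = {
    "channels": 16,
    "kernel_size": 3,
    "dilations": [1, 2, 4, 8, 16, 32, 64, 128, 256, 512],
    "activation": "Tanh",
    "gated": True,   # adds a second conv + sigmoid gate: more expressive, more CPU
    "head_bias": False,
}

assert layer_array["gated"] is True
```

The gating roughly doubles the convolution work in that layer, which is where the runtime CPU bump comes from.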

Nice! Testing it now, I'll see how they turn out.


Yeah, B just sounds more satisfying and better to me although it's quite a small difference. Been a while since I fired up ToneX but I might as well include an advanced model to see how close it gets to these.
 
I have one model training with a "gated" parameter set to true, but I'm trying to do another now, and when I change any of them I get this message:

Too many parameters. Please simplify your architecture.

Is that because of what I have training at the moment? Seems strange that I could do it a few minutes ago but not now.
 
There's a cap on how much tweaking Tone3000 allows so that some folks don't abuse the system lol.

Maybe try flipping it to "true" in the layer with only the 2 small dilations:

[attachment: screenshot of the layer settings]


This layer should deal with the high-end anyway.
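The intuition behind that: a WaveNet layer's receptive field grows with its dilations, so a layer with only two small dilations "sees" just a handful of samples and mostly shapes fast, high-frequency detail. A quick sketch using the standard dilated-convolution arithmetic (kernel size 3 is an assumption here, not taken from the screenshot):

```python
def receptive_field(dilations, kernel_size=3):
    # each dilated conv extends the receptive field by (kernel_size - 1) * dilation samples
    return 1 + sum((kernel_size - 1) * d for d in dilations)

# the layer with only two small dilations covers a few samples (fine/high-end detail),
# while a long dilation stack covers on the order of a thousand (low-end and sag)
assert receptive_field([1, 2]) == 7
assert receptive_field([1, 2, 4, 8, 16, 32, 64, 128, 256]) == 1023
```

That's why flipping gated=true on the short-dilation layer is the cheaper place to spend the extra CPU if the high end is what's off.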
 
Hmm, yeah no dice. Would appreciate it if anyone else has something to test and can check. I literally did it a few seconds ago, but I guess I'll wait for my existing models to train in case that's a factor.
 
I wouldn't worry too much since that ESR is already pretty darn good anyway. The xSTD models you're training will probably dip slightly below it.
 
0.1 is a typo?
If getting 0.01, I'd be happy enough with that.
Yeah, it was 0.01. And actually, removing the amp switcher did help lower the ESR a little more, to 0.0087.

I'm usually OK with these values as they sound good, but unfortunately here it still sounds off (newest clip is C) - the low end isn't quite right.



Training 2 xSTD models, one with the Ampete, one without. The one with the Ampete had gated=true on one layer, but I haven't been able to apply that since. Also doing a ToneX model just to see. Hoping I can get the low end a bit more satisfying here.
 

They're pretty close tho. Takes a bit of fiddling around, and it doesn't look like Tone3000 will allow too much wiggle room (just tried a few combinations in their UI for a few custom architectures).
 
A couple more with xSTD:



Definitely not unhappy with how they sound but this tone seems to show more differences to my ear than I'm maybe used to. There's a kind of characteristic heft on the palm mutes that's different (and the NAM models are a teeny bit more abrasive I think?).

Got the ESR down to 0.0071 on one model. Suppose I'd need to train on Colab or locally to tweak things more. LOL at how much I've done while waiting for ToneX to do a single model. Makes it so much more appealing to use NAM.
 

Training locally's the way to go for sure if you want to experiment. You can try some big a$$ architectures and incrementally trim them down until you strike that balance between resource usage and fidelity.

If you do train locally, I'm curious to hear how many seconds an epoch takes (on the STANDARD architecture) on that Mac of yours :sofa:
 
A few pages back, @DLC86 posted recommended learning rate values for xSTD. I’ve still achieved low ESRs if I forget to change them.

https://thegearforum.com/threads/tone3000-previously-tonehunt-and-tonezone3000.6919/post-320447

From what I’ve observed, xSTD brings the ESR down to the 0.001 to 0.002 range if the Standard capture is just under 0.01.
How did I miss those learning rate values? Nice one, trying another model now with the values posted there. Will add ToneX to the playlist above in the next few minutes.
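For context on where those values plug in when training outside Tone3000: the open-source trainer's learning config has an optimizer block and an LR scheduler. Here's a sketch with the trainer's stock defaults; the layout and numbers below are assumptions from the public repo, not the recommended values from the linked post (those are in the post itself):

```python
# sketch of the learning-rate portion of a NAM full-training config;
# layout and default values assumed from the open-source trainer
learning_config = {
    "optimizer": {"lr": 0.004},  # initial learning rate; this is what the tweak changes
    "lr_scheduler": {
        "class": "ExponentialLR",     # multiply the LR by gamma after every epoch
        "kwargs": {"gamma": 0.993},
    },
}

assert 0 < learning_config["optimizer"]["lr"] < 1
```

The scheduler matters as much as the starting rate: with exponential decay, a slightly different gamma compounds over a few hundred epochs into a very different final LR.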
 

To my ears the "No Ampete" clips sound closer to the real amp. Agree that some of them sound a bit more abrasive.

But I would have no complaints using any of these.
 
Last couple of clips added.

I used the custom learning rate values provided above - these got the ESR way down to 0.0037 for the example without the Ampete, and 0.008 for the version with. I think clip H gets the closest to the source tone. I can still hear some differences but it's the closest so far to getting the distinctive characteristic of the amplifier correct. The low end is still not quite there, and the real amp feels a hair smoother and less abrasive.

Thanks to all those who helped with the tweaks - starting at 0.01 and ending at 0.0037 is good going.

IMO capturing tech should be as close as it possibly can be to the source, so I applaud those pushing boundaries on making that accessible to others. I'll definitely be using these tweaks going forward; the differences might be small, but they're often in very important parts of the tone.

Even though all those clips sound passable and good enough, many of them feel like a slightly different amp to me (if I'm expecting that huge low end to punch through). I think this all just makes me want to use real amps even more on projects. All kinds of modelling are awesome for what they are, but there's some magic when you turn an amp on and dial a tone in. You just get THERE faster and you have something distinct with personality at the end of it.


 