NAM: Neural Amp Modeler

Yes - you can use checkpoint-based training to resume from a given checkpoint, which allows for such tweaks. It's a bit niche, but it can be done.
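A minimal sketch of what that looks like (plain PyTorch, not NAM's actual trainer code - the keys and model are illustrative): save a checkpoint, reload it, tweak the learning rate, and keep training.

```python
# Hedged sketch: resuming training from a checkpoint so hyperparameters
# (e.g. the learning rate) can be tweaked mid-run. Not NAM's real code.
import torch
import torch.nn as nn

model = nn.Conv1d(1, 1, kernel_size=3)              # stand-in for the real WaveNet
opt = torch.optim.Adam(model.parameters(), lr=4e-3)

# --- an earlier run would have saved something like this ---
torch.save({"model": model.state_dict(),
            "optim": opt.state_dict(),
            "epoch": 50}, "last.ckpt")

# --- resume: restore weights/optimizer, adjust the LR, continue training ---
ckpt = torch.load("last.ckpt")
model.load_state_dict(ckpt["model"])
opt.load_state_dict(ckpt["optim"])
for group in opt.param_groups:
    group["lr"] = 6.4e-3                             # the tweak you wanted to try
start_epoch = ckpt["epoch"] + 1                      # training loop picks up here
```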
Ok, I see Steve implemented NAM import in some of the latest versions, so you can now also open a NAM file instead of a checkpoint file. So you could download a NAM file from https://www.tone3000.com/ and use it as a starting point for your own capture, I guess. I wonder how much time that would save and what the results would be compared to full training on your own data set...
 
You're still going to need the wav files as well. As for the time things take to converge - I can't generalize; it's going to vary from case to case.
 
I put together a quick video to help users get the best experience while using calibrated profiles:



EDIT: I must have fat-fingered something & the first vid got rendered at a crap resolution; I've uploaded a new take
EDIT2: audio's fixed now
 
 
Just to save myself from needlessly running GPU cycles to test this...

xSTD with a learning rate of 0.0064 and a decay rate of 0.0023 seems to get about half the ESR of the default NAM training settings. So it's a good bit more accurate as far as ESR numbers go. Is this down to the tweaks to the code, or to altering the learning and decay rates?
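(For reference, ESR here is the usual error-to-signal ratio used in amp modelling: the energy of the error divided by the energy of the target. Some variants apply a pre-emphasis filter first; this quick illustrative version skips that.)

```python
# Illustrative error-to-signal ratio (ESR): sum of squared error over
# the target signal's energy. Lower is a closer match.
import numpy as np

def esr(target: np.ndarray, prediction: np.ndarray) -> float:
    err = np.sum((target - prediction) ** 2)
    return float(err / np.sum(target ** 2))

# Example: a prediction with half the error energy gets half the ESR.
t = np.random.randn(48_000)
noise = 0.01 * np.random.randn(48_000)
print(esr(t, t + noise))                 # baseline
print(esr(t, t + noise / np.sqrt(2)))    # ~half the baseline ESR
```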
 
It's actually a combination of both. You can tweak the architecture, but that alone may not give the best results by itself. Parameters such as the learning rate can have quite a noticeable effect: it tells the neural net what magnitude of adjustments it is allowed to make to its weights to match the output. If the learning rate is too high, training can vary widely and never converge to a good ESR. The decay cuts the learning "amount" defined by the learning rate with each epoch, so as your training progresses the network ends up making only very fine tweaks, which helps it converge.
Usually when you create a new architecture or augment the training signal, the learning rate & decay need to be tweaked at the very least.
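To make that concrete, here is a rough sketch assuming an exponential per-epoch schedule (one common way a "decay rate" is applied; the actual NAM scheduler may differ), using the rates mentioned above:

```python
# Hedged sketch of per-epoch learning-rate decay with an exponential schedule.
# lr = 0.0064 and decay = 0.0023 are the numbers quoted in the post above.
import torch
import torch.nn as nn

model = nn.Linear(1, 1)
opt = torch.optim.Adam(model.parameters(), lr=0.0064)
sched = torch.optim.lr_scheduler.ExponentialLR(opt, gamma=1 - 0.0023)

for epoch in range(3):
    # ... one epoch of training would go here ...
    opt.step()                    # dummy step so the scheduler order is correct
    sched.step()                  # shrink the LR by the decay factor each epoch
    print(epoch, sched.get_last_lr())
```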

There's also the NY parameter, which defines how many samples of audio the trainer looks at. By default it's about 170 ms. Tweaks to it almost always require changes to the architecture/dilations at the very least.
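For what it's worth, that ~170 ms figure lines up with a window of 8192 samples at a 48 kHz capture rate (illustrative numbers - check your own config):

```python
# Quick arithmetic behind the "~170 ms" default window length.
sample_rate = 48_000          # Hz, typical NAM capture rate
ny = 8_192                    # samples per training window (assumed)
print(f"{1000 * ny / sample_rate:.1f} ms")   # -> 170.7 ms
```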
 