NAM: Neural Amp Modeler

Finally tried making my own capture, and it was easy, at least once I got the signal chain sorted. I used Tonezone3000 to create the capture, and it turned out pretty well.

I'll make more captures of my BluGuitar Amp 1 Mercury Edition and try to put something up on ToneHunt.
 
Tonezone3000 REALLY has made things pretty amazing: a lot of the tediousness of Google Colab is solved here, and it's free. It's easier than ever to make high-quality NAM models fast and with minimum effort. I actually think it's surpassed the process Tonocracy had going for it now (aside from Tonocracy having some tools to measure your levels).
 
> Tonezone3000 REALLY has made things pretty amazing: a lot of the tediousness of Google Colab is solved here, and it's free. It's easier than ever to make high-quality NAM models fast and with minimum effort. I actually think it's surpassed the process Tonocracy had going for it now (aside from Tonocracy having some tools to measure your levels).
I agree 100%. If they get the bulk training feature sorted, it'll be a breeze to train a reamp pack for all default architectures (nano, feather, lite & standard). There's still some benefit to training locally, but it mostly comes down to squeezing out a bit more accuracy when there's a need for it.
I'm not sure if custom architectures will be a thing on Tonezone3000. I can see that being easily abused.
 
Re: Tonezone3000, would you folks recommend using the provided sweep, or rather a well-balanced mixture of my playing?
The latest input.wav contains a wide range of signal types and levels to cover essentially every type of response from an amp. If you aren't really experienced, I'd say it's the best way to go.

The wet and dry method is cool for older sessions where there might not be a reamp of the training file.
 
> The latest input.wav contains a wide range of signal types and levels to cover essentially every type of response from an amp. If you aren't really experienced, I'd say it's the best way to go.
>
> The wet and dry method is cool for older sessions where there might not be a reamp of the training file.

Thanks. I'll try to fool around a bit; this is finally looking sufficiently easy.
 
OK, first test run came out amazingly well. And it really went quite fast.

So, now I need a hardware box to load things. My goldilocks form factor would be two simultaneously loadable profiles (with options for serial or parallel) plus an IR loading option for both, including quick access to their main controls: gain, volume, BMT tone stack, endless pots w/ readouts. The latter would ensure I could quickly adjust either in a split second (wouldn't mind 10 controls either, if done well).
Why two simultaneously loaded patches? Because I could simulate a 2-channel amp that way. It should also offer a standard TS/TRS switch input.
Sure, there should be more memory slots w/ MIDI and whatnot, but personally, I'd always just use 2 for live (I've been doing it like that for decades already, regardless of whether it's channel-switching amps, pedal platform stuff, or modeling).
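Just to make the serial/parallel idea concrete, here's a toy sketch of how two loaded profiles could be routed (purely illustrative; the profile callables and the mix behavior are my assumptions, not any real unit's DSP):

```python
import numpy as np

def route_two_profiles(x: np.ndarray, profile_a, profile_b,
                       mode: str = "serial", mix: float = 0.5) -> np.ndarray:
    # profile_a / profile_b: callables mapping an audio buffer to an audio
    # buffer (e.g. two loaded NAM captures acting as amp "channels").
    if mode == "serial":
        return profile_b(profile_a(x))  # A feeds into B
    # parallel: blend both outputs, like running two amps side by side
    return mix * profile_a(x) + (1.0 - mix) * profile_b(x)

# Stand-in "profiles" (simple saturation stages, for demonstration only):
clean = lambda buf: np.tanh(1.5 * buf)
dirty = lambda buf: np.tanh(8.0 * buf)
out = route_two_profiles(0.1 * np.random.randn(4800), clean, dirty,
                         mode="parallel")
```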
 
Is the Tonezone3000 input file different from the Colab file? I have some NAM captures I'd like to finish off, but I have a familiar workflow going at the moment. I do want to check out the advanced parameters of Tonezone3000 that the Colab page doesn't offer.

My current method is to train 8 NAM files concurrently on Colab with 8 accounts. A bit cumbersome, but it gets me 8 Standard files trained at 750 Epochs in 1 hr 9 min. The training time seems to have crept up on Colab lately; I recall being able to do 800 Epochs in 1 hr 5 min.

If Tonezone3000 can simplify this, I will be all for it.
 
Fwiw, I changed things from 100 to 200 epochs and, as said, the result was pretty good already. Giving the same file a try at 400 to see whether it's worth the computing time.
 
> Is the Tonezone3000 input file different from the Colab file? I have some NAM captures I'd like to finish off, but I have a familiar workflow going at the moment. I do want to check out the advanced parameters of Tonezone3000 that the Colab page doesn't offer.
>
> My current method is to train 8 NAM files concurrently on Colab with 8 accounts. A bit cumbersome, but it gets me 8 Standard files trained at 750 Epochs in 1 hr 9 min. The training time seems to have crept up on Colab lately; I recall being able to do 800 Epochs in 1 hr 5 min.
>
> If Tonezone3000 can simplify this, I will be all for it.
Same input file (it can actually support anything as a training file), and I'm sure it'll do 800 epochs a lot quicker than 1 hr 5 min each.
 
> Fwiw, I changed things from 100 to 200 epochs and, as said, the result was pretty good already. Giving the same file a try at 400 to see whether it's worth the computing time.
I thought jumping up from the default of 100 was needed for my captures. I usually like to go up to around 700 Epochs for distorted sounds. I used to go for 1000 Epochs, but the ESR wouldn’t improve much after 600 or so in my case.
 
Didn't even take 10 minutes to create the file with 400 epochs. Comparing with the original, there are some small differences, but not only do they get lost in a mix; even when playing through the original (in my case the Grammatico from HX Native) vs. the NAM file, I can't hear any differences anymore once there's any music running along.
 
> I thought jumping up from the default of 100 was needed for my captures. I usually like to go up to around 700 Epochs for distorted sounds. I used to go for 1000 Epochs, but the ESR wouldn’t improve much after 600 or so in my case.
You can set a target ESR in TZ3000 (or maybe it's set by default) so the training stops when that number is reached, or when the number of epochs you've set is done. Edit: yup, it stops at the specified number of epochs if the target ESR is not reached :)
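So the stopping rule is effectively "target ESR or epoch budget, whichever comes first". A minimal sketch of that logic (the numbers and the stand-in training step are assumptions for illustration, not TZ3000's actual internals):

```python
import random

TARGET_ESR = 0.01   # assumed threshold; TZ3000's actual default may differ
MAX_EPOCHS = 400

def train_one_epoch() -> float:
    # Stand-in for a real training step; returns a fake validation ESR.
    return random.uniform(0.0, 0.05)

for epoch in range(1, MAX_EPOCHS + 1):
    esr = train_one_epoch()
    if esr <= TARGET_ESR:
        print(f"target ESR reached at epoch {epoch}")
        break
else:
    print("epoch budget exhausted before hitting the target ESR")
```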
 
I need to try the TZ3000 thing. I keep getting asked for NAM packs, as all I'm doing right now are KPA and QC.

I guess I need to figure out some damn reamping.
 
> I thought jumping up from the default of 100 was needed for my captures. I usually like to go up to around 700 Epochs for distorted sounds. I used to go for 1000 Epochs, but the ESR wouldn’t improve much after 600 or so in my case.
There is a benefit to doing 1000 epochs.

People are used to tracking the ESR as an indicator of accuracy, and that's correct.

However, the ESR uses the time domain to compare the waveform predicted by the neural network with the source signal.

There is another metric that tracks high-end accuracy: the pre-emphasized MRSTFT (multi-resolution short-time Fourier transform) loss.

Even as the ESR plateaus during training (at around 600-700 epochs), you'll see the MRSTFT still improving if you're tracking it.

Stopping at 600-700 epochs gets you 99% of the juice you can squeeze from the model; going to 1000 might help fine-tune the high end just a little bit more.
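For the curious, here's roughly what the two metrics measure; a minimal NumPy sketch (the pre-emphasis coefficient and FFT sizes are illustrative assumptions, not NAM's exact trainer settings):

```python
import numpy as np

def esr(target: np.ndarray, pred: np.ndarray) -> float:
    # Error-to-Signal Ratio: energy of the time-domain residual,
    # normalized by the energy of the target signal.
    return float(np.sum((target - pred) ** 2) / np.sum(target ** 2))

def pre_emphasis(x: np.ndarray, coef: float = 0.95) -> np.ndarray:
    # First-order high-pass y[n] = x[n] - coef * x[n-1]; tilts the spectrum
    # upward so high-frequency errors weigh more in the comparison.
    return np.concatenate(([x[0]], x[1:] - coef * x[:-1]))

def _log_mag_spec(x: np.ndarray, n_fft: int, hop: int) -> np.ndarray:
    # Windowed log-magnitude spectrogram over overlapping frames.
    frames = np.stack([x[i:i + n_fft]
                       for i in range(0, len(x) - n_fft, hop)])
    return np.log(np.abs(np.fft.rfft(frames * np.hanning(n_fft), axis=1)) + 1e-8)

def pre_emph_mrstft(target: np.ndarray, pred: np.ndarray,
                    fft_sizes=(512, 1024, 2048)) -> float:
    # Multi-resolution STFT distance on pre-emphasized signals: mean L1
    # difference between log-magnitude spectrograms at several FFT sizes.
    t, p = pre_emphasis(target), pre_emphasis(pred)
    return float(np.mean([np.mean(np.abs(_log_mag_spec(t, n, n // 4)
                                         - _log_mag_spec(p, n, n // 4)))
                          for n in fft_sizes]))
```

The point being: a residual that's tiny in energy (low ESR) can still show up clearly in the high bands of the spectrogram comparison.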
 
@ArteraDSP, ran into an issue when loading @2112's LT TV Mix 2 IR on iOS (AUM crashed, but I could hear reflections from the IR before… like a long reverb/delay). Not sure if it's a local problem or something else.

IR can be found here:

We will check this, thanks for reporting.
 
Parametric NAM is a thing: https://www.neuralampmodeler.com/post/the-first-publicly-available-parametric-neural-amp-model

It's just that, for now, it's not something too many folks have attempted. I expect we'll see more stuff like this in the next year or 2.
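The basic idea behind a parametric capture, as I understand it, is to feed the model the knob settings alongside the audio, so one model covers the whole control range instead of a single snapshot. A hand-wavy PyTorch sketch (shapes, names, and the LSTM choice are my assumptions, not the actual implementation from the link):

```python
import torch
import torch.nn as nn

class ParametricAmpModel(nn.Module):
    # Toy conditioned model: audio plus normalized knob positions
    # (e.g. gain/bass/mid/treble in [0, 1]) as extra input channels.
    def __init__(self, n_knobs: int = 4, hidden: int = 16):
        super().__init__()
        self.rnn = nn.LSTM(input_size=1 + n_knobs, hidden_size=hidden,
                           batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, audio: torch.Tensor, knobs: torch.Tensor) -> torch.Tensor:
        # audio: (batch, samples); knobs: (batch, n_knobs)
        x = audio.unsqueeze(-1)                            # (B, T, 1)
        k = knobs.unsqueeze(1).expand(-1, x.shape[1], -1)  # (B, T, K)
        out, _ = self.rnn(torch.cat([x, k], dim=-1))
        return self.head(out).squeeze(-1)                  # (B, T)

# One model, many knob settings:
model = ParametricAmpModel()
y = model(torch.randn(1, 4800), torch.tensor([[0.7, 0.5, 0.5, 0.6]]))
```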
All of our amp models are multi-channel parametric captures. We are releasing GigFast Lite V2 tomorrow, which includes 4 new parametric multi-channel amps, an extremely accurate tuner, wah, chorus, delay, and new factory cabs.
 