NAM: Neural Amp Modeler

Yeah, it is using them. Just wondering if that attribute speeds things up, because with my 4090 here I'd be expecting faster results than a 3060 Ti based on the CUDA core count comparison, and I'm just not seeing it. Not with NAM, nor with ToneX.
 
So the 4090 has 16,384 CUDA cores.
The 3060 Ti has 4,864 CUDA cores.

I'd expect a pretty drastic computation time difference based on that alone, to be honest, which is indeed what I see when I do renders in Blender using CUDA.

I'm thinking the underlying TensorFlow or PyTorch libraries that NAM and ToneX are using have some fundamental bug in the way they allocate jobs to the CUDA cores.
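One way to sanity-check that would be a raw-GPU benchmark outside of NAM. The snippet below is just an illustrative PyTorch timing loop, assuming a CUDA build of PyTorch is installed; if a big matmul runs much faster on the 4090 while NAM epochs don't, the bottleneck is probably per-step overhead and data loading on a very small model rather than the CUDA cores themselves.

```python
import time

import torch

# Illustrative raw-throughput check (assumes a CUDA build of PyTorch).
# Big fp32 matmuls should scale roughly with the card's compute resources,
# unlike NAM's comparatively tiny per-batch workload.
assert torch.cuda.is_available(), "No CUDA device visible to PyTorch"
device = torch.device("cuda")

a = torch.randn(8192, 8192, device=device)
b = torch.randn(8192, 8192, device=device)

torch.cuda.synchronize()
start = time.time()
for _ in range(50):
    c = a @ b
torch.cuda.synchronize()
print(f"50 matmuls took {time.time() - start:.2f} s on {torch.cuda.get_device_name(0)}")
```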
 
Do you have hardware-accelerated GPU scheduling turned on? If I turn it off I get much better times.
 
Hmmm, no real difference here to be honest. I'm getting 100 epochs every 3 minutes, roughly speaking. So I think it's still gonna take about 30 minutes for 1000 epochs, which is what it took with scheduling turned on.
 
Apologies, I was referring to ToneX.
 
How dare you.

Actually, I might've seen some speed-up with NAM. I'm up to 625 epochs within 13 minutes. So maybe there's something to the hardware scheduling thing. I'll let this play out and see how long it takes.
 
Yeah okay, I think that did speed it up. 1000 epochs done in 22 minutes. So I've saved 7 minutes or so. There might even be some driver-level tweaks I can do in the NVIDIA Control Panel. I'll check it later. Can't be arsed to do another ToneX one right now!
 
@northern_fox

When training with NAM I see this message:


Where does that variable get set, and does changing it speed up training times?

This may answer your question?


One thing to keep in mind is that the WaveNet training process is usually used for substantially bigger problems, so Steve has optimized it so that it works well for a smaller model. Sometimes that isn't the same thing as what would normally be best, and it causes some of these messages too (not sure about this one).
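If the message in question is PyTorch Lightning's Tensor Core hint (just my guess, since the message itself didn't come through above), the setting it refers to is PyTorch's fp32 matmul precision, and you set it yourself near the top of the training script rather than in a config file. A minimal sketch; for a model as small as NAM's it may not change epoch times much:

```python
import torch

# Put this before training starts. "high"/"medium" let matmuls use TF32
# Tensor Core paths on Ampere/Ada cards (and silence the hint);
# "highest" keeps full fp32 precision.
torch.set_float32_matmul_precision("high")
```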
 
Only as good as the captures themselves.
Input gain is still random from user to user and a lot of captures are made with cranked amps into load boxes that sound like shit.
I've just been focusing on the top voted ones on Tone Hunt. So far so good imo.

But yeah with any profiles/captures/etc YMMV
 
^^ This ^^ ... same issue with a fu%ck-ton of ToneX captures ... what makes it worse is that those same people also seem to have little to no idea how to dial in an amp :(

Man, there are stacks of absolutely sh$tful shockers out there, both in ToneX land and NAM land.

Ben
 
The one I did yesterday of my recto sounds fan-fucking-tastic. I used my Egnater Tourmaster V30 4x12 cab as the load, and it definitely sounds better than a loadbox. I had a 0.008xxx ESR at the end of it. 1000 epochs. It did take a while to train, yes. Quicker than ToneX though.

I had a thought about using the DAW to switch channels on the amp: basically just have a long project that sends a program change (PC) to the amp, records the capture WAV, sends the next PC, records the next take, and so on.
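You could even script that loop outside the DAW. Here's a rough Python sketch of the same idea using the mido and sounddevice/soundfile packages; the MIDI port name, program numbers, and file names in it are all made up for illustration:

```python
import time

import mido
import sounddevice as sd
import soundfile as sf

# All names below are hypothetical examples: adjust the MIDI port, program
# numbers, capture signal file, and output paths to your own setup.
MIDI_PORT = "USB MIDI Interface"
CHANNEL_PROGRAMS = [0, 1, 2]               # amp channel presets to step through
CAPTURE_SIGNAL = "nam_capture_signal.wav"  # the test signal you'd reamp
SAMPLE_RATE = 48000

signal, fs = sf.read(CAPTURE_SIGNAL, dtype="float32")
assert fs == SAMPLE_RATE, "Capture signal sample rate doesn't match"

with mido.open_output(MIDI_PORT) as port:
    for program in CHANNEL_PROGRAMS:
        # Switch the amp channel, give it a moment to settle, then play the
        # capture signal while recording the amp's output in one pass.
        port.send(mido.Message("program_change", program=program))
        time.sleep(1.0)
        take = sd.playrec(signal, samplerate=SAMPLE_RATE, channels=1)
        sd.wait()
        sf.write(f"capture_program_{program}.wav", take, SAMPLE_RATE)
```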
 