NAM: Neural Amp Modeler

Personally I actually like it when people capture a lot of random stuff. There's a difference between doing a capture poorly and doing a capture of things that don't have the tones everyone wants - and the latter I really, actually enjoy. Seeing stuff like practice amps, tape players, pedals, old/dated gear (POD HD?) is honestly really cool to me. It doesn't really matter if it's something I'd call "good" or anything. The main thing that puts me off is poor accuracy due to noise/technique/hardware/gain staging. Otherwise, give me every odd profile you can.
 
That's how you end up with 10,000 captures where 9,900 of them sound like a skunk's ass.
Reputation and known names are everything when it comes to captures from other people.
But who decides what a "known name" is? I don't like MBritt stuff, and I don't like a lot of stuff from a lot of "known names" ¯\_(ツ)_/¯ Or I like this profile or that profile from someone well known but certainly don't like everything.
 
I don't know, maybe some kind of weighted voting system that gets more accurate over time as people try different captures and vote?
Something that will ease the choice for old and new users in the future when we have 100,000 captures.

Think of it like music, people know what good music is and it's easy to suggest some bands and albums over the others.

edit:
Maybe even a comment section for each capture so people can say what their experience was with that capture/pack.
 
I was reading the GPT-4 paper the other day and thinking about the hybrid modeling approach people have proposed around here. I think it would be possible to embed the amp settings and train the model with that embedding as an input, similar to how GPT-4 uses a prompt. The big hurdles would be generating the training data, training time, and model size/capacity, but it seems like an approach that could give a full NN-based amp sim.
 

Isn't this what NDSP does? And/or the parametric modeling already in NAM (though not really user-accessible at this point)? I don't speak ML, but it sounds familiar. My other thought on hybrid modeling is to use ML for some parts of a traditional model, but I guess it would be pretty DSP-heavy then.
 
No clue what NDSP does. Right now a capture is static for one set of amp parameters. Pre and post EQ add some flexibility, but this would be training a model where the amp knob settings are an extra input during training. So you would have to find an efficient way to generate all that training data. My guess is that you wouldn't need the full 3-minute training set for each combination of knob settings. Then each amp model would have realistic knobs to adjust the amp parameters. It would be a full NN-based sim.
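To make the idea concrete, here's a minimal sketch of that kind of conditioning - not NAM's actual architecture, just a toy PyTorch example where the knob settings are broadcast across time and fed in as extra input channels alongside the DI audio:

import torch
import torch.nn as nn

class KnobConditionedAmpModel(nn.Module):
    """Toy knob-conditioned amp model (illustration only, not NAM's network).

    The knob vector (e.g. gain/bass/mid/treble/presence/depth/master,
    scaled to 0..1) is repeated across time and concatenated with the audio
    as extra input channels, so one trained model covers many knob settings.
    """

    def __init__(self, n_knobs: int = 7, channels: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1 + n_knobs, channels, kernel_size=3, padding=1),
            nn.Tanh(),
            nn.Conv1d(channels, channels, kernel_size=3, padding=1),
            nn.Tanh(),
            nn.Conv1d(channels, 1, kernel_size=1),
        )

    def forward(self, audio: torch.Tensor, knobs: torch.Tensor) -> torch.Tensor:
        # audio: (batch, 1, samples); knobs: (batch, n_knobs)
        knob_channels = knobs.unsqueeze(-1).expand(-1, -1, audio.shape[-1])
        return self.net(torch.cat([audio, knob_channels], dim=1))

# One training example: a DI clip plus the knob settings it was reamped with.
model = KnobConditionedAmpModel()
di = torch.randn(1, 1, 48000)                                # 1 s of DI at 48 kHz
knobs = torch.tensor([[0.7, 0.5, 0.4, 0.6, 0.5, 0.8, 1.0]])  # hypothetical settings
prediction = model(di, knobs)  # compared against the reamped audio in the loss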
 
I don't know, maybe some kind of weighted voting system that gets more accurate over time as people try different captures and vote?
Something that will ease the choice for old and new users in the future when we have 100,000 captures.

Think of it like music, people know what good music is and it's easy to suggest some bands and albums over the others.

edit:
Maybe even a comment section for each capture so people can say what their experience was with that capture/pack.
I dunno, I never did well with decision by committee. Might be because I'm usually the odd man out, in that I don't want to sound like what the guys trying to channel Nuno, for example, want to sound like.
 
No clue what NDSP does. Right now a capture is static for one set of amp parameters. Pre and post EQ add some flexibility, but this would be training a model where the amp knob settings are an extra input during training. So you would have to find an efficient way to generate all that training data. My guess is that you wouldn't need the full 3-minute training set for each combination of knob settings. Then each amp model would have realistic knobs to adjust the amp parameters. It would be a full NN-based sim.

Yes, sorry - I meant the QC models they have, not the captures end users make. But also, there's really no way to know what's happening there, just what was rumored.
 
This was discussed somewhat recently in a different thread, but the issue would be:
If we have 7 knobs on an amp and want to generate data for 10 positions per knob, that's 10^7 permutations.
If a capture signal takes 1 minute, that's roughly 7,000 days.
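For anyone who wants to check the arithmetic, that's simply:

knobs = 7
positions_per_knob = 10
minutes_per_capture = 1

combinations = positions_per_knob ** knobs        # 10,000,000 knob settings
total_minutes = combinations * minutes_per_capture
print(total_minutes / 60 / 24)                    # ~6944 days of continuous recording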

In interviews, NDSP claims to have an automated process that pops out amp models in half a day: a robot turns the amp knobs and records training data for a few hours, then a few hours of model training, and bam, you have an amp model... supposedly. But I don't see how that's at all possible given the permutation problem above. And the proof is in the pudding: NDSP is not prolifically popping out amp models; their output is rather slow...

Edit: here ya go
 
Certainly a lot, if you brute force it. But NNs also generalize; I'm guessing you could significantly reduce the number of knob positions you collect, and not have to do a full reamp to get the results.
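To sketch what that could look like (the numbers here are completely made up, just illustrating the idea of sampling the knob space instead of gridding it):

import numpy as np

rng = np.random.default_rng(seed=0)

n_knobs = 7
n_samples = 2000   # hypothetical budget: ~2,000 combinations instead of 10^7

# Random knob settings in [0, 1]; the robot (or a patient human) dials each
# combination in, plays the capture signal through the amp, and records it.
knob_settings = rng.random((n_samples, n_knobs))

# At 1 minute of capture signal per combination:
print(f"{n_samples} combinations -> {n_samples / 60:.1f} hours of reamping")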
 
Hello everyone, I have created a few NAM profiles to use and share and will be adding more stuff in the future (new site, just getting started really). Thanks Steve for making this all possible. I don't do Facebook, so if any of you would like, feel free to drop some links there and share this stuff, thanks.

 
Just put up a new model of a Boss Metal Zone MT-2 pedal set as a clean boost pedal.

Also, I have some NAM-related questions if anyone out there knows the answers.

A while ago someone uploaded a Mesa Mark V with Celestion Greenback 25s. It seems the IR was "baked" into the capture somehow. How can this be done? And is there a way to extract that IR from the file?

Is there a link to how to do offline NAM training? I would be interested. Thanks.
 

I did this little step-by-step guide, which sums up the procedure described in the video above:
Installation:

1. Download and install Git from:
https://git-scm.com/download/win

2. Download and install Anaconda from:
https://www.anaconda.com/products/distribution

3. Download and install CUDA Toolkit 11.6 from:
https://developer.nvidia.com/cuda-11-6-0-download-archive

4. Create a folder called "NAM" on your desktop

5. Open Anaconda Prompt and clone the NAM repository by typing this in the prompt:
# cd %userprofile%\OneDrive\Desktop\NAM
# git clone https://github.com/sdatkinson/neural-amp-modeler.git

6. Install Pytorch
# conda install pytorch pytorch-cuda=11.6 -c pytorch -c nvidia

7. Install NAM
# cd %userprofile%\OneDrive\Desktop\NAM\neural-amp-modeler
# pip install -e .


Training:

1. Download the test signal audio file from here:

Otherwise, create your own and rename it "v1_1_1.wav"

2. Record the output of your amp/pedal by feeding the test audio file to its input, then export what you recorded and save it as "output.wav"

3. Move both wav files to this folder:
%userprofile%\OneDrive\Desktop\NAM\neural-amp-modeler

4. Open Anaconda Prompt and run Python
# cd %userprofile%\OneDrive\Desktop\NAM\neural-amp-modeler
# python

5. Import torch
# import torch

6. Import the NAM "run" module
# from nam.train.colab import run

7. Run the training
# run(your_epoch_value)

8. Close the graph to start the training
To see the progress graph open another instance of Anaconda Prompt and type:
# cd %userprofile%\OneDrive\Desktop\NAM\neural-amp-modeler
# tensorboard --logdir lightning_logs
Then open the browser and go to this address:
http://localhost:6006

9. Once the training is finished, close the final ESR graph window so the model is exported to this directory:
%userprofile%\OneDrive\Desktop\NAM\neural-amp-modeler\exported_models

10. Load the model file in the NAM VST plugin or standalone app which can be downloaded from here:
https://github.com/sdatkinson/NeuralAmpModelerPlugin/releases
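If you'd rather not retype the interactive commands each time, the same training call can also live in a tiny script (the file name and epoch count are just placeholders) saved inside the neural-amp-modeler folder and run with "python train_nam.py" from the Anaconda Prompt:

# train_nam.py - run from the neural-amp-modeler folder, with v1_1_1.wav and
# output.wav already in place as described in steps 1-3 above.
from nam.train.colab import run

if __name__ == "__main__":
    run(100)  # placeholder epoch value; use whatever you'd pass to run() above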
 

Nice - few things to note:

  • you can run the GUI version by typing "nam" from the conda env instead of doing it via Python
  • v1_1_1.wav specifically has two blips at the beginning for time alignment. If you use your own file instead, you'll have to specify the delay offset manually. The best way to do it is to add your own DI in the middle of v1_1_1 and leave the beginning and end intact (delay and test) - there's a rough splice sketch after this list. You'd also need to slightly modify core.py to do this with the GUI one, which is version-locked to the official files at the moment.
  • you can also choose "feather, lite, standard" model-size options from the GUI and easy Colab. The differences are just in the channels/head size in the config.
  • you can also batch train from the GUI by selecting multiple output files that will run in a row, and use 'silent' to skip having to close the graphs
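Here's a rough idea of the splice mentioned in the second bullet - reading the official file, dropping your own DI into the middle, and keeping the head and tail untouched (the 10-second head/tail lengths are just placeholders, check where the blips and validation segment actually sit in your copy):

import numpy as np
import soundfile as sf

test, sr = sf.read("v1_1_1.wav")
di, di_sr = sf.read("my_di.wav")          # hypothetical: your own DI recording
assert sr == di_sr, "resample the DI to match the test signal first"

head = test[: 10 * sr]                    # keep the alignment blips at the start
tail = test[-10 * sr :]                   # keep the validation segment at the end
spliced = np.concatenate([head, di, tail])

sf.write("v1_1_1_custom.wav", spliced, sr)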

If you're lazy like me, you can use these batch files I made to launch the GUI and logging just from an icon. The second one is relative-path based, so you'd want to put it in your output folder.
 