NAM: Neural Amp Modeler

It's a bit ironic to me that the QC could capture a whole amp pretty well, but couldn't get a simple Tube Screamer even relatively correct because of its clean blend.

Well, while I have pretty much no idea about the technology behind these things, I think even those neural-network-trained "amp sims" still rely on a certain foundation - as in "being a guitar amp", in layman's terms. So they might be able to do anything a more or less typical amp does, but once things go beyond that, it might simply be outside their habitat.
Which is why I also think all these things would have a pretty tough time capturing some truly whacko stuff, such as the aforementioned Guitar Rig patch, which would actually *raise* its drive when you turn your guitar volume down (at least at first - quite obviously that's impossible once the guitar volume approaches zero).

Too bad, really, as that's at least partially what would be interesting to me (I don't actually need gazillions of captures of whatever standard amps). Imagine a combination of two amps, one pretty clean, the other heavily overdriven, fed through a crossover so that everything in the low end stays clean and the drive only softly kicks in above a certain frequency. That'd be fantastic and (used in smaller doses) highly useful, too.
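
(For what it's worth, here's a rough Python sketch of that crossover idea, just to make it concrete - a plain tanh waveshaper stands in for the dirty amp, and the crossover frequency and filter order are arbitrary. Not how any existing capture box does it.)

import numpy as np
from scipy.signal import butter, sosfilt

def crossover_drive(x, fs, xover_hz=400.0, drive=8.0):
    # Split at xover_hz: lows stay clean, highs get overdriven, then sum.
    lp = butter(4, xover_hz, btype="low", fs=fs, output="sos")
    hp = butter(4, xover_hz, btype="high", fs=fs, output="sos")
    lows = sosfilt(lp, x)                    # clean path
    highs = np.tanh(drive * sosfilt(hp, x))  # crude stand-in for the overdriven amp
    return lows + highs / drive              # rough level compensation on the dirty path

# quick test on a dummy two-tone "guitar" signal
fs = 48000
t = np.arange(fs) / fs
guitar = 0.5 * np.sin(2 * np.pi * 110 * t) + 0.2 * np.sin(2 * np.pi * 1318 * t)
out = crossover_drive(guitar, fs)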
 
I've noticed that my tube amp doesn't go below about 0.04 ESR - that's with a real cab and a DI signal tapped in parallel.
Maybe a reactive load box is more predictable than a cab and allows for lower ESR values?

I have a negative feedback pot on the back of the amp and I set it quite low, close to a JCM800 value, so the cab resonates almost freely.
I get a much lower ESR in a much shorter time with the solid state amp, so I think the cab resonance, when paired with a tube amp, creates a time-variant response that isn't perfectly consistent throughout the whole training input signal; hence the ESR can't drop below a certain threshold due to this randomness.

I go for about 1,000 epochs; beyond that it's a flat line and a waste of energy.

[attachment: esr.png]

54 minutes and 1,000 epochs later.
[attachment: ESR final.png]
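
(For reference, as far as I understand it, the ESR the trainer reports is just the error-to-signal ratio between the reamped target and the model's prediction - roughly this, not the trainer's exact code:)

import numpy as np

def esr(target, prediction):
    # Error-to-signal ratio: residual energy divided by target energy.
    target = np.asarray(target, dtype=np.float64)
    prediction = np.asarray(prediction, dtype=np.float64)
    return np.sum((target - prediction) ** 2) / np.sum(target ** 2)

# An ESR of 0.04 means the residual carries roughly 4% of the energy
# of the real amp's output.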
 
Well, while I have pretty much no idea about the technology behind these things, I think even those neural-network-trained "amp sims" still rely on a certain foundation - as in "being a guitar amp", in layman's terms. ...
It looks like it uses WaveNet, based on the code. https://www.deepmind.com/blog/wavenet-a-generative-model-for-raw-audio

These things have nothing to do with guitar amps, tbh. If you can feed the NN inputs and outputs, it can probably learn the mapping. https://en.wikipedia.org/wiki/Universal_approximation_theorem?wprov=sfla1
 
Until you know what dBu level the capture was made at and what dBu your audio interface's instrument input corresponds to, it means nothing.
When your capture and input dBu match, attenuate from there but never boost.

Thanks, interesting. When I ran my KPA for a few years, it was clear to me that it "tracked" the Gain nicely when turning it down below the Stock Profiled amount - but when I started to increase it, all it seemed to do was add a generic, standard clean boost. This was later confirmed by CK; more so, CK also confirmed that it was the same generic clean boost being added regardless of the individual, unique amp Profile made - probably one of the reasons it lost, and still loses, the original amp's character very quickly when you start to increase the gain.

Ben
 
Well, while I have pretty much no idea about the technology behind these things, I think even those neural-network-trained "amp sims" still rely on a certain foundation - as in "being a guitar amp", in layman's terms. ... Imagine a combination of two amps, one pretty clean, the other heavily overdriven, fed through a crossover so that everything in the low end stays clean and the drive only softly kicks in above a certain frequency. ...
Neural networks don't have a concept of a "guitar amp" or "guitar pedal". You could run a kazoo into them and get something out. The component modeling vs neural network approach is basically a question of "do you want it to behave like the modeled amp or do you want it to just sound like it". The neural networks are good at the "sound like it" part.

For image processing, neural networks are getting into interesting places, like being able to turn minimal stick figures into a character drawn in a particular style - creating new art. For audio, people are overly concerned with replicating existing gear, when a future application of neural networks might be something like "input a riff from your favorite song and have it churn out a model that sounds like the guitar on that song", or giving it a prompt like "I want to sound like Yngwie and Petrucci's love child" and getting a guitar sound in that vein.

Btw, you can totally do that two-amp setup with a crossover in the Fractal Axe-Fx 3 or FM9. The problem is that guitar has a lot of overtones, so it doesn't end up being a perfect separation. A more ideal solution would involve a pickup with separate outputs for the low and high strings. Another interesting approach is to vary EQ based on a pitch follower.
 
Btw, you can totally do that two-amp setup with a crossover in the Fractal Axe-Fx 3 or FM9.

You can do that with pretty much any actual modeler (sometimes a little more trickery is involved) - but I'd rather be able to merge the entire shebang into one single, easy-to-deal-with block. This is basically one of the main reasons I'm interested in capturing tech. If it were just about any kind of "standard" amp sounds, I'd never need anything else than what I already have. Yeah, ok, I definitely wouldn't mind a bunch of accurate Dumble captures, but that's really nothing I'd necessarily pay for, because I can already get into that realm easily. It's really the things "normal" amps can't do easily that I'm interested in.
Guess I'll be able to find out for myself at some point this year (will hopefully be able to afford a new MacBook). But then, I'd still want a hardware box to do these things (as in loading captures).
 
Yeah, it is definitely using the CUDA cores; I just had to switch one of the graphs as suggested by Deadpan.



Notice the negative sign: -dBFS in the DAW.
When you feed the interface a lower voltage, you will see negative dBFS in your DAW.

The 0.5 V = -15.8 dBFS figure comes from 20*log10 relative to +12 dBu (3.084 V), i.e. 20*log(0.5/3.084) <- google that expression.
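
(If you want to check that arithmetic yourself - 0 dBu is 0.775 V RMS, and the input here is assumed to clip at +12 dBu:)

import math

DBU_REF_VOLTS = 0.775  # 0 dBu reference voltage (RMS)

def volts_to_dbfs(v_rms, full_scale_dbu=12.0):
    # dBFS reading for a given RMS voltage on an input that clips at full_scale_dbu.
    full_scale_volts = DBU_REF_VOLTS * 10 ** (full_scale_dbu / 20)  # ~3.084 V for +12 dBu
    return 20 * math.log10(v_rms / full_scale_volts)

print(volts_to_dbfs(0.5))  # ~ -15.8 dBFS
print(volts_to_dbfs(1.0))  # ~ -9.8 dBFS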
Hi again @James Freeman!! Well, see if I'm doing this right:
First, I'm a player-only user (I don't have an amp to model). That said, I calibrated my interface's gain so that it receives the maximum guitar signal without clipping. Then I fed it 1 V and adjusted the DAW to show -9.8 dBFS. Is that right?
 
That said, I calibrated my interface's gain so that it receives the maximum guitar signal without clipping.
No, keep the gain at zero, then feed 1 V and adjust in the DAW to read -9.8 dBFS before the plugin; that will calibrate your input to +12 dBu.
But then again, that means nothing if the captures are made with random reamp levels; we want the capture and the input to be at the same dBu.
 
I've noticed that my tube amp doesn't go below about 0.04 ESR - that's with a real cab and a DI signal tapped in parallel.
Maybe a reactive load box is more predictable than a cab and allows for lower ESR values?

It isn't as much about predictability (cabinets and microphones respond very predictably) as it is about time-based response. The NAM WaveNet models have a fairly small temporal receptive field. A close-mic'd cab won't have a ton of temporal response (which is why quite short IRs work), but it can have enough to make it difficult.
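
(To put a rough number on "fairly small": for a stack of dilated causal convolutions, the receptive field is just one plus the sum of (kernel_size - 1) * dilation over the layers. The dilation pattern below is made up for illustration, not NAM's actual architecture.)

def receptive_field(kernel_size, dilations):
    # Receptive field, in samples, of a stack of dilated causal convolutions.
    return 1 + sum((kernel_size - 1) * d for d in dilations)

# Hypothetical example: two stacks of dilations 1, 2, 4, ..., 512 with kernel size 3.
dilations = [2 ** i for i in range(10)] * 2
samples = receptive_field(3, dilations)
print(samples, "samples, i.e. about", round(1000 * samples / 48000, 1), "ms at 48 kHz")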

Since capturing cabinet IRs works so well, it is a much better approach to handle them that way, and only use NAM to capture the amplifier.
 
Hi Mike, good to see you here.
I don't think I said anything about mic'ing a cab. I said I use my cab as a load and tap the electrical signal in parallel with it.
Since tube amps have a low damping factor, the cab resonates acoustically in different ways depending on what I've played a moment earlier; hence the electrical signal I capture has an unpredictable time-based response.
 
I don't think I said anything about mic'ing a cab. I said I use my cab as a load and tap the electrical signal in parallel with it.

Got it - didn't read your message correctly the first time. If you haven't already, it might be interesting to null test two separate captures to see how much variation you get.
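
(Something like this would do it offline, assuming the two passes are already sample-aligned - the file names are placeholders:)

import numpy as np
import soundfile as sf

a, fs = sf.read("capture_take1.wav")  # two separate captures of the same pass
b, _ = sf.read("capture_take2.wav")
n = min(len(a), len(b))
residual = a[:n] - b[:n]

null_db = 10 * np.log10(np.sum(residual ** 2) / np.sum(a[:n] ** 2))
print(f"null depth: {null_db:.1f} dB relative to take 1")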
 
I've noticed that my tube amp doesn't go below about 0.04 ESR - that's with a real cab and a DI signal tapped in parallel.
Maybe a reactive load box is more predictable than a cab and allows for lower ESR values?

Using extra guitar in the v1_1_1 file can be really helpful for this. Which amp/trainer (GUI)? I've been able to get down to the 0.009-0.02 range with my JCM800 at MV set to 6 using a DI box and cab, but I think this is definitely dependent on a lot of factors. Some amps just don't seem to want to go that low no matter what. Have you tried using a reamp box for comparison?

And also, look at the final plot / compare the results audibly. As we established, ESR isn't an end-all number.

Edit: also welcome Mike!
 
If you haven't already, it might be interesting to null test two separate captures to see how much variation you get.
I suppose I can try, but so far the results are great both audibly and visually in a frequency analyzer.

JCM800 at MV set to 6 using a DI box and cab
That must be excruciatingly loud.
The cab is in the room with you?

And also, look at the final plot / compare the results audibly. As we established, ESR isn't an end-all number.
The end result is fantastic.
I've played, recorded, and compared; they sound and feel identical to me.
 
I suppose I can try, but so far the results are great both audibly and visually in a frequency analyzer.

Agreed that at the end of the day, "if it sounds good, it is good". Still, I think it is an interesting exercise to see how close we can get, and try to understand what the limiting factors are.

Another thing to do would be to line up your input and capture files, and see how much longer it takes your capture to fall to the noise floor.
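
(One crude way to put a number on that, assuming aligned mono files exported from the DAW - the window size, the file name, and the "last second is silence" guess are just placeholders to adapt:)

import numpy as np
import soundfile as sf

def rms_envelope(x, fs, win_ms=20.0):
    # Short-window RMS envelope in dB (mono signal assumed).
    win = max(1, int(fs * win_ms / 1000))
    frames = x[: len(x) // win * win].reshape(-1, win)
    return 10 * np.log10(np.mean(frames ** 2, axis=1) + 1e-12)

x, fs = sf.read("capture.wav")                 # placeholder file name
env = rms_envelope(x, fs)
noise_floor = np.median(env[-50:])             # assume the last ~1 s is silence
active = np.where(env > noise_floor + 3.0)[0]  # frames more than 3 dB above the floor
if len(active):
    print(f"drops to the noise floor around {active[-1] * 0.02:.2f} s")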
 
That must be excruciatingly loud.
The cab is in the room with you?

It was - I had isolating earplugs, with construction-grade ear protectors over them, and it was still way too loud for comfort! I actually felt pretty physically rattled afterwards, hehe. You can find them in here - the ones labeled mesaOS are from the DI box, and the ones labeled suhr are from the load box. I intentionally did it really, really loudly to make sure I got the biggest difference I could.

A fairly big difference in this case, but that isn't always so - I think it's because I pushed it really hard. Also, sorry about the mismatched volume levels - this was a while back, before I made my process more uniform.


It's the same one in my profile pic:

[image attachment]
 
Agreed that at the end of the day, "if it sounds good, it is good". Still, I think it is an interesting exercise to see how close we can get, and try to understand what the limiting factors are.

Another thing to do would be to line up your input and capture files, and see how much longer it takes your capture to fall to the noise floor.
It would be cool just to see what the irreducible error is; obviously the noise power is the limit.
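
(One rough way to estimate that floor: take the noise power from a silent stretch of the reamped capture and divide by the signal power of the rest - since the model can't predict the noise, that ratio is roughly the best ESR you could ever hit. The file name and the "first second is silence" assumption are placeholders:)

import numpy as np
import soundfile as sf

y, fs = sf.read("reamped_capture.wav")  # placeholder file name, mono

silence = y[:fs]  # assume the first second is noise only
signal = y[fs:]

noise_power = np.mean(silence ** 2)
signal_power = np.mean(signal ** 2)
print(f"noise-limited ESR floor ~= {noise_power / signal_power:.2e}")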
 
Okay - here's a little example of NAM capturing parallel clean/overdriven tracks. For one, I combined two amp captures (a Lee Jackson GP1000 and a clean amp), and for the other I just used the Darkglass B7K plugin with its dry blend on. Null test included for each!

In case it's too hard to read: it goes Source -> NAM -> Null for the amps (purple) and then the DG (pink).

PS - ignore the tone - this is only to demonstrate the blended capture aspect, not to sound like anything.

 
Okay - here's a little example of NAM capturing parallel clean/overdriven tracks. ... Null test included for each!

Is it just me, or does it sound like there is dry signal bleeding through all of those clips except the nulls? :idk

I will say the nulls are impressive.
 