Capture your gear for free with TONEZONE3000

Honestly, I don't know 😅
I've also tried a bit to make it work locally, but I haven't found where to tell the trainer to disable the checks on the input file. The message you get is a bit strange indeed; in my case it told me it was the wrong input file.

The full version of the trainer should allow any type of input, but the instructions aren't clear on where to put the config files, so I gave up on that too.
Ah, that sucks... I'm also getting an error message in the terminal window saying that the input file does not match any known standard input file, but the GUI gives me a slightly more detailed message:

[Attached screenshot: input_error.jpg]


I assume it has to do with the hash codes above; the new training file would need new hashes to correctly identify the start and end points.
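For intuition, a check like the one the trainer appears to be doing can be sketched as below. Everything here is hypothetical (the hash algorithm, the table contents, the file names); it just illustrates why a custom reamp file would fail the lookup:

```python
import hashlib

# Hypothetical lookup table; the real trainer ships its own hashes for
# the standard input files (v1, v2, v3, ...). This digest is made up.
KNOWN_INPUTS = {
    "0123456789abcdef0123456789abcdef": "v3_0_0.wav",
}

def identify_input(path):
    """Return the name of the known standard input matching `path`, or None."""
    with open(path, "rb") as f:
        digest = hashlib.md5(f.read()).hexdigest()
    return KNOWN_INPUTS.get(digest)
```

A custom training file hashes to a value that isn't in the table, so the lookup fails and the trainer can't locate its start/end markers, hence the "unknown input file" error.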

Anyway, thanks for the low-aliasing training signal! If only TZ3000 offered hyper accuracy training capabilities, I wouldn't even need to train the output clips locally.
 
Tonezone now offers the ability to use custom architectures, so maybe you can do Hyper Accuracy ones too... Anyway, as pointed out in an earlier post, xSTD offers pretty much the same accuracy as Hyper Accuracy captures (at least based on the values Andrei has shown), but the models use much less CPU.
 
Thanks for including a custom architecture option! Unfortunately, I'm still not able to use my hyper accuracy model in custom mode as it exceeds the allowed architecture size.

I totally understand this limitation, as it would take your servers almost triple the time of a standard capture to generate a single hyper accuracy capture, but it would be nice to have an option to purchase extra server training time for the more complex models.

In any case, it's a great product as it is, even if the bigger architectures remain off-limits.
Did you try the super inputs shared on the Facebook group?
I tried the last one, Super Cobra, and it's eradicating all aliasing. It makes a huge improvement when stacking NAM (NAM pedal into NAM amp).
 

I'm waiting for it to go "Gold" so I can splice this into the signal when reamping:

 
I've compared the standard, xSTD, and hyper accuracy captures, and I still prefer the hyper accuracy captures with output clips based on the V3 test signal. We're not talking about anything earth-shattering here, but it's still 0.002 (hyper accuracy) vs 0.0047 (xSTD). I'd love to train hyper accuracy captures with your latest low-aliasing test signal, but it's currently impossible on TZ3000 as it exceeds their architecture size limit, and locally there's still the error message I posted above.

You can, for example, change the channel size to 32, which I use for hyper accuracy, and you'll see that TZ3000 gives you an error message. Training 90k parameters instead of 13k or so is probably too much of a burden on the servers if everybody were to train their files like that.
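For intuition on why the parameter count balloons: in convolutional layers, the parameter count grows roughly with the square of the channel count, so doubling the channels alone roughly quadruples the parameters. A toy counter (the layer count and kernel size here are arbitrary illustrations, not NAM's actual WaveNet configuration):

```python
def conv_params(channels, layers=10, kernel=3):
    """Rough parameter count for a stack of 1-D conv layers where every
    layer maps `channels` -> `channels`, with one bias per filter."""
    per_layer = channels * channels * kernel + channels
    return layers * per_layer

print(conv_params(16))  # 7840
print(conv_params(32))  # 31040 -- roughly 4x from doubling the channels
```

The 13k-to-90k jump quoted above is bigger than 4x, so the hyper accuracy preset presumably also changes depth or dilations, but channel width alone already dominates the growth.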

Have you tried the Super Cobra test signal? Is it much better aliasing-wise than the TTSv10? Also, is there any talk of integrating the low-aliasing test signals into the local NAM trainer?
 
If the numbers you mention are ESR, that means a 0.2% error-to-signal ratio vs 0.47%, which translated to dB is something like -54 dB vs -47 dB. I don't think that's really audible... and the tables Andrei shared about his tests didn't show these differences, so I don't know.
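For reference, the dB figures above follow from treating the ratio as an amplitude ratio (20·log10):

```python
import math

def ratio_to_db(r):
    """Convert a ratio to dB, treating it as an amplitude ratio (20*log10)."""
    return 20 * math.log10(r)

print(round(ratio_to_db(0.002), 1))   # -54.0 (hyper accuracy)
print(round(ratio_to_db(0.0047), 1))  # -46.6 (xSTD)
```

Note that ESR is sometimes defined as a power-like quantity, in which case 10·log10 would apply and the absolute numbers would differ; the figures in the post use the amplitude convention.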

Yeah, tonezone might have some limits to avoid overloading the remote machines.

I still haven't had time to try the "Super Cobra"; it should have slightly better aliasing performance, but at the expense of much longer training times.

Anyway, a couple of days ago I ran some tests comparing the amount of aliasing produced by models using the TTSv10 and 3D inputs against a plugin that allows changing the oversampling (Emissary). Basically, the aliasing level is equivalent to that produced by 4x oversampling at 48 kHz... not bad, but still quite far from the 32x (I assume) of an Axe-Fx III.
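The kind of test described above can be sketched in a few lines: drive a stand-in nonlinearity (a plain tanh here, not Emissary or a NAM model) with a sine, and compare the energy at a frequency where a folded-back harmonic lands, with and without oversampling. The drive level and frequencies are arbitrary illustrations:

```python
import numpy as np

fs, f0, n = 48_000, 5_000, 48_000          # 1 s of a 5 kHz sine at 48 kHz
drive = lambda x: np.tanh(4 * x)           # stand-in saturator

# Base rate: tanh creates odd harmonics (5, 15, 25, 35 kHz, ...);
# the 25 kHz one exceeds Nyquist and folds back to 48 - 25 = 23 kHz.
y_base = drive(np.sin(2 * np.pi * f0 * np.arange(n) / fs))

# 8x oversampled: distort at 384 kHz, low-pass below the target rate's
# Nyquist (an ideal FFT brick-wall here), then decimate back to 48 kHz.
y_hi = drive(np.sin(2 * np.pi * f0 * np.arange(n * 8) / (fs * 8)))
spec = np.fft.rfft(y_hi)
spec[fs // 2:] = 0                         # 1 Hz bins: remove >= 24 kHz
y_os = np.fft.irfft(spec)[::8]

def level_at(y, f_hz):
    """FFT magnitude at f_hz (the 1 s window gives 1 Hz bins)."""
    return np.abs(np.fft.rfft(y))[f_hz]

# The aliased product at 23 kHz is strong at base rate, gone oversampled.
print(level_at(y_base, 23_000) > 1000 * level_at(y_os, 23_000))  # True
```

A real measurement would use the actual plugin or model in place of the tanh, and sweep the input frequency rather than testing a single tone, but the principle is the same.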
 
 
Is there some new de facto standard people are using with all this stuff, or is everyone still pretty much sticking to "Standard" and "Best Fit" when it comes to NAM captures in TZ? I'm running some standard ones now, but if there's something more worth checking out, I can give it a go as well.
 
I'm still rolling with the standard one; I've been experimenting a lot but always come back to the vanilla one.
 
I made a test with an Axe-Fx 3, and it shows more aliasing than the Friedman NAM it's supposed to match (made with the Super 3D input recently shared on the group). AF3 fried for sure ;)
What's sure is that I won't go back to the standard input, as I spend more time finding the amp, convincing the owner to let me reamp it, and finding the best settings than the extra 10 minutes of recording it needs. And since I train all of them on tonezone3000, I don't care about the extra compute time.
 
Would you share the graphs and setup of this comparison? Because that seems quite unlikely to me, honestly; it's more likely you didn't properly match the gains or didn't compensate for the different input level calibration... No offense if I don't trust you, but I've done a lot of aliasing tests lately 😅
 
I had some time on my hands today, a new guitar AND some beer and whisky so, inevitably, I made a couple of captures. There are two captures of the same amp with the same settings (for a single coil strat) into the same load. The only difference is one was done with the V3 input and standard settings and the other with DLC86's TTS v10 and the suggested settings from this thread. To me, right now, there's an enormous difference and I absolutely love the TTS one, but, as I said, there's alcohol involved :LOL:

https://tonehunt.org/Humbug/701192e6-4f16-4eac-9feb-6e7770125083
 
Yes, I also found better accuracy with the TTS input when I did a null test with the web app they provided on Facebook in their latest post (the dude doesn't know how to export an HTML5 page so it opens directly in a web browser, but he was smart enough to find the aliasing trick; that's funny!)
 
Well, to be fair, those are things that require knowledge in two different fields... But what do you mean? Wasn't it an .html file?
 
Replying to myself as the lonely, egomaniacal person that I am: Now, _without_ alcohol involved, the TTS capture is still clearer in the top end and closer to what the actual amp sounds like than the standard V3 capture. Good job, @DLC86 !

I am definitely no rocket surgeon (I dropped out of our equivalent of high school twice), so forgive me if I'm asking stupid questions: is it the long sweeps at the end of the reamp file that are doing the heavy lifting, so to speak? If so, could this bit be condensed and massaged to replace the sweeps already present in the V3 file?

I have tried to find information in the Facebook group, but I find it a bit difficult to follow with all the super duper viper cobra etc. variations :LOL:
 
No, because how good the aliasing reduction is also depends on the length of the sweeps section in the file, as well as on the length and level of each sweep. Basically, the neural network needs as much information as possible to reduce aliasing properly.
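For context, the sweeps in these test signals are typically exponential (log-frequency) sine sweeps; one can be generated like this. The frequency range and length here are arbitrary choices, not the actual test-signal values:

```python
import numpy as np

def log_sweep(f_start, f_end, seconds, fs=48_000):
    """Exponential (log-frequency) sine sweep from f_start to f_end Hz."""
    t = np.arange(int(seconds * fs)) / fs
    k = np.log(f_end / f_start)
    # Phase is the integral of the exponentially rising instantaneous
    # frequency f(t) = f_start * exp(k * t / seconds).
    phase = 2 * np.pi * f_start * seconds / k * (np.exp(k * t / seconds) - 1)
    return np.sin(phase)

sweep = log_sweep(20, 20_000, 5.0)  # 5 s, 20 Hz up to 20 kHz
```

Longer sweeps (and sweeps repeated at several levels) expose more of the device's high-frequency, level-dependent behavior, which matches the point above about the network needing as much information as possible.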
 
I see. I think :LOL:
 

You'll want to use nam-full and specify the file paths on the command line (https://neural-amp-modeler.readthedocs.io/en/latest/tutorials/full.html). The full trainer isn't user-friendly in the same way, but once you figure it out it's not too bad, and you can do some more flexible things if you get deeper into it (e.g., I worked it into a scripted flow I have, and it's now even lower effort than using the GUI).
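For anyone else hunting for where the config files go: per the linked tutorial, the full trainer doesn't search a fixed location; you pass the paths as arguments. Roughly like this (the file names are examples, and the exact argument order and JSON schemas are in the docs above, so treat this as a sketch, not gospel):

```shell
# Hypothetical file names; see the linked tutorial for the JSON schemas.
nam-full data_config.json model_config.json learning_config.json outputs/
```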
 