Capture your gear for free with TONEZONE3000

As far as the wet/dry training goes, is there an ideal length of time to record and/or something specific to play to get the most accurate results? I’ve been recording around a minute of chords and single notes all over the neck and getting pretty good results, but I keep wondering if there’s something I could play that would optimize the training.
 
The more diversified the data is, the better.
Steve's original training signal is about 3 minutes long but has a ton of diversity and, very importantly, dynamic range (low-volume playing, hard hitting, etc.).
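
If you want to sanity-check your own recording before training, here's a rough sketch (assuming numpy and soundfile are installed; the filename and the 0.5 s window are arbitrary placeholders) that reports the RMS spread across the clip:

Python:
    # Rough dynamic-range check for a training recording (sketch, not an official tool).
    import numpy as np
    import soundfile as sf

    audio, sr = sf.read("my_training_signal.wav")
    if audio.ndim > 1:
        audio = audio.mean(axis=1)  # fold stereo to mono

    window = sr // 2  # 0.5-second analysis windows
    rms = np.array([
        np.sqrt(np.mean(audio[i:i + window] ** 2))
        for i in range(0, len(audio) - window, window)
    ])
    rms = rms[rms > 0]  # skip silent windows

    spread = 20 * np.log10(rms.max() / rms.min())
    print(f"RMS spread across windows: {spread:.1f} dB")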
 
Totally agree with @2dor's recommendation. Steve Atkinson, the inventor of Neural Amp Modeler, has a good tutorial on his blog: https://www.neuralampmodeler.com/post/tonezone3000-training-made-simple
 
So what if I only want to make DI NAM captures?
You can run the amp into a loadbox and use that signal as the WET. Or put a DI box between the amp and cab (something like the Behringer ULTRA-G GI100) and tap the DI signal off its XLR output.
 
Amazing! Who is paying the hosting costs for the use of the 4090s? That is some serious time/electricity on the server side.

Soooo I posted this and closed my computer, and was shocked by all the comments! Thank you so much for the kind words and feedback. Here are the answers to hopefully most of the questions:

Background: TONEZONE3000 was co-founded by me and my childhood friend Woody. We played in a band together and studied engineering at Georgia Tech. Our goal is to make capturing tones easy, accessible, and free :)

Funding: We are a part of a VC backed studio which allows access to some GPU resources. If TONEZONE3000 becomes a thing, we'll need to create a business plan (maybe a pro-tier?). Would love to hear what features you’d consider paying for! For now, everything is totally free!

GPUs: Your tones/models are trained in the cloud on RTX 4090s. We partnered with a handful of GPU providers for this.

File Storage: Your files are securely stored on Amazon S3, accessible only to you and those you share the link with. They are not visible to other TONEZONE3000 users, and we do not use them for any other purposes.

Privacy Policy / Terms: We launched TONEZONE3000 four weeks ago, without knowing if anyone would use it, so we didn't prioritize a privacy policy or even an "About" page lmao. Now we're working on a privacy policy and terms. Here's the current draft; please let us know your feedback!!!

1. You own your models and audio files. TONEZONE3000 does not own them and does not have the right to sell them.
2. TONEZONE3000 may use the data for research and to improve our services.

One example of this is our new feature called "Best Fit." With "Best Fit," training reaches the same ESR in less time without sacrificing quality. You can read more about it here: https://tonezone3000.com/best-fit
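
For anyone unfamiliar, ESR is the error-to-signal ratio NAM uses to score how closely a model's output matches the target recording; lower is better. As a rough sketch (assuming numpy and two equal-length arrays), it boils down to:

Python:
    # Error-to-signal ratio: 0 means a perfect match (sketch of the basic form).
    import numpy as np

    def esr(target: np.ndarray, prediction: np.ndarray) -> float:
        return float(np.sum((target - prediction) ** 2) / np.sum(target ** 2))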
 
Launched a bunch of updates to TONEZONE3000 today. Thanks to everyone who gave us feedback/ideas :)
  • Share to ToneHunt: With a single click, TONEZONE3000 can now automatically upload your tone to ToneHunt. If you're unfamiliar, ToneHunt is a community for sharing and discovering tones. More exciting news next week 👀
  • Edit Tone: Easily update the gear type, name, and description.
  • Improved Tone Details: Tone pages now display training type (Sweep or Dry/Wet) and the model's size/architecture.
  • About Page: The usual plus some FAQs to help new users get started with capturing their tones!
  • Privacy & Terms: Bottom line: you own the files you upload and the models you create. TONEZONE3000 will not sell them!
These features are live now! Check them out: https://tonezone3000.com

What should we build next?
 
You’ve done a great job of tackling some of the main barriers to entry and making it as simple as I can possibly imagine, so with that in mind:

- Batch processing (both of training files and of their naming/metadata). Check out tools like Myriad and A Better Finder Rename for the kind of batch ideas I have in mind.
- More tools for organising files. Maybe use metadata to sort them into folders based on amp/channel/modes/date, with something that updates on the fly, like smart iTunes playlists but for NAM files.
- An easy process for making “hyper accuracy” models.
- Maybe something geared towards parametric models (pedals might be a good place to start?).
- Ability to audition models with your own IR loaded (maybe this is possible already?).

I guess it comes down to looking at what’s most tedious (naming/organising/training files) and then making a way to find what you need with the least fuss.
 
wow super helpful thank you!!!
 
Less helpful than Mirror's suggestions, but as someone who is absolutely new to even reamping, a video walkthrough of the whole process of using Tonezone would be cool!

I'm sure it's time-consuming and the juice may not be worth the squeeze, but just a thought!
 
It would be awesome to have a checkbox, instead of a drop-down menu, to select all the architectures we want from a single reamp.

To explain: I have a reamp for which I want to train models with all architectures (Standard, Lite, Feather, Nano).

Right now, I need to upload the same reamp for each architecture.

Even if TZ3000 allocates different GPUs in the backend and runs separate training instances, having a checkbox that lets me specify which architectures I want would be slightly better from a user-experience point of view.
 
Got it! We will prioritize this feature for our roadmap.
 
Hi @staas

I am having occasional issues with the sweep signal: when I upload the processed signal, it sometimes generates an error saying it is the wrong length, possibly because it is hard to cut the processed audio at exactly the same location as the test signal, and the two apparently have to be identical. Would you happen to know the exact tempo I should set my DAW to so the signal ends exactly on a measure on the timeline?

Thanks in advance!

PS: Sorry for the PM; I meant to post here.
 
If you're doing the standard NAM reamp stuff, then setting the project tempo to 120 should make it all line up easily. Mine just snaps to the grid in Reaper; I export the reamped file, throw it into local training or TZ3000, and have never had a problem with it auto-aligning (when I did, it was some other weird factor, like using a noise gate, which I haven't done in years).
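
If you want to double-check the export before uploading, here's a rough sketch (assuming the soundfile package; the filenames are placeholders) that compares the two files:

Python:
    # Verify the reamped file matches the sweep's exact length (sketch).
    import soundfile as sf

    sweep = sf.info("input_sweep.wav")
    wet = sf.info("reamped_output.wav")

    print(f"sweep: {sweep.frames} samples @ {sweep.samplerate} Hz")
    print(f"wet:   {wet.frames} samples @ {wet.samplerate} Hz")
    if (sweep.frames, sweep.samplerate) != (wet.frames, wet.samplerate):
        print("Mismatch: re-export the wet file at the sweep's exact length.")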
 
First of all, thank you guys at TZ3000 for this awesome project! The incredibly fast parallel training of multiple output clips is so much better than local training and I'm saying that as someone with an RTX 4090 GPU.

Here would be my list of wishes:

  • Bulk Downloading: Please add a download button to the "Your Tones" page so that we can mark all the trained models that we want to download at once. Maybe I'm missing something here, but so far, I have to click on every model separately and only then am I able to download it.
  • Bulk Uploading: It should work the same way as choosing many different output clips for the local trainer, where you can define a core set of descriptions and then start the training for all of them.
  • Multi Architecture Training: I usually train standard and complex models for each output clip and I'd love to have that ability on TZ3000 as well.
  • Basic Parametric Models: Since most guitar pedals and power amps have three or fewer knobs, it would be nice to train them as parametric models instead of sets of 50+ captures.
  • Hyper Accuracy Mode: A couple of guys have already mentioned, here and in my private messages, that they'd like to try the hyper accuracy mode on their own captures. I've modified the core.py file to include a new architecture called "complex". Here it is: https://paste.ofcode.org/36GiiSFkJrSTkySVjCYnEM9
Since I've only created this code file today, I'm not sure if the code above uses the correct lr and lr_decay values for the complex training (I used to change them in the def train(...) section). I'd be grateful if an experienced Python programmer could chime in and confirm that the following lines work as intended in the code I posted above:

Python:
    if architecture == Architecture.COMPLEX:
        # Drop lr and lr_decay from the standard 0.004 / 0.007 down to 0.001
        lr = 0.001
        lr_decay = 0.001
These lines are integrated into the def train() section (rows 1418 to 1420). The intention behind this if statement is that lr should be 0.004 and lr_decay should be 0.007 (the standard values) for every architecture except the complex one, where both variables need to be set to 0.001. This core.py file can be placed in the following directory if the GitHub repository was installed via pip in an Anaconda prompt:

C:\Users\ ... \anaconda3\Lib\site-packages\nam\train

The same goes for other local installation methods: open the nam\train folder, copy the code from the paste above into a new text file, save it as core.py, and copy that core.py into the nam\train directory.
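
If you're not sure where your install lives, this rough sketch (assuming the nam package is importable in your environment) prints the folder the modified core.py needs to go into:

Python:
    # Locate the installed nam/train directory (sketch, not part of NAM itself).
    import os
    import nam.train

    print(os.path.dirname(nam.train.__file__))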

You guys at TZ3000 would only have to include this section, where the complex architecture is defined, as the lr and lr_decay variables are already exposed to the user:

Python:
        Architecture.COMPLEX: {
            "layers_configs": [
                {
                    # First layer set: 32 channels, two dilation runs of 1..512
                    "input_size": 1,
                    "condition_size": 1,
                    "channels": 32,
                    "head_size": 8,
                    "kernel_size": 3,
                    "dilations": [1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1, 2, 4, 8, 16, 32, 64, 128, 256, 512],
                    "activation": "Tanh",
                    "gated": False,
                    "head_bias": False,
                },
                {
                    # Second layer set: narrows 32 -> 8 channels, with head bias
                    "condition_size": 1,
                    "input_size": 32,
                    "channels": 8,
                    "head_size": 1,
                    "kernel_size": 3,
                    "dilations": [1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1, 2, 4, 8, 16, 32, 64, 128, 256, 512],
                    "activation": "Tanh",
                    "gated": False,
                    "head_bias": True,
                },
            ],
            "head_scale": 0.02,
        },

So far, I've settled on 1200 epochs for the hyper accuracy training, mainly because I preferred the models with longer training times, but this is based on an experiment with just two output clips and various epoch counts (800, 1000, 1200, 1400, 1600). It's not very scientific, I know, so maybe 1000 epochs would be good enough.
 
Thank you so much for the amazing feedback! We're working on a big update that will include some of your requests. I handle design at TONEZONE3000, so I'll share your code with my co-founder, who manages the engineering.
 