If they REALLY want me to stand up and take notice: Neural be like... here you go `next major update`
And this is me..
v4.0: ADD USB AUDIO CHANNELS TO LOOP/ SEND/ RECEIVE BLOCKS.
I still have to do IRs. That's the real scary part.

I'm working up the courage to do a full factory reset and start from scratch. Would probably want to identify a few key presets and upload those to the cloud first. But as things stand, after using the QC for ~3 years, I still have trouble searching for captures on the device.
Well, no trouble searching; just not much luck finding.
Hmmm. Not sure really. I've got 9 amps here I want to capture, and wanted a folder for each amp... then two sub-folders for amp-only and amp+cab options. So that right there is 27 folders!

Dunno. I've only made 7 so far. 30 seems like overkill to me.
I think you can have up to 30 primary folders, not 30 folders altogether. Wait a sec. I'll test.
Easy workaround: don't use IRs.
What I don't really get is that NDSP make a big deal about their TINA system and how quickly they can churn out models, but this doesn't seem to be reflected in their firmware updates.
Succkkkkaaagggeeeeee

Oh! It's 30 folders in total.
It isn't that clear to me how it would speed up model creation. I think I get the robot part of it... have a robot turn the amp's knobs for you... set up a bunch of Reaper scripts or Python control to build a system where you can loop record and label audio with the relevant names and metadata that match the takes and the amp settings... and this means you capture enough reamped recordings through the amp at each of those settings without having to babysit the system too much... walk away and come back a day later and you've got all your recordings in order to train your model... There's definitely time that can be saved here by automating the entire system.

There's nothing to get because it makes no sense whatsoever. Another unforced communication error where they - under no public pressure whatsoever - brag about something that they know is inaccurate and then contradict themselves further in subsequent interviews.
There's never any accountability or consequences, so why wouldn't they keep doing it?
It was my assumption that the training process was built into the TINA software. So it would capture a device with a number of permutations of knob positions (with an awareness of those positions, and probably auto-detecting which movements were most relevant along the way), and then burn those captures into key knob positions in the resulting model, with interpolation for intermediate points not captured.
But the time you save there, you're gonna spend on training the model because you have a lot more data to account for???
What I don't quite get is how you plug all of that data into a machine learning engine that can produce a model incorporating all of it.
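To make the "key knob positions plus interpolation" assumption concrete, here's a toy sketch (purely illustrative: the knob positions, the single made-up parameter, and the idea that model parameters interpolate linearly are all assumptions, not anything NDSP has confirmed):

```python
import numpy as np

# Hypothetical: suppose the system captured a model at five fixed positions
# of one knob, and each capture boils down to a parameter value.
captured_positions = np.array([0.0, 0.25, 0.5, 0.75, 1.0])  # knob travel, 0-1
captured_params = np.array([0.1, 0.4, 1.2, 2.6, 4.0])       # made-up values

def params_at(knob: float) -> float:
    """Estimate the model parameter at an uncaptured knob position
    by linear interpolation between the nearest captured positions."""
    return float(np.interp(knob, captured_positions, captured_params))

# A setting halfway between two captures lands between their parameters:
p = params_at(0.625)
assert 1.2 < p < 2.6
```

Whether a real amp model behaves smoothly enough between captures for this kind of interpolation to sound right is exactly the open question in the thread.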
Those will go fast. They're clearly intended for broad strokes like [Band Name] or [2024] or [High Gain]... Beyond that, you're still going to have to rely on a personal naming convention of some kind. What a strange thing to skimp on. Directory structure and naming has to be pretty cheap from a memory perspective.
If they got all of that right, they could theoretically do full amp modeling with no human interaction, though they'd presumably want to QA it to look for anomalies.
Well, that explains the number of new amp models in their recent firmware updates. Maybe they built TINA without doing the math in advance.

It takes about 3 minutes to capture an amp using QC. Assume their internal tech takes the same amount of time, and that the amplifier has these knobs:
Gain
Bass
Middle
Treble
Depth
Resonance
Master
And assume you're capturing 10 positions of each knob. That would be 10,000,000 permutations. It would take approximately 57 years of continuous profiling to capture all 10 million permutations, assuming it takes 3 minutes per permutation.
If you assume 3 positions of each knob, it would take approximately 4.56 days of continuous profiling to capture all 2,187 permutations, assuming it takes 3 minutes per permutation.
So I'd think interpolation would almost certainly be necessary, even with robotic automation of knob positions in sync with a scripted recording+capturing system.
It was also really weird that they made the TINA announcement at the same time as the big PCOM (finally) announcement.

What's funny to me is how NDSP announced this TINA thing almost three months ago, only to later note that CorOS 3.1.0 includes no new amps.
Are we talking about random "seed" values, with subsequent values chosen according to the extent to which deltas have an impact? Otherwise, I can't see any value in randomizing versus using an even distribution (however coarse) across the full range. Random distributions might be better, and they might be worse... at random.

You can read about NeuralDSP's approach here: https://arxiv.org/html/2403.08559v1
They don't use just X fixed values for each knob but a randomized sample. While this might mean that the model is not accurate at every single possible permutation, in practice it's probably in that "nobody cares" territory where you can't really notice it.
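A toy illustration of that trade-off (a sketch only, not NDSP's pipeline - the knob names and sample size are made up): sampling knob settings at random decouples the number of captures from the number of knobs, whereas an even grid grows exponentially with each knob added.

```python
import random

KNOBS = ["Gain", "Bass", "Middle", "Treble", "Depth", "Resonance", "Master"]

def random_settings(n: int, seed: int = 0) -> list[dict[str, float]]:
    """Draw n knob settings uniformly at random over each knob's travel (0-1)."""
    rng = random.Random(seed)
    return [{knob: rng.random() for knob in KNOBS} for _ in range(n)]

# An even 10-step grid needs 10**7 captures; a random sample can be any size:
sample = random_settings(2000)
print(len(sample), 10 ** len(KNOBS))  # 2000 vs 10000000
```

The cost, as the quoted post says, is that coverage is uneven, so accuracy at any one specific permutation isn't guaranteed - just likely good enough that nobody notices.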