NDSP Quad Cortex

Neural be like... here you go `next major update`
Oscar Peterson Smile GIF by Jazz Memes


And this is me..
Angry Fed Up GIF
If they REALLY want me to stand up and take notice:

v4.0: ADD USB AUDIO CHANNELS TO LOOP/SEND/RECEIVE BLOCKS.
 
I’m working up the courage to do a full factory reset and start from scratch. Would probably want to identify a few key presets and upload those to the cloud first. But as things stand, after using the QC for ~3 years, I still have trouble searching for captures on the device.

Well, no trouble searching; just not much luck finding. ;)
I still have to do IRs. That's the real scary part.
 
Dunno. I've only made 7 so far. 30 seems like overkill to me.
Hmmm. Not sure really. I've got 9 amps here I want to capture, and wanted a folder for each amp, then two sub-folders for amp-only and amp+cab options. So that right there is 27 folders!
 
I think you can have up to 30 primary folders. Not 30 folders altogether. Wait a sec. I'll test.

Also, didn't even notice there were sub-folders 'til you said it just now. I'm going to organise a little bit more.
 
What I don't really get is that NDSP make a big deal about their TINA system and how quickly they can churn out models, but this doesn't seem to be reflected in their firmware updates.

There's nothing to get because it makes no sense whatsoever. Another unforced communication error where they - under no public pressure whatsoever - brag about something that they know is inaccurate and then contradict themselves further in subsequent interviews.

There's never any accountability or consequences, so why wouldn't they keep doing it?
 
It isn't that clear to me how it would speed up model creation. I think I get the robot part of it: have a robot turn the amp's knobs for you, and set up a bunch of Reaper scripts or Python control to build a system that loop-records and labels the audio with names and metadata matching each take and its amp settings. That means you can capture enough reamped recordings through the amp at each of those settings without having to babysit the system too much; walk away, come back a day later, and you've got all your recordings in order to train your model. There's definitely time that can be saved here by automating the entire system.

But the time you save there, you're gonna spend on training the model because you have a lot more data to account for???

What I don't quite get is how you plug in all of that data into a machine learning engine that can produce a model that incorporates all of the data.
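For the record-and-label half of that, the glue code is easy to picture. Here's a rough Python sketch of an unattended capture loop; `robot`, `recorder`, the knob list, and every call on them are placeholders I've invented for illustration, not anything from NDSP's actual tooling:

```python
# Hypothetical sketch of an unattended reamp-capture loop.
# "robot" and "recorder" are imaginary stand-ins for whatever hardware
# control and DAW scripting (Reaper, Python, etc.) you actually have.
import itertools
import json
import time
from pathlib import Path

KNOBS = ["gain", "bass", "middle", "treble", "master"]
POSITIONS = [0.0, 0.5, 1.0]          # coarse 3-step sweep per knob

def set_knobs(robot, settings):
    """Ask the (imaginary) robot to physically dial in each knob."""
    for knob, value in settings.items():
        robot.move(knob, value)       # placeholder API
    time.sleep(1.0)                   # let the amp settle

def capture(recorder, take_path):
    """Play the reamp DI through the amp and record the result."""
    recorder.record(str(take_path) + ".wav", duration_s=180)  # ~3 min per take

def run_session(robot, recorder, out_dir=Path("captures")):
    out_dir.mkdir(exist_ok=True)
    for combo in itertools.product(POSITIONS, repeat=len(KNOBS)):
        settings = dict(zip(KNOBS, combo))
        take_name = "_".join(f"{k}{v:.1f}" for k, v in settings.items())
        set_knobs(robot, settings)
        capture(recorder, out_dir / take_name)
        # Sidecar metadata so every take stays matched to its knob state
        (out_dir / f"{take_name}.json").write_text(json.dumps(settings))
```

That covers the "walk away and come back a day later" bit; the model training is a whole separate problem.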
 
It was my assumption that the training process was built into the TINA software. So it would capture a device with a number of permutations of knob positions (with an awareness of those positions, and probably auto-detecting which movements were most relevant along the way), and then burn those captures into key knob positions in the resulting model, with interpolation for intermediate points not captured.

If they got all of that right, they could theoretically do full amp modeling with no human interaction, though they'd presumably want to QA the results to look for anomalies.
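Nobody outside NDSP knows how (or whether) they interpolate between captured positions, but the basic idea is easy to sketch. A toy Python example, assuming each captured knob position boils down to a flat parameter vector, which is almost certainly a gross simplification of a real neural model:

```python
# Toy illustration of interpolating between captured knob positions.
# Assumes each capture is just a flat parameter vector keyed by the gain
# knob position; a real model would be far more involved.
import numpy as np

# Hypothetical captures: gain position -> model parameters
captures = {
    0.0: np.array([0.10, 0.00, 0.30]),
    0.5: np.array([0.45, 0.20, 0.55]),
    1.0: np.array([0.90, 0.65, 0.80]),
}

def interpolated_params(gain: float) -> np.ndarray:
    """Linearly blend the two nearest captured positions."""
    positions = sorted(captures)
    if gain <= positions[0]:
        return captures[positions[0]]
    if gain >= positions[-1]:
        return captures[positions[-1]]
    for lo, hi in zip(positions, positions[1:]):   # find the bracketing pair
        if lo <= gain <= hi:
            t = (gain - lo) / (hi - lo)
            return (1 - t) * captures[lo] + t * captures[hi]

print(interpolated_params(0.75))   # halfway between the 0.5 and 1.0 captures
```

Whether a simple blend like that actually sounds right between, say, edge-of-breakup settings is exactly the part you'd want a human to QA.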
 
Oh! It's 30 folders in total.
Those will go fast. They're clearly intended for broad strokes like [Band Name] or [2024] or [High Gain]... Beyond that, you're still going to have to rely on a personal naming convention of some kind. What a strange thing to skimp on. Directory structure and naming has to be pretty cheap from a memory perspective.

But this does touch on a different question that popped into my head the other day: how long can NDSP continue to release plugins and offer support on Quad Cortex before that starts to push against storage constraints?
 

It takes about 3 minutes to capture an amp using QC. Assume their internal tech takes the same amount of time, and that the amplifier has these knobs:
Gain
Bass
Middle
Treble
Depth
Resonance
Master

And assume you're capturing 10 positions of each knob. With 7 knobs, that's 10^7 = 10,000,000 permutations. It would take approximately 57 years of continuous profiling to capture all 10 million permutations, assuming it takes 3 minutes per permutation.

If you assume 3 positions of each knob, that's 3^7 = 2,187 permutations, and it would take approximately 4.56 days of continuous profiling to capture them all, assuming it takes 3 minutes per permutation.

So I'd think interpolation would almost certainly be necessary, even with robotic automation of knob positions in sync with a scripted recording+capturing system.
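The arithmetic is easy to sanity-check in a few lines of Python:

```python
# Back-of-envelope check of the capture-time estimates above.
MINUTES_PER_CAPTURE = 3
KNOBS = 7

for positions in (10, 3):
    permutations = positions ** KNOBS
    days = permutations * MINUTES_PER_CAPTURE / (60 * 24)
    print(f"{positions} positions/knob: {permutations:,} permutations, "
          f"{days:,.2f} days ({days / 365:.1f} years)")

# 10 positions/knob: 10,000,000 permutations, 20,833.33 days (57.1 years)
# 3 positions/knob: 2,187 permutations, 4.56 days (0.0 years)
```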
 

There's some relevant discussion on the "Ten billion profiles/captures = one amp" thread, but tl;dr: that's simply not a practical approach. Even with very coarse knob settings.


Truth is, all we know about NDSP's TINA robot is that it automatically turns knobs in a clever way (skip repeated positions, reduce wear) - and this is from their own press release. How they use this data afterwards is anyone's guess.
 
Well, that explains the number of new amp models in their recent firmware updates. Maybe they built TINA without doing the math in advance. :rofl
 
You can read about NeuralDSP's approach here: https://arxiv.org/html/2403.08559v1

They don't use just X fixed values for each knob but a randomized sample. While this might mean that the model is not accurate at every single possible permutation, in practice it's probably in that "nobody cares" territory where you can't really notice it.
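Going by that reading of the paper, the sampling itself is trivial to sketch; the knob names and sample count below are placeholders, not numbers from the paper:

```python
# Minimal sketch of randomized (rather than grid) sampling of knob settings
# for training data. Knob names and the 500-sample count are made up here;
# the paper's actual conditioning setup may differ.
import random

KNOBS = ["gain", "bass", "middle", "treble", "depth", "resonance", "master"]

def random_configuration(rng: random.Random) -> dict:
    """Draw one knob configuration uniformly from the continuous range [0, 1)."""
    return {knob: rng.random() for knob in KNOBS}

rng = random.Random(42)
training_configs = [random_configuration(rng) for _ in range(500)]
# Each config becomes one conditioning vector paired with a reamped recording,
# so coverage of the control space scales with how many samples you take
# rather than exploding combinatorially like a fixed grid.
```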
 
What's funny to me is how NDSP announced this TINA thing almost three months ago, only to later note that CorOS 3.1.0 includes no new amps :LOL:
It was also really weird that they made the TINA announcement at the same time as the big PCOM (finally) announcement.

"We built a massive, insanely expensive robot that can turn physical knobs so that we're better prepared to convert our software plugins to run on a different processor." Hmm.
 
Are we talking about random "seed" values, with subsequent values chosen according to the extent to which deltas have an impact? Otherwise, I can't see any value in randomizing versus using an even distribution (however coarse) across the full range. Random distributions might be better, and they might be worse... at random.

(Confession: I did not read the white paper.)

(EDIT: OK, now I've skimmed it. It reads to me like they're intentionally not using random settings, but rather using an even distribution - with whatever level of granularity is "affordable" - traversed via a "traveling salesperson" algorithm. Which is something I'd never heard of until this minute. This still fails to answer @Orvillain's point above, which is that even extremely coarse distributions of controls will yield extremely long capture times.)
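For anyone else who'd never heard of it: the "traveling salesperson" part just means ordering the capture points so the robot does as little knob turning as possible between consecutive captures. A greedy nearest-neighbour pass in Python, purely to illustrate the idea, not whatever routing they actually use:

```python
# Greedy nearest-neighbour ordering of knob configurations, standing in for
# the "traveling salesperson" routing mentioned above. The only goal is to
# minimise total knob travel between consecutive captures.
import itertools

def knob_distance(a, b):
    """Total knob turning needed to get from setting a to setting b."""
    return sum(abs(x - y) for x, y in zip(a, b))

def greedy_route(configs):
    remaining = list(configs)
    route = [remaining.pop(0)]
    while remaining:
        nearest = min(remaining, key=lambda c: knob_distance(route[-1], c))
        remaining.remove(nearest)
        route.append(nearest)
    return route

# 3 positions on each of 2 knobs, just to keep the demo readable
grid = list(itertools.product([0.0, 0.5, 1.0], repeat=2))
print(greedy_route(grid))   # visits neighbouring settings instead of jumping around
```

That cuts down wear and transit time, but it doesn't reduce the number of captures, so it doesn't touch the combinatorics above.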
 