NDSP Quad Cortex

Seems a little lean, but I’m permanently spoiled by Line 6, even years later.

I’d honestly be more interested in more QoL improvements/bug fixes (depending on your point of view). Example: make the undo/redo buttons work with the f***ing block edit screen open. Make the looper switches fire properly when you’re editing looper params. Things like that matter more to me than an update to an amp that I may or may not even be able to hear, especially since I can grab a capture if a given model really isn’t doing it for me.
Neural be like... here you go `next major update`
Oscar Peterson Smile GIF by Jazz Memes


And this is me..
Angry Fed Up GIF
 
Just spent a few minutes this morning cleaning up and sorting my captures into folders, introduced in 3.0. Was quick and painless.
I’m working up the courage to do a full factory reset and start from scratch. Would probably want to identify a few key presets and upload those to the cloud first. But as things stand, after using the QC for ~3 years, I still have trouble searching for captures on the device.

Well, no trouble searching; just not much luck finding. ;)
 
I still have to do IRs. That's the real scary part.
 
Dunno. I've only made 7 so far. 30 seems like overkill to me.
Hmmm. Not sure really. I've got 9 amps here I want to capture, and wanted a folder for each amp, then two sub-folders for amp-only and amp+cab options. So that right there is 27 folders!
 
I think you can have up to 30 primary folders. Not 30 folder altogether. Wait a sec. I'll test.

Also, didn't even notice there were sub-folders 'til you said it just now. I'm going to organise a little bit more.
 
What I don't really get is NDSP make a big deal about their TINA system, and how quickly they can churn out models, but this doesn't seem to be reflected in their firmware updates.

There's nothing to get because it makes no sense whatsoever. Another unforced communication error where they - under no public pressure whatsoever - brag about something that they know is inaccurate and then contradict themselves further in subsequent interviews.

There's never any accountability or consequences, so why wouldn't they keep doing it?
 
It isn't that clear to me how it would speed up model creation. I think I get the robot part of it: have a robot turn the amp's knobs for you, then set up a bunch of Reaper scripts or Python control to build a system that loop-records and labels the audio with names and metadata matching each take and the amp settings it was recorded at. That way you can capture enough reamped recordings through the amp at each of those settings without having to babysit the system too much... walk away, come back a day later, and you've got all your recordings in order to train your model. There's definitely time that can be saved by automating the entire system.

But the time you save there, you're gonna spend on training the model because you have a lot more data to account for???

What I don't quite get is how you plug in all of that data into a machine learning engine that can produce a model that incorporates all of the data.
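For what it's worth, the "walk away and come back" part is easy enough to sketch. This is purely hypothetical Python — `set_amp_knobs` and `reamp_di_track` are made-up stand-ins for whatever robot/DAW control you'd actually have (Reaper scripts, serial commands to the robot, etc.) — but it shows the loop-record-and-label idea:

```python
# Hypothetical sketch of automated capture: sweep knob settings, reamp a
# DI track through the amp at each setting, and save every take with
# metadata that matches its amp settings.
import itertools
import json
from pathlib import Path

KNOBS = ["gain", "bass", "middle", "treble"]
POSITIONS = [0.0, 0.5, 1.0]  # coarse sweep: min, noon, max

def set_amp_knobs(settings):
    """Placeholder: command the robot to move each knob into position."""
    pass

def reamp_di_track(out_path):
    """Placeholder: trigger the DAW to reamp and record one take."""
    out_path.touch()

def capture_all(out_dir="captures"):
    out = Path(out_dir)
    out.mkdir(exist_ok=True)
    takes = []
    for combo in itertools.product(POSITIONS, repeat=len(KNOBS)):
        settings = dict(zip(KNOBS, combo))
        name = "_".join(f"{k}{v:.1f}" for k, v in settings.items())
        set_amp_knobs(settings)
        reamp_di_track(out / f"{name}.wav")
        takes.append({"file": f"{name}.wav", "settings": settings})
    # one manifest so every take can be matched to its settings later
    (out / "manifest.json").write_text(json.dumps(takes, indent=2))
    return takes

takes = capture_all()
print(len(takes))  # 3 positions ^ 4 knobs = 81 takes
```

The labeling is the important bit: without the manifest tying each file to its knob positions, the training side has nothing to condition on.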
 
It was my assumption that the training process was built into the TINA software. So it would capture a device with a number of permutations of knob positions (with an awareness of those positions, and probably auto-detecting which movements were most relevant along the way), and then burn those captures into key knob positions in the resulting model, with interpolation for intermediate points not captured.

If they got all of that right, they could theoretically do full amp modeling with no human interaction, though they'd presumably want QA to look for anomalies.
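If that's the approach, the interpolation half could look something like this in spirit. A toy Python sketch, where invented "model parameters" captured at a few gain positions are blended linearly for settings in between (real model interpolation would obviously be far more involved than a linear blend):

```python
# Toy illustration of interpolating between captured knob positions:
# parameters exist only at a few snapshots; intermediate settings are
# blended linearly between the two nearest captures.
import bisect

# hypothetical per-capture "model parameters" at gain = 0, 5, 10
snapshots = {
    0.0: {"drive": 0.1, "low_cut_hz": 80.0},
    5.0: {"drive": 0.6, "low_cut_hz": 120.0},
    10.0: {"drive": 1.0, "low_cut_hz": 200.0},
}

def params_at(gain):
    keys = sorted(snapshots)
    if gain <= keys[0]:
        return snapshots[keys[0]]
    if gain >= keys[-1]:
        return snapshots[keys[-1]]
    i = bisect.bisect_right(keys, gain)
    lo, hi = keys[i - 1], keys[i]   # nearest captured points around gain
    t = (gain - lo) / (hi - lo)     # blend weight between the two
    return {k: snapshots[lo][k] * (1 - t) + snapshots[hi][k] * t
            for k in snapshots[lo]}

print(params_at(7.5))  # halfway between the gain=5 and gain=10 captures
```

The open question from the posts above is exactly how good that in-between behavior can be when the real amp responds nonlinearly between the captured points.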
 
Oh! It's 30 folders in total.
Those will go fast. They're clearly intended for broad strokes like [Band Name] or [2024] or [High Gain]... Beyond that, you're still going to have to rely on a personal naming convention of some kind. What a strange thing to skimp on. Directory structure and naming have to be pretty cheap from a memory perspective.

But this does touch on a different question that popped into my head the other day: how long can NDSP continue to release plugins and offer support on Quad Cortex before that starts to push against storage constraints?
 

It takes about 3 minutes to capture an amp using the QC. Assume their internal tech takes the same amount of time, and that the amplifier has these knobs:
Gain
Bass
Middle
Treble
Depth
Resonance
Master

And assume you're capturing 10 positions of each knob. That would be 10,000,000 permutations. It would take approximately 57 years of continuous profiling to capture all 10 million permutations, assuming it takes 3 minutes per permutation.

If you assume 3 positions of each knob, it would take approximately 4.56 days of continuous profiling to capture all 2,187 permutations, assuming it takes 3 minutes per permutation.

So I'd think interpolation would almost certainly be necessary, even with robotic automation of knob positions in sync with a scripted recording+capturing system.
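The arithmetic above checks out; here's a quick Python sanity check of the same numbers:

```python
# Back-of-envelope from the post above: captures grow as
# positions ** knobs, at ~3 minutes per capture.
knobs = 7  # Gain, Bass, Middle, Treble, Depth, Resonance, Master
minutes_per_capture = 3

for positions in (10, 3):
    perms = positions ** knobs
    days = perms * minutes_per_capture / (60 * 24)
    print(f"{positions} positions/knob: {perms:,} captures, "
          f"{days:,.2f} days ({days / 365:.1f} years)")
```

10 positions per knob gives 10,000,000 captures (about 57 years of continuous profiling); 3 positions gives 2,187 captures (about 4.56 days), matching the figures above.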
 

There's some relevant discussion on the "Ten billion profiles/captures = one amp" thread, but tl;dr: that's simply not a practical approach. Even with very coarse knob settings.


Truth is, all we know about NDSP's TINA robot is that it automatically turns knobs in a clever way (skip repeated positions, reduce wear) - and this is from their own press release. How they use this data after, is anyone's guess.
 
Well, that explains the number of new amp models in their recent firmware updates. Maybe they built TINA without doing the math in advance. :rofl:
 