NDSP Quad Cortex

Neural Capture Version 2 is an advanced evolution of Neural Capture, trained via Cortex Cloud for higher resolution and more detailed results. Instead of processing captures on Quad Cortex itself, V2 uses Cortex Control and Cortex Cloud to analyze your device with a more advanced algorithm.

(...)

Because the training happens in the cloud, V2 requires an internet connection and takes slightly longer to complete than V1 due to the higher resolution.

Interesting how they sort of took the Kemper v2 route - this is cool, but it sucks that "good" captures can no longer be created in-device, let alone without an internet connection.

Really nice update otherwise. A Micropitch model! ❤️
 
Listening to the GuitarJon video, it just seems like the cloud approach is going to inevitably beat any onboard solution by the fact that there will always be way more compute available via that method.

It makes me wonder if any onboard capture process will ever have a long half-life.

The thing that bothers me is that this all depends on a cloud service existing, forever, in order for the functionality to work. Kemper mk2 at least offloads profiling to your computer.

It's also something that bothers me about Helix Stadium's upcoming Proxy & stem separation. Then again, I don't have much use for captures to begin with...
 
Listening to the GuitarJon video, it just seems like the cloud approach is going to inevitably beat any onboard solution by the fact that there will always be way more compute available via that method.

It makes me wonder if any onboard capture process will ever have a long half-life. Also makes me wonder if it would ever be possible to “store” the initial capture and then have the ability to reproduce a new/improved version of it when the cloud-based method gets updated or improved over time.
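In principle that would just mean archiving the raw reamp pair alongside the finished capture, so the same recordings could be re-run through a future trainer. A minimal sketch of what storing it could look like (all file names and metadata fields here are made up for illustration, nothing official from NDSP):

```python
import json
import shutil
from pathlib import Path

def archive_capture_session(test_signal_wav: Path, amp_response_wav: Path,
                            out_dir: Path, capture_version: str) -> None:
    """Keep the raw (test signal, amp response) pair so a future,
    improved trainer could re-run on the exact same recordings."""
    out_dir.mkdir(parents=True, exist_ok=True)
    shutil.copy(test_signal_wav, out_dir / "test_signal.wav")
    shutil.copy(amp_response_wav, out_dir / "amp_response.wav")
    metadata = {
        "capture_version": capture_version,  # e.g. "v1" or "v2"
        "sample_rate_hz": 48000,             # assumed; whatever the device records at
        "notes": "raw reamp pair, kept for possible retraining later",
    }
    (out_dir / "metadata.json").write_text(json.dumps(metadata, indent=2))
```

The catch, as noted further down the thread, is that an improved process may also use a new test signal, in which case the archived pair only gets you so far.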

I don’t know enough about NAM, but if it’s open source, I’d be curious if Neural (or anyone for that matter) isn’t to some degree retrofitting it, or how much of this new process is truly novel. Not that it matters much if you’re a QC owner, but I would be curious if this isn’t to some degree a NAM variant in its approach.

Does anyone know if Tonex is using a NAM-like process or is their Cloud approach wholly different?
There's been maybe the biggest influx of capital in human history into legitimate AI tech over the past 5 years. No hardware at your feet will ever be able to do what can be done when the job is sent off to a server running even basic stuff. NAM was created with Google tools meant for speech recognition, and that was years ago now. It can almost perfectly replicate an amp, down to a -60 dB difference.

I would assume, and this is an assumption, that Neural and/or IK have also not written their latest stuff from scratch and are likely using tools available to them in a similar manner, repurposed for their usage. It's going to be pretty hard to outdo companies worth multiple hundreds of billions of dollars, with teams of thousands of the smartest people on the planet, when you're a team of 3-4 writing your own code. This is also why I feel (despite being shot down every time I mention it) that ultimately the end game of this space will not be component modelling but entire amps made with a dataset of captures fed into these models.
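For context on what "down to a -60 dB difference" means in practice (assuming it refers to the level of the null-test residual relative to the real amp's output, which is how these comparisons are usually run), here's a rough sketch of the measurement with made-up signals:

```python
import numpy as np

def residual_db(real_amp: np.ndarray, model_out: np.ndarray) -> float:
    """Null-test comparison: level of (real - model) relative to the
    real signal, in dB. -60 dB means the residual is about 1/1000th
    the amplitude of the target."""
    residual_rms = np.sqrt(np.mean((real_amp - model_out) ** 2))
    target_rms = np.sqrt(np.mean(real_amp ** 2))
    return 20.0 * np.log10(residual_rms / target_rms)

# Toy example: a model output that deviates by 0.1% in amplitude
t = np.linspace(0, 1, 48000)
real = np.sin(2 * np.pi * 440 * t)
model = 0.999 * real
print(f"{residual_db(real, model):.1f} dB")  # ~ -60.0 dB
```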

Side note: I was playing around with Gemini 3 this week and uploaded a photo of my dog and told it to make a video from it. I honestly couldn't tell that it wasn't just a video I took. It was the first time I've actually not been able to distinguish AI-generated content within a few seconds. Compare that to the first video of Will Smith eating spaghetti 2-3 years ago and you can see the trajectory. It's extremely impressive and, at the same time, just as frightening.
 
There's been maybe the biggest influx of capital in human history into legitimate AI tech over the past 5 years. No hardware at your feet will ever be able to do what can be done when the job is sent off to a server running even basic stuff. NAM was created with Google tools meant for speech recognition, and that was years ago now. It can almost perfectly replicate an amp, down to a -60 dB difference. I would assume, and this is an assumption, that Neural and/or IK have also not written their latest stuff from scratch and are likely using tools available to them in a similar manner, repurposed for their usage.

Thanks for the info. I don't know enough about it, but I was curious whether essentially all these companies are going to repurpose the same thing, but from within their own ecosystem, with a catchy name (when inevitably it all ends up being some NAM variant).
 
Thanks for the info. I don't know enough about it, but I was curious whether essentially all these companies are going to repurpose the same thing, but from within their own ecosystem, with a catchy name (when inevitably it all ends up being some NAM variant).
I think the answer is... kinda, but not really? Same result, slightly different way to arrive there. Getting to the same website from multiple search engines, kinda thing. Honestly, I think the tech will become so accessible that hardware, support, user experience, and effects will be the main deciding factors. It's kinda already been sliding in that direction with the latest-gen stuff anyways.

The thing amp modelling has that other tech doesn't is an end game: when you can exactly replicate the sound of the amp with ZERO discrepancies, you can't do better unless you make new creations. We've already passed the point of really anyone being able to tell in a blind test, but it will continue to get better until it's replicated exactly. Then we'll all have to argue about something else.
 
Man, don't sweat it. There are just so many great options right now. Stadium is going to be amazing, and I have to admit I was even a little tempted by the FAS AM4 announcement this week. (This QC update has cured me of that, so $700 saved LOL.)
I'm not! Right now I'm using my Stomp, and I plan on getting a Stadium (non-XL). If the QC gets Mesa PCOM by then, though…
 
The new V2 capture sounds and feels really good! NAM-like? Hmmm.
But... there is no batch process. :facepalm:
I hope this will come in the future.
 
There's a V2/V1 symbol difference so you can tell them apart at a glance in the app.

 
I strongly recall that when Doug was active at T.O.P pre-promoting the QC, he was asked several times if the QC amp models were component modeled; each time he clearly replied along the lines of "it's a mix of component modeling and capturing".

Assuming ^this^ was/is the case, and that each new V2 capture only takes 8-10 minutes, surely it would have to be very high on their to-do list to update all the current QC amps to the V2 standard?

And no... I'm not being facetious.
 
How is the v2 cloud-capture stuff handled?

My understanding is that you're limited to 10 v2 training sessions each day.

If you have, say, 20 reamps of 1 amp done in a single day, do 10 of them get processed and the other 10 queued up? Sometimes you have a limited amount of time with an amp - just curious how that's tackled if anybody's tried it.
 
Listening to the GuitarJon video, it just seems like the cloud approach is going to inevitably beat any onboard solution by the fact that there will always be way more compute available via that method.
Absolutely. There's going to be a ceiling where just throwing more hardware at it won't do anything but make the capture creation faster.

It makes me wonder if any onboard capture process will ever have a long half-life. Also makes me wonder if it would ever be possible to “store” the initial capture and then have the ability to reproduce a new/improved version of it when the cloud-based method gets updated or improved over time.
The test signal is likely to change, so no. The QC V2 capture signal sounds totally different from the V1 sweep.
While you could, say, run the V1 test signal through the V2 process, the end result would likely lose something compared to using the new test signal.

I don’t know enough about NAM, but if it’s open source, I’d be curious if Neural (or anyone for that matter) isn’t to some degree retrofitting it, or how much of this new process is truly novel. Not that it matters much if you’re a QC owner, but I would be curious if this isn’t to some degree a NAM variant in its approach.

Does anyone know if Tonex is using a NAM-like process or is their Cloud approach wholly different?
NAM's capture process is not so complicated that others wouldn't be able to make their own similar version. They might use a different neural net model, with different settings, etc. I'm sure they've done their own research on what they think will give the best quality-to-performance ratio. For cloud-based solutions, they will utilize the tools their cloud provider (e.g. Amazon) provides.

The process is the same for all of them: record a goal signal (the test signal played through your amp), then train a neural network to reproduce that goal signal from the test signal input.
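As a toy sketch of that loop (not NDSP's, IK's, or NAM's actual code; just the generic shape of the idea, with a tiny stand-in network and fake signals):

```python
import torch
import torch.nn as nn

# Tiny stand-in for the capture network; real systems (NAM, QC, ToneX)
# use much larger WaveNet-style or recurrent models.
model = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=64, padding=63),
    nn.Tanh(),
    nn.Conv1d(16, 1, kernel_size=1),
)

test_signal = torch.randn(1, 1, 48000)      # the sweep/noise sent into the amp
goal_signal = torch.tanh(3 * test_signal)   # stand-in for the recorded amp output

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Train the network to turn the test signal into the goal signal
for step in range(1000):
    pred = model(test_signal)[..., : goal_signal.shape[-1]]  # trim padding overhang
    loss = loss_fn(pred, goal_signal)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Where the products differ is mainly the network architecture, the loss function, the test signal itself, and how much compute they can throw at training, which is exactly where a cloud backend has the edge over onboard hardware.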
 