Quad Cortex plugin support officially goes from "soon" to "eventually"

Crazy thought for the day: cash in ToneX + HX Stomp and finally see what FM3 is all about. @Gearzilla approved?
I think if you’re into digital modeling, Fractal Audio is something you should try at some point. It’s good, and it gives you a frame of reference.

Whenever I’ve owned something from them, I’ve used Axe-Edit for the lion’s share of the setup. It’s really easy; then you play. To me the editor is straightforward and effective.

As far as my approval goes, it’s simple: whatever gets you to fiddle less and play more is my recommendation.

The strength of working with Fractal Audio is the ability to get to the tone you want. For me that amounted to having some go-to IRs and a minimal set of adjustments (or none at all): boost, saturation, input level, bright switch, depth, presence, and impedance curve, combined of course with the basic amp controls.

That said, the modeling sounds representative of the amps even without any tweaks. The variety of adjustments comes in handy when you want a slightly different flavor or need to account for differences between guitars.

To me, if you can swing an overlap while checking it out, that’s the way to go; then, after evaluating, get rid of whatever you’re not really going to use, because prolonged A/B-ing is the death of musical output by a thousand cuts.
 
Found this in a QC user group:

[attached image]
 
Now, whether a certain device actually uses AI/ML/NN is something only the developers of said device know. However, some interesting data points are available:
- It takes about 20 minutes, for example, to train ToneX using a modern GPU.
- It takes over three hours to train it using a modern CPU.
- Training time for NAM is "a couple hours" using a modern GPU.

Now the device being discussed is somehow able to train a neural network in a minute or two using a low-power DSP with no vector, AI, or multimedia instructions. If you were to use this DSP to train a ToneX or NAM model, assuming it even has enough addressable memory, I would estimate training times on the order of days. Interestingly, the test tones generated by said device are remarkably similar to those used by the Kemper.
To be fair, that's for the advanced training on these; you can use less advanced options that take less time. Apparently both ToneX and NAM are PyTorch-based solutions.

From what I remember, the test signals on the QC sounded very similar to what I hear from my 10+ year-old Denon AVR1610 receiver's Audyssey room-correction feature.
 
Now the device being discussed is somehow able to train a neural network in a minute or two using a low-power DSP with no vector, AI, or multimedia instructions.

Very interesting take.

By all accounts, the device in question is supposed to profile amps via NN training, and I never understood how they managed to make this happen in ~5 min running on standard SHARC+ DSPs.
 
The Kemper invention is adjusting a number of parameters of an underlying amp model until the error signal is at a minimum. You feed the output of the underlying model and the output of the DUT (Device Under Test) into a difference block and generate an error signal, then adjust the various parameters to minimize that error, presumably using techniques like gradient descent. The hard part is determining what to measure to generate the error and devising test signals to measure it.
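That loop can be sketched in a few lines of Python. This is purely illustrative, not Kemper's actual algorithm: the "amp model" here is a hypothetical one-knob tanh waveshaper, and the DUT is simulated by the same model with a hidden gain setting.

```python
import math

# Hypothetical illustration only: the "amp model" is a one-knob tanh
# waveshaper, and the "DUT" is simulated by the same model with an
# unknown gain. A real profiler measures far more than this.
def amp_model(x, gain):
    return math.tanh(gain * x)

TRUE_GAIN = 3.0  # pretend this is the DUT's hidden setting
test_signal = [i / 50.0 - 1.0 for i in range(101)]  # ramp from -1 to +1
dut_output = [amp_model(x, TRUE_GAIN) for x in test_signal]

def error(gain):
    # Mean squared difference between model output and DUT output.
    return sum((amp_model(x, gain) - y) ** 2
               for x, y in zip(test_signal, dut_output)) / len(test_signal)

# Gradient descent on the single parameter, using finite differences.
gain, lr, eps = 1.0, 2.0, 1e-5
for _ in range(500):
    grad = (error(gain + eps) - error(gain - eps)) / (2 * eps)
    gain -= lr * grad

print(round(gain, 2))  # converges toward TRUE_GAIN
```

The point is that every adjusted quantity corresponds to a knob on an underlying amp model, which is what the patent claims cover.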

Machine Learning is not an infringement of that patent. It trains a neural network whose weights are not parameters of an underlying amp model.
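For contrast, here is a toy sketch of the ML approach (my own illustration, not ToneX's or NAM's actual architecture): the same kind of input/output capture is fit by a tiny generic network whose weights don't correspond to any knob on an amp model.

```python
import math
import random

random.seed(0)  # deterministic for the example

# Input/output pairs captured from a hypothetical DUT (simulated here).
def dut(x):
    return math.tanh(3.0 * x)

xs = [i / 20.0 - 1.0 for i in range(41)]
ys = [dut(x) for x in xs]

# A tiny one-hidden-layer network. None of these weights corresponds to
# a knob on an underlying amp model; they are generic coefficients.
H = 4
w1 = [random.uniform(-1.0, 1.0) for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-1.0, 1.0) for _ in range(H)]

def net(x):
    return sum(w2[j] * math.tanh(w1[j] * x + b1[j]) for j in range(H))

def loss():
    return sum((net(x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

# Crude full-batch gradient descent via finite differences. Real tools
# (PyTorch etc.) use backpropagation, which is far faster.
params = [(w1, j) for j in range(H)] + [(b1, j) for j in range(H)] \
       + [(w2, j) for j in range(H)]
lr, eps = 0.3, 1e-4
initial_loss = loss()
for _ in range(800):
    grads = []
    for arr, j in params:
        saved = arr[j]
        arr[j] = saved + eps
        up = loss()
        arr[j] = saved - eps
        down = loss()
        arr[j] = saved
        grads.append((up - down) / (2.0 * eps))
    for (arr, j), g in zip(params, grads):
        arr[j] -= lr * g

print(initial_loss, loss())  # loss drops as the net learns the mapping
```

Same data, same error minimization, but what gets adjusted is a pile of anonymous weights rather than "gain", "presence", and so on.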

Now, whether a certain device actually uses AI/ML/NN is something only the developers of said device know. However, some interesting data points are available:
- It takes about 20 minutes, for example, to train ToneX using a modern GPU.
- It takes over three hours to train it using a modern CPU.
- Training time for NAM is "a couple hours" using a modern GPU.

Now the device being discussed is somehow able to train a neural network in a minute or two using a low-power DSP with no vector, AI, or multimedia instructions. If you were to use this DSP to train a ToneX or NAM model, assuming it even has enough addressable memory, I would estimate training times on the order of days. Interestingly, the test tones generated by said device are remarkably similar to those used by the Kemper.
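A rough back-of-envelope along those lines (the throughput figures below are my assumptions, not measured specs of any GPU or of the SHARC+ core):

```python
# All throughput figures are assumptions for illustration only.
gpu_flops = 5e12   # assumed sustained throughput of a modern GPU, FLOP/s
dsp_flops = 1e9    # assumed throughput of a SHARC-class DSP core, FLOP/s
gpu_minutes = 20   # ToneX GPU training time, per the post above

# Naive scaling: same work on a slower chip, ignoring memory limits.
dsp_minutes = gpu_minutes * (gpu_flops / dsp_flops)
print(dsp_minutes / (60 * 24))  # about 69 days
```

Even if the assumed ratio is off by an order of magnitude, the result lands in days-to-weeks territory, nowhere near a minute or two.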
Interesting take.
Is it possible that Neural are just using far higher error margins?
The gain structure doesn't have to be spot-on; most of the heavy lifting is done by shaping filters anyway.
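For reference, a shaping filter of the kind mentioned can be as simple as a one-pole low-pass, a generic DSP building block (nothing QC-specific, just an illustration):

```python
import math

def one_pole_lowpass(samples, cutoff_hz, sample_rate=48000.0):
    """First-order low-pass: y[n] = y[n-1] + a * (x[n] - y[n-1])."""
    a = 1.0 - math.exp(-2.0 * math.pi * cutoff_hz / sample_rate)
    y = 0.0
    out = []
    for x in samples:
        y += a * (x - y)
        out.append(y)
    return out

# DC (a constant signal) passes through; a fast alternation is attenuated.
dc = one_pole_lowpass([1.0] * 2000, 1000.0)
nyq = one_pole_lowpass([1.0 if i % 2 == 0 else -1.0 for i in range(2000)],
                       1000.0)
```

Cascade a few of these (plus high-pass and shelf variants) and you can reshape a tone quite heavily without the underlying gain model being exact.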
 
To be fair, that's for the advanced training on these; you can use less advanced options that take less time. Apparently both ToneX and NAM are PyTorch-based solutions.
What a weird "to be fair" interjection, lol; that changes almost nothing about the analysis.
 
Not to distract from the fact that the QC's UI is quite blatantly copied from Helix, but calling a plugin version of something that runs on external DSP "native" is about the most logical thing to do, and it has been done for a very long time for all kinds of software. Nobody would bat an eyelid at an Axe-FX Native or Kemper Native. I think it's fairly obvious they were always going to make a plugin version of the hardware.

See also: basically any plugin that used to run on TDM or AAX DSP.
 