The Kemper invention adjusts a number of parameters of an underlying amp model until an error signal reaches a minimum. You feed the outputs of the underlying model and of the DUT (Device Under Test) into a difference block to generate an error signal, then adjust the parameters to minimize that error, presumably using techniques like gradient descent. The hard part is determining what to measure to generate the error, and devising test signals that measure it.
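To make the idea concrete, here is a minimal sketch of that loop, not the patented algorithm itself: a toy parametric "amp model" (a tanh waveshaper with a gain and a level knob, purely my assumption for illustration) is fitted to a simulated DUT output by gradient descent on the mean squared error.

```python
# Minimal sketch of parametric fitting against a DUT.
# The tanh model, the test signal, and all constants are illustrative
# assumptions, not anything taken from the Kemper patent.
import numpy as np

rng = np.random.default_rng(0)
test_signal = rng.uniform(-1.0, 1.0, 4096)   # stand-in for a test tone

def model(params, x):
    gain, level = params
    return level * np.tanh(gain * x)         # toy parametric amp model

# Pretend the DUT is a black box; we only observe its output.
dut_output = model(np.array([3.0, 0.7]), test_signal)

def error(params):
    # difference block + mean squared error
    return np.mean((model(params, test_signal) - dut_output) ** 2)

params = np.array([1.0, 1.0])
lr, eps = 0.2, 1e-5
for _ in range(5000):
    # numerical gradient of the error w.r.t. each parameter
    grad = np.array([
        (error(params + eps * np.eye(2)[i]) - error(params)) / eps
        for i in range(2)
    ])
    params -= lr * grad          # gradient-descent update

print(params, error(params))     # converges toward gain=3.0, level=0.7
```

The point of the sketch is that the thing being optimized is a small, fixed set of model parameters ("knobs" of an amp model), which is what distinguishes this approach from the one below.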
Machine learning is not an infringement of that patent. It trains a neural network, which is by definition not a parametric amp model.
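By contrast, here is a minimal sketch of what a machine-learning capture looks like: fit a small neural network directly to recorded input/output audio, with no amp-model knobs anywhere. The architecture and sizes are my own assumptions and are far smaller than what ToneX or NAM actually use.

```python
# Minimal sketch of "capture by machine learning": learn the device's
# input->output mapping directly. Everything here is illustrative.
import torch
import torch.nn as nn

torch.manual_seed(0)
x = torch.rand(8192, 1) * 2 - 1      # test signal fed to the device
y = 0.7 * torch.tanh(3.0 * x)        # stand-in for the recorded DUT output

# Tiny memoryless network; real capture models (e.g. WaveNet-style)
# also see a window of past samples to model memory effects.
net = nn.Sequential(nn.Linear(1, 16), nn.Tanh(), nn.Linear(16, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)

for _ in range(2000):
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(x), y)
    loss.backward()
    opt.step()

print(loss.item())   # the net learns the transfer curve, not "amp knobs"
```

Note that the two sketches optimize very different things: the first tunes a handful of meaningful amp-model parameters, the second tunes hundreds of opaque network weights.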
Now, whether a given device actually uses AI/ML/NN is something only its developers know. However, some interesting data points are available:
- Training a ToneX model takes about 20 minutes on a modern GPU.
- The same ToneX training takes over three hours on a modern CPU.
- Training a NAM model takes "a couple hours" on a modern GPU.
Now, the device being discussed is somehow able to train a neural network in a minute or two on a low-power DSP with no vector, AI, or multimedia instructions. If you were to use that DSP to train a ToneX or NAM model, assuming it even has enough addressable memory, I would estimate training times on the scale of days; a rough back-of-envelope follows below. Interestingly, the test tones generated by said device are remarkably similar to those used by the Kemper.
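For what it's worth, here is the back-of-envelope behind that "scale of days" estimate, scaled from the CPU figure quoted above. The CPU-to-DSP throughput ratio is entirely an assumption on my part; only the three-hour CPU figure comes from the data points listed earlier.

```python
# Rough back-of-envelope, Python as a calculator. Only the CPU training
# time is a quoted figure; the throughput ratio is an assumption.
cpu_training_hours = 3    # ToneX on a modern CPU, quoted above
cpu_dsp_ratio = 50        # ASSUMED: scalar low-power DSP ~50x slower
                          # than a desktop CPU at this workload

dsp_training_days = cpu_training_hours * cpu_dsp_ratio / 24
print(f"roughly {dsp_training_days:.0f} days")   # ~6 days under these assumptions
```

Even if the assumed ratio is off by a factor of a few in either direction, the result stays in the days-to-weeks range, nowhere near a minute or two.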