Is it me or is there some extra compression happening on the Toneocracy one that isn't happening with the original amp, NAM, and ToneX? There is a noticeable shift in volume when those palm mutes kick in in the Toneocracy clip.
Hmm, I thought I set the speaker compression thing to 0, but I'll double check.
EDIT: Yep, Speaker compression was at the default value. Good catch - turned it off and re-uploaded:
A properly captured model shouldn't need any additional speaker compression IMO. 99% of the time it's doing more damage than good, for my tastes.
Keep in mind that the Quad Cortex, Kemper, Headrush Prime et al. are not AI-based solutions - no matter how much certain companies would like you to think otherwise.
Hey... how is that? Kemper, QC and Headrush cloning use machine learning for their "profiles". NAM just borrowed the concept and made it free (just as GuitarML or Mooer MNRS did).
No, they don't. Kemper, QC, Headrush and Mooer all do different variants of the same concept, which is tone-matching a set of audio blocks, including a waveshaper. This can yield surprisingly accurate results, and it's relatively cheap to process - meaning it's well suited for the DSPs used in consumer gear, including the capture process.
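For anyone curious what "tone-matching a set of audio blocks" could look like, here's a rough, purely illustrative sketch - the block layout and method are my own guesses for illustration, not any vendor's actual capture code:

```python
# Purely illustrative sketch - not any vendor's actual algorithm.
# Estimate two "blocks" (a static waveshaper curve and an EQ correction)
# directly from an input/output pair, with no neural network involved.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 1 << 15)          # test signal sent into the "amp"
y = 0.8 * np.tanh(4.0 * x)                   # stand-in for the recorded amp output

# 1) Waveshaper block: bin input level vs. output level to get a static transfer curve.
bins = np.linspace(-1.0, 1.0, 65)
idx = np.digitize(x, bins)
curve = np.array([y[idx == i].mean() if np.any(idx == i) else 0.0
                  for i in range(1, len(bins))])

# 2) EQ block: ratio of output to input magnitude spectra gives a correction filter.
eq_mag = np.abs(np.fft.rfft(y)) / (np.abs(np.fft.rfft(x)) + 1e-9)

print("waveshaper curve (every 16th point):", np.round(curve[::16], 3))
print("EQ correction at a few bins:", np.round(eq_mag[[10, 1000, 8000]], 3))
```

Fitting fixed blocks like this is cheap enough to run on the device itself, which is part of the point being made above.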
Any profiler/capture which can be trained on the same device in a few minutes is NOT doing machine learning.
Considering that nowadays statistical modeling such as linear regression is marketed as "machine learning", I would be wary of any process that claims to be "machine learning".
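On the linear regression point, just to show how little computation an ordinary least-squares fit actually needs (illustrative example with made-up data):

```python
# Ordinary least squares: a closed-form solve, no training loop required.
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0.0, 10.0, 200)
y = 3.0 * x + 2.0 + rng.normal(0.0, 0.5, 200)    # noisy line

A = np.column_stack([x, np.ones_like(x)])        # design matrix [x, 1]
(slope, intercept), *_ = np.linalg.lstsq(A, y, rcond=None)
print(f"fit: y ~= {slope:.2f} * x + {intercept:.2f}")
```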
And NAM didn't just "borrow the concept" - NAM is fully AI-based: WaveNet, if I recall correctly. These models are very computationally intensive to run, and even more so to train; there's simply no way a QC or Headrush could run multiple capture and FX blocks on the DSP hardware they use.
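Roughly what the NAM-style approach involves, as far as I understand it: a small WaveNet-like stack of dilated convolutions trained to map the DI signal to the amp's output. This is just a toy sketch of the general idea, not NAM's actual architecture or code:

```python
# Toy WaveNet-ish capture sketch - illustrative only, not NAM's real model.
import torch
import torch.nn as nn

class TinyWaveNetish(nn.Module):
    def __init__(self, channels=8, dilations=(1, 2, 4, 8, 16)):
        super().__init__()
        self.inp = nn.Conv1d(1, channels, kernel_size=1)
        self.layers = nn.ModuleList(
            nn.Conv1d(channels, channels, kernel_size=2, dilation=d, padding=d)
            for d in dilations
        )
        self.out = nn.Conv1d(channels, 1, kernel_size=1)

    def forward(self, x):                    # x shape: (batch, 1, samples)
        h = self.inp(x)
        for conv in self.layers:
            h = torch.tanh(conv(h))[..., : x.shape[-1]]   # trim padding so each layer stays causal
        return self.out(h)

di = torch.rand(1, 1, 16384) * 2 - 1         # stand-in for the DI/test signal
target = torch.tanh(5.0 * di)                # stand-in for the recorded amp output

model = TinyWaveNetish()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for epoch in range(200):                     # loop over training epochs
    opt.zero_grad()
    loss = torch.mean((model(di) - target) ** 2)
    loss.backward()
    opt.step()
print("final training loss:", float(loss))
```

Even this toy version costs far more multiply-adds per sample than a short chain of fixed filter/waveshaper blocks, which is the gist of the computational-cost argument above.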
Kemper and QC captures have been proven to be much less accurate than NAM ones, so I don't find it strange that simpler captures can be done on less capable devices.
My point is more that, in the data world, there's a habit of rehashing existing processes with the latest buzzwords to create excitement, and when you cut through the rubbish you'll find it's the same concept people have been doing for years.
Can I ask how you know this?
Mostly based on Kemper's patents - f.ex. https://patents.justia.com/patent/11463057 . @FractalAudio wrote about this topic multiple times on the FAS forums, iirc.
Kemper's patent clearly states machine learning.
Yeah, I don't know - I'm just saying that, in Kemper's patent, one of the claims is the use of machine learning. I don't have the patent here, but I downloaded it when I wanted to read it.
Interesting - have an example handy? Every Kemper patent I found revolves around the concept of algorithmically "adapting" sound converters.
And I just asked how you know it for sure, to state so clearly that those brands don't use machine learning. Genuine question... maybe you've seen the code or whatever.
Again, mostly from patents, but you don't even need technical expertise for those: if a DSP-based device can train captures/profiles on-device in a very short time, it is simply not possible for it to be doing ML.
That depends on the definition of machine learning, and it also depends on the algorithm used. Some algorithms can be very simple but very computationally intensive, and some are the other way around.
First of all, just to clarify: I'm by no means an expert in computational methods or whatever. I'm just trying to find out the truth. I don't care whether Kemper uses ML or not, but I'm very interested in knowing the truth, and I don't want to just take someone's word for it unless it's documented - in that case I'll be happy to learn the correct version.
Here is a screenshot of a patent from Kemper (hope it can be seen... I apologize if it can't):
Yeah, this seems to be a loose definition of "machine learning". Just for clarity's sake, when I mentioned ML above I meant it in the sense of a subset of AI.
Kemper in particular iterates, tweaking the settings of its audio blocks until the difference from the source signal falls below a threshold. No AI involved.
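If it helps, here's what "iterate until the difference falls below a threshold" can look like as code - again just an illustrative guess with made-up blocks and parameter names, not Kemper's actual algorithm. Note there's no randomness anywhere:

```python
# Illustrative only: a deterministic "tweak settings until the error is small enough" loop.
import numpy as np

def block_chain(x, drive, tone):
    y = np.tanh(drive * x)                       # waveshaper block
    b = float(np.clip(tone, 0.01, 0.99))
    out, state = np.zeros_like(y), 0.0
    for i, s in enumerate(y):                    # one-pole lowpass ("tone") block
        state += b * (s - state)
        out[i] = state
    return out

def capture(test_signal, reference, threshold=1e-4, max_iters=200):
    settings = {"drive": 1.0, "tone": 0.5}       # fixed starting point, no random init
    step = 0.5
    for _ in range(max_iters):
        err = np.mean((block_chain(test_signal, **settings) - reference) ** 2)
        if err < threshold:                      # stop once the delta is below the threshold
            break
        for name in settings:                    # nudge each setting and keep what reduces the delta
            for delta in (step, -step):
                trial = dict(settings, **{name: settings[name] + delta})
                e = np.mean((block_chain(test_signal, **trial) - reference) ** 2)
                if e < err:
                    settings, err = trial, e
        step *= 0.9                              # smaller tweaks as it converges
    return settings, err

x = np.linspace(-1.0, 1.0, 2048)
ref = block_chain(x, drive=6.0, tone=0.25)
print(capture(x, ref))                           # same inputs -> always the same settings out
```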
"Iterates" is similar to specifying epochs, i.e. how many times the model is run through the training data.
No, not really. Without getting overly technical, the main difference is that Kemper's process is 100% deterministic - meaning, if you run the exact same input twice, you'll get the exact same profile out. There's no NN, no weights and no training of any kind involved, just a delta-reduction algorithm.
I can say that I have profiled the exact same setup multiple times in a row and the results were different.
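To make the "deterministic vs. trained" distinction above concrete, here's a tiny side-by-side on a toy problem - my own assumptions, not a claim about what any particular box does internally:

```python
# Toy contrast: a fixed deterministic search vs. gradient training from a random start.
import numpy as np

x = np.linspace(-1.0, 1.0, 1000)
y = np.tanh(3.0 * x)                              # "amp" behaviour to match

def deterministic_fit():
    # fixed grid, fixed order: returns exactly the same value on every run
    drives = np.linspace(0.5, 6.0, 56)
    errs = [np.mean((np.tanh(d * x) - y) ** 2) for d in drives]
    return drives[int(np.argmin(errs))]

def trained_fit(steps=100, lr=1.0):
    # gradient descent on one "weight" from a random starting point
    w = np.random.default_rng().normal(0.0, 1.0)
    for _ in range(steps):
        t = np.tanh(w * x)
        w -= lr * np.mean(2.0 * (t - y) * (1.0 - t ** 2) * x)
    return w

print("deterministic:", deterministic_fit(), deterministic_fit())   # identical every time
print("trained      :", trained_fit(), trained_fit())               # typically differ slightly
```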