Atomic Tonocracy (Inc NAM support)

That Kemper patent isn't saying that the Kemper uses machine learning. It's to patent-protect profiling, whatever the method may be, machine learning being one of them.
 
I can say that I have profiled the exact same setup multiple times in a row and the results were different.
Talking about different things here. You're saying you gave it the same signal chain and the results were different, and Lysander's saying if you gave the Kemper bit-by-bit (zeros and ones) the same input, the output would be the same.
 
Talking about different things here. You're saying you gave it the same signal chain and the results were different, and Lysander's saying if you gave the Kemper bit-by-bit (zeros and ones) the same input, the output would be the same.
So, if you profile an analog pedal the results will be the same each time?

Edit: Or is there a way to prove this?
 
So, if you profile an analog pedal the results will be the same each time?
No. Again, talking about different things.

Record a signal through an analog pedal, save it as a .wav file. Then do it again. They sound the same.
But open both .wav files in a text editor to see and compare the PCM audio data inside. They're not going to be bit-by-bit identical.
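If you want to check that yourself, here's a minimal sketch (assuming you've saved the two passes as take1.wav and take2.wav at the same sample rate; the file names are just placeholders):

[CODE=python]
# Compare two recordings of the "same" signal, sample by sample.
import numpy as np
from scipy.io import wavfile

rate1, a = wavfile.read("take1.wav")   # hypothetical file names - use your own captures
rate2, b = wavfile.read("take2.wav")
assert rate1 == rate2, "sample rates differ"

n = min(len(a), len(b))                # trim to the shorter take
a = a[:n].astype(np.float64)
b = b[:n].astype(np.float64)

print("bit-identical:", np.array_equal(a, b))   # almost certainly False

null = a - b                           # the null-test residue
rms = np.sqrt(np.mean(null ** 2))
peak = np.max(np.abs(a))
print("null residue:", round(20 * np.log10((rms + 1e-12) / (peak + 1e-12)), 1), "dB below peak")
[/CODE]

Two passes through an analog pedal will sound the same, but they won't be bit-identical; the size of the null residue tells you how far apart they actually are.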
 
No. Again, talking about different things.

Record a signal through an analog pedal, save it as a .wav file. Then do it again. They sound the same.
But open both .wav files in a text editor to see and compare the PCM audio data inside. They're not going to be bit-by-bit identical.
How do we prove it then?
 
How do we prove it then?
Exactly.

I read comments stating that Kemper doesn't use ML. The arguments are:

- The patent doesn't say it uses ML, it just protects the use of it. That's correct. But in just the same way, it doesn't say it uses a deterministic method... it just protects the use of that too. So...

- The "fact" that ML would require more processing power. Well, not all ML algorithms need the same amount of power. And Kemper seems to be less accurate than NAM or Tonex (it's actually been shown in null tests)... So why couldn't a less precise ML model run on less powerful hardware?

I've been speaking this afternoon with the guy who created NeuralPi (that guy knows what he's talking about), and he told me QC and Headrush use AI. He wasn't sure about Kemper.

Not wanting to sound disrespectful, but the arguments in this thread don't seem strong enough to me. I haven't seen good arguments that convince me Kemper doesn't use ML. For me, unless proven otherwise, he's the pioneer of this kind of technology (applied to guitar amp simulation).
 
How do we prove it then?
Processing power.
Exactly.

I read comments stating that Kemper doesn't use ML. The arguments are:

- The patent doesn't say it uses ML, it just protects the use of it. That's correct. But in just the same way, it doesn't say it uses a deterministic method... it just protects the use of that too. So...

- The "fact" that ML would require more processing power. Well, not all ML algorithms need the same amount of power. And Kemper seems to be less accurate than NAM or Tonex (it's actually been shown in null tests)... So why couldn't a less precise ML model run on less powerful hardware?

I've been speaking this afternoon with the guy who created NeuralPi (that guy knows what he's talking about), and he told me QC and Headrush use AI. He wasn't sure about Kemper.

Not wanting to sound disrespectful, but the arguments in this thread don't seem strong enough to me. I haven't seen good arguments that convince me Kemper doesn't use ML. For me, unless proven otherwise, he's the pioneer of this kind of technology (applied to guitar amp simulation).
If you want to use ML as a trendy marketing term for any sort of algorithmic solution, then sure, it's a shiiitty ML (but a great algorithm). Otherwise, no, and Lysander's already explained why.
 
First of all, just to clarify: I'm by no means an expert in computational methods or whatever. I'm just trying to find out the truth. I don't care whether Kemper used ML or not, but I'm very interested in knowing the truth, and I also don't want to "just believe" the word of someone saying something... unless it's documented to be true, in which case I'll be happy to learn the right version:

Here is a screenshot of a patent from Kemper (hope it can be seen... I apologize if it can't):

[Attachment: screenshot of the Kemper patent]
Just because a patent issued to Christoph Kemper says "said sound converts by means of machine learning" does not mean that Christoph has put any machine learning in the Kemper device he sells.
 
I remember when all my favorite guitarists argued about machine learning and workflow...





:unsure:
 

I can say that I have profiled the exact same setup multiple times in a row and the results were different.

That Kemper patent isn't saying that the Kemper uses machine learning. It's to patent-protect profiling, whatever the method may be, machine learning being one of them.

Talking about different things here. You're saying you gave it the same signal chain and the results were different, and Lysander's saying if you gave the Kemper bit-by-bit (zeros and ones) the same input, the output would be the same.

Just as a pure side note of personal interest.

The KPA is now +12 years old ... yet in all this time .... apart from some "musings" / "hypotheticals" by another company back at the start that were refuted directly by C.K .... I've yet to read anyone explain precisely, and objectively-correctly how it does what it does .... always intrigued me.

That's all :)
 
So, if you profile an analog pedal the results will be the same each time?

Edit: Or is there a way to prove this?
No. It is like the old saying that a man can never cross the same river twice: the man is different, the river is different. At the atomic level, what is going on inside transistors and inside a microphone's diaphragm, circuitry and transformers is not exactly reproducible, so you will get some level of variation.
 
No. It is like the old saying that a man can never cross the same river twice: the man is different, the river is different. At the atomic level, what is going on inside transistors and inside a microphone's diaphragm, circuitry and transformers is not exactly reproducible, so you will get some level of variation.
From a literal perspective, sure.

And from a literal perspective, feeding an ML system the exact same data twice should produce the same response both times, as it is building from the same system.
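In the trained-and-frozen sense that's easy to demonstrate: once the weights are fixed, pushing the exact same bits through the model twice is just the same arithmetic twice. A toy sketch (the little network here is a made-up stand-in, not any real capture model):

[CODE=python]
# Inference with fixed weights is deterministic: same input in, same output out.
import torch

torch.manual_seed(0)
model = torch.nn.Sequential(           # stand-in for any trained capture model
    torch.nn.Linear(64, 64),
    torch.nn.Tanh(),
    torch.nn.Linear(64, 1),
)
model.eval()                           # eval mode: weights fixed, no training going on

x = torch.randn(1, 64)                 # the "same input", fed through twice
with torch.no_grad():
    y1 = model(x)
    y2 = model(x)

print(torch.equal(y1, y2))             # True: bit-identical output
[/CODE]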
 
And from a literal perspective, feeding an ML system the exact same data twice should produce the same response both times, as it is building from the same system.

No - and by definition, pretty much. ML models for audio in particular are inherently stochastic.
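The stochastic part is the training, to be clear: weights start from a random initialisation and get fitted by stochastic gradient descent, so two training runs on exactly the same data generally land on (at least slightly) different weights. A toy illustration of the idea, not anyone's actual trainer:

[CODE=python]
# Two training runs on identical data, different random seeds:
# the fitted weights differ, even though the data never changed.
import torch

def train_once(seed, x, y, steps=300):
    torch.manual_seed(seed)                        # random init differs per run
    net = torch.nn.Sequential(
        torch.nn.Linear(8, 16), torch.nn.Tanh(), torch.nn.Linear(16, 1)
    )
    opt = torch.optim.Adam(net.parameters(), lr=0.01)
    for _ in range(steps):
        opt.zero_grad()
        torch.mean((net(x) - y) ** 2).backward()   # plain MSE fit
        opt.step()
    return torch.cat([p.detach().flatten() for p in net.parameters()])

x, y = torch.randn(256, 8), torch.randn(256, 1)    # the "same data" both times
w1 = train_once(1, x, y)
w2 = train_once(2, x, y)
print(torch.allclose(w1, w2))                      # False: a different model each run
[/CODE]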

PS: I don't even know why y'all seem to care so much about this topic :LOL: ML or algorithmic, we've long had excellent, accurate profiling alternatives on the market.
 
No - by definition, pretty much. ML models for audio in particular are inherently stochastic.

PS: I don't even know why y'all seem to care so much about this topic :LOL: ML or algorithmic, we've long had excellent, accurate profiling alternatives on the market.

For me, I find the Kemper algorithm to have a random aspect to it. Of course, if that isn't true I would love to see proof.

As far as I know, ML is also an algorithm.
 
From a literal perspective, sure.

And from a literal perspective, feeding an ML system the exact same data twice should produce the same response both times, as it is building from the same system.
I guess it depends on the initial conditions. There is a chaotic element to anything involving matter. You know the old idea that a hurricane might be the result of a butterfly flapping its wings in the Amazon. This is why weather forecasting becomes progressively worse (i.e. less accurate) as time advances: a tiny difference in initial conditions can lead to large divergences as the system being modelled moves forward.

The atmospheric weather system is just a massive 3D grid where you have pressure, temperature and wind speed measurements at many points. Using standard equations (probably with some tweaks) you can run the model forward in time in stepwise fashion on a supercomputer, and the state after each time interval is the direct result of the conditions at the previous time point. After many iterations, the results can diverge wildly if the initial conditions are even slightly changed.
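You can see the same effect in a one-line toy system: nudge the starting value by a billionth and, after enough iterations, the two trajectories have nothing to do with each other. (The logistic map below is just an illustration of sensitivity to initial conditions, nothing to do with weather models specifically.)

[CODE=python]
# Sensitivity to initial conditions: two nearly identical starting points
# diverge completely after enough iterations of a chaotic map.
x_a = 0.4
x_b = 0.4 + 1e-9                       # initial conditions differ by one billionth

for step in range(1, 61):
    x_a = 3.9 * x_a * (1.0 - x_a)      # logistic map in its chaotic regime (r = 3.9)
    x_b = 3.9 * x_b * (1.0 - x_b)
    if step % 10 == 0:
        print(f"step {step:2d}:  {x_a:.6f}  vs  {x_b:.6f}")
[/CODE]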
 
Just as a pure side note of personal interest.

The KPA is now +12 years old ... yet in all this time .... apart from some "musings" / "hypotheticals" by another company back at the start that were refuted directly by C.K .... I've yet to read anyone explain precisely, and objectively-correctly how it does what it does .... always intrigued me.

That's all :)
What are you talking about? CC's explained that the Kemper works by a Wiener-Hammerstein model, and CK's never refuted that. The first Kemper patent basically tells you CC's assertion is correct by describing a filter-nonlinearity-filter model.
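For anyone wondering what that means: a Wiener-Hammerstein model is just a linear filter, then a static (memoryless) nonlinearity, then another linear filter. Roughly: input EQ, then a clipping stage, then output EQ / speaker-ish rolloff. A bare-bones sketch of the structure only (the coefficients and the tanh curve are arbitrary placeholders, not anything Kemper-specific):

[CODE=python]
# Wiener-Hammerstein structure: linear filter -> static nonlinearity -> linear filter.
import numpy as np
from scipy.signal import lfilter

def wiener_hammerstein(x, b_pre, a_pre, b_post, a_post, drive=5.0):
    pre = lfilter(b_pre, a_pre, x)      # input filter (shapes what hits the "tube")
    nl = np.tanh(drive * pre)           # static nonlinearity (the clipping curve)
    return lfilter(b_post, a_post, nl)  # output filter (post-EQ / speaker-ish rolloff)

# Arbitrary example coefficients: gentle pre-emphasis in, one-pole low-pass out.
b_pre, a_pre = [1.0, -0.95], [1.0]
b_post, a_post = [0.2], [1.0, -0.8]

x = np.sin(2 * np.pi * 220 * np.arange(48000) / 48000)   # 1 s of 220 Hz at 48 kHz
y = wiener_hammerstein(x, b_pre, a_pre, b_post, a_post)
[/CODE]

Profiling, in that framing, would mean estimating the two filters and the clipping curve from the amp's response to the test signals.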
 
What are you talking about? CC's explained that the Kemper works by a Wiener-Hammerstein model, and CK's never refuted that. The first Kemper patent basically tells you CC's assertion is correct by describing a filter-nonlinearity-filter model.
OK ... hey ... news to me ..... what is a Wiener-Hammerstein model?
 