Fractal Talk

I was sent a threatening letter by Kemper's lawyers after my patent was awarded. The risk of litigation needs to be considered.

Of all the companies to intimidate, it’s weird they’d pick Fractal. NDSP and IK compete much more directly in Kemper's space. Maybe because Fractal actually has some financial stability but not the same stable of lawyers that IK might have…

I also wonder if that’s the reason Line 6 hasn’t jumped into this game yet.
 
I feel a Sprockets intervention happening very soon
 
I also wonder if that’s the reason Line 6 hasn’t jumped into this game yet.
I would say yes, it's Kemper's only trick and he will not share it without a fight.
But that raises the question of NDSP: did Kemper threaten them too?
 
I don’t know much about machine learning, but I’m guessing the computer basically tries a whole bunch of settings in the amp model to get close, then applies the IR filter for overall EQ. I’d guess that’s where Cliff may be headed. You can still use all the features of the Fractal then.

It's actually a complete black box - there are no amps or settings or underlying audio models. It's a huge number of iterations that determine how to reproduce an output based on an input - hence why something like a GPU is needed to actually solve the problem. In fact, the ML models can be used for other kinds of ML modeling and aren't even specific to audio. There aren't any post filters for EQ matching or anything like that; it's all machine "magic" that makes it work. Hardware like Kemper and QC do it differently, which is why they can do it faster and do it onboard, at the cost of some accuracy.
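A tiny numpy sketch of that idea: treat the "amp" as an unknown input-to-output mapping and let gradient descent iterate until a small network reproduces it. Everything here (the tanh clipper standing in for a real amp capture, the network size, the learning rate) is invented for illustration, not any product's actual trainer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "amp": a simple tanh clipper. A real capture would use
# recorded input/output audio from the actual amplifier.
x = rng.uniform(-1, 1, size=(2048, 1))
y = np.tanh(3.0 * x)

# Tiny one-hidden-layer network: y_hat = tanh(x W1 + b1) W2 + b2.
# It never "knows" what the device is; it only sees input/output pairs.
W1 = rng.normal(0, 0.5, (1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)

lr = 0.1
for step in range(5000):                  # the "huge number of iterations"
    h = np.tanh(x @ W1 + b1)              # hidden activations
    y_hat = h @ W2 + b2
    err = y_hat - y
    loss = np.mean(err ** 2)
    # Backpropagation by hand (mean-squared-error gradient)
    g_y = 2 * err / len(x)
    g_W2 = h.T @ g_y;  g_b2 = g_y.sum(0)
    g_h = g_y @ W2.T * (1 - h ** 2)
    g_W1 = x.T @ g_h;  g_b1 = g_h.sum(0)
    W2 -= lr * g_W2;   b2 -= lr * g_b2
    W1 -= lr * g_W1;   b1 -= lr * g_b1

print(f"final MSE: {loss:.6f}")
```

On a GPU the same loop runs over far larger models and datasets, which is where the extra accuracy of offline trainers comes from.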

As a side note, models like NAM also reproduce history on a shortish (100 ms or so) scale, so even some stuff that QC/Kemper can't do well (like sag or other short transient effects) is more reproducible at this point.
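For a rough sense of where a figure like 100 ms comes from: WaveNet-style capture models see past samples through stacks of dilated convolutions, and the receptive field is easy to compute. The dilation schedule below is an illustrative guess, not NAM's actual configuration.

```python
# Receptive field of a stack of dilated 1-D convolutions, the kind of
# architecture used by WaveNet-style capture models.
SAMPLE_RATE = 48_000
kernel_size = 3
dilations = [2 ** i for i in range(10)] * 2   # 1, 2, ..., 512, twice

# Each layer extends the receptive field by (kernel_size - 1) * dilation.
receptive_field = 1 + sum((kernel_size - 1) * d for d in dilations)
ms = 1000 * receptive_field / SAMPLE_RATE
print(f"{receptive_field} samples = about {ms:.1f} ms of history")
```

So a modest stack of layers already spans tens of milliseconds at 48 kHz, enough to capture sag-like behavior that a purely memoryless model misses.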
 
I wrote a bit of TensorFlow code before that can cluster audio clips together. I got it to tell me whether a recording was a palm mute, a sustained note, or even a 'ghost pluck'.

At a super simple idiot level, you basically derive your analysis data, normalize it, and then train a Keras model. If you then derive the same data from new audio, it'll predict what the content is.

It was really the audio equivalent of the ML Image Detection 101 stuff that everyone does to learn ML, but it was quite cool!
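A minimal numpy sketch of that pipeline (derive analysis data, normalize it, classify new audio), with a nearest-centroid lookup standing in for the Keras model and synthetic "clips" standing in for real recordings; every name and number here is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
SR, N = 8_000, 8_000   # 1-second synthetic "clips" (stand-ins for real audio)

def clip(kind):
    t = np.arange(N) / SR
    tone = np.sin(2 * np.pi * 220 * t)
    if kind == "palm_mute":                # fast-decaying envelope
        return tone * np.exp(-12 * t) + 0.01 * rng.normal(size=N)
    return tone * np.exp(-0.5 * t) + 0.01 * rng.normal(size=N)   # sustained

def features(x, frames=16):
    # "Derive your analysis data": a per-frame RMS envelope...
    rms = np.sqrt(np.mean(x.reshape(frames, -1) ** 2, axis=1))
    return rms / rms.max()                 # ...then normalize it

labels = ["palm_mute", "sustained"]
# "Training": average the features of 20 example clips per class.
# A Keras model fit on the same features would replace this step.
train = {k: np.mean([features(clip(k)) for _ in range(20)], axis=0)
         for k in labels}

def predict(x):
    f = features(x)
    return min(labels, key=lambda k: np.sum((f - train[k]) ** 2))

print(predict(clip("palm_mute")), predict(clip("sustained")))
```

Same shape as the image-classification 101 exercises: features in, label out, and new audio is classified by whichever class it sits closest to.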
 
I also wonder if that’s the reason Line 6 hasn’t jumped into this game yet.
The patent only covers a specific aspect, and Fractal filed a very similar patent before it. It's not going to stop the company from doing captures; Line 6 has said they just don't want to.

Also, ML doesn't need to be run on a GPU. A GPU is just very, very good at tasks that can be highly parallelized, and GPUs now have ML acceleration hardware onboard, so they make perfect candidates for doing that stuff fast.

Kemper's lawyers are probably sending cease and desist letters without ever intending for anyone to contest it and take it to court. Just straight up scare tactics.

With Fractal's patent being approved before Kemper's, even if they figured out the same stuff without knowledge of each other, it would probably result in Kemper's patent being invalidated if taken to court.

But neither of these companies is big, and they probably don't want to throw a lot of money into lawyers and court battles, so hopefully they find some reasonable solution.
 
It's actually completely black box - there are no amps or settings or underlying audio models. It's a huge amount of iterations that determine how to reproduce an output based on an input - hence why something like a GPU is needed to actually solve the problem. Actually, the ML models can be used for other kinds of ML modeling and aren't even specific to audio. There aren't any post filters for EQ matching or anything like that, it's all machine "magic" that makes it work. Hardware like Kemper and QC do it differently, which is why they can do it faster and do it onboard at the cost of some accuracy.

As a side note, models like nam also reproduce history on a shortish (100 ms or so) scale, so even some stuff that qc/kemper can't do well (like sag or other short transient effects) are more reproducible at this point.
Ah nice to see you came around to thinking the QC does it differently! :beer
 
Ah nice to see you came around to thinking the QC does it differently! :beer

I mean, TBH, there's no way I know that one way or another, but I do think some of the claims have been misleading in that particular area, even if not totally wrong. Whatever it does, it does it really well and really fast - just not as well as the larger GPU trainers. So my guess is there's something baked in that helps reduce the amount of work it needs to do (as opposed to training a tiny model from scratch, which wouldn't sound as good). Or they're just misrepresenting what "training" means and taking advantage of the common definition.
 
Wasn't your patent awarded first? If so, what makes them think they have the right to bitch about it at that point?
The OG Kemper patent was...I don't remember when and am not gonna go look it up. But quite a while ago. One can receive a patent for a new/improved idea, even if practicing that new idea might also require infringement of the earlier patented unimproved idea.
 
🤣 I'm not placing an order but put my name in when I was debating on my next purchase. More than happy with my III

I wonder what percentage of folks are just like you?

Frackle probably has a good idea of what the rate runs… kind of like airlines, which know how many people actually show up for their seats.

Still a tough management problem (and yet people complain about the wait lists lol)
 