Mooer Pitchbox - 19 msec of latency?

I’m sure that would help; I didn’t think of that. I work in the CG visual arts, and there’s a lot happening in that space where AI can “fill in the blanks,” “look ahead,” or do tedious, time-consuming building processes very fast: building CG character rigs, whole miles-long landscapes, enabling real-time effects in video creation, and so on. I’m sure audio is a different beast, but maybe it can help with latency issues.
Those are honestly best-case situations; they don't require latency-critical performance. Could someone make a more accurate, machine-learning pitch shifter? Probably, and maybe they will, but it won't help with latency, as Cliff explained. It might just have fewer artifacts.
 
I think this is why Roland did the divided pickup for the SY-1000. While the lower models did an admirable job of detecting pitch and doing the synth thing, it’s still faster and more accurate when you’ve got each string isolated via a divided pickup.
 
But what if AI got so good, it was able to interface with our brain, tap into those neural pathways, so it would know ahead of time what we planned to play, just enough so it could alter those pitches, as they were played?

We need to think outside the box with this stuff...! Lol(?)
 
I don't think anyone was under the impression that pitch shifting was a zero latency proposition.

The title of this thread is more about the 19 msec of it...

ESPECIALLY given the latency purists around here and on TGP who drool over the DigiTech Drop, which possibly has similar latency
 
AI wouldn't need to read our brains to predict what we'll play next. It'd be possible to guess what's coming if it had enough data to work with, and to synthesise what it assumes the right signal is going to be. Then it could compare that to the actual signal as it happens: if it assumed an E and we're a third of the way through a wave of the expected shape, all is good.

I can imagine that method being better until it makes a mistake, then the glitch being worse.
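The "compare the guess to the incoming partial wave" idea could be sketched something like this. This is only a toy illustration under big assumptions: a plain sine instead of a real guitar waveform (which has rich harmonic content), a 48 kHz sample rate, and made-up names like `prediction_error`. It fits an ideal sine at the expected pitch to the samples received so far and reports how far off the guess is:

```python
import numpy as np

def prediction_error(actual, f_expected, fs=48_000):
    """Least-squares fit of a sine at the expected frequency to the
    incoming samples; returns the normalized residual.

    0 means the partial wave matches the guessed pitch perfectly;
    a large value means the guess was wrong (cue the worse glitch).
    Fitting both sin and cos terms makes the fit phase-agnostic.
    """
    t = np.arange(len(actual)) / fs
    basis = np.column_stack([np.sin(2 * np.pi * f_expected * t),
                             np.cos(2 * np.pi * f_expected * t)])
    coef, *_ = np.linalg.lstsq(basis, actual, rcond=None)
    resid = actual - basis @ coef
    return np.linalg.norm(resid) / (np.linalg.norm(actual) + 1e-12)
```

With a third of a low-E cycle in hand, the right guess fits almost exactly while a wrong one leaves a large residual, which is exactly the "better until it makes a mistake" trade-off described above.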
 
I just had my last post's AI idea valued at $7.3million by the way, you guys better show me some respect.
 

A predictive engine based on an audio signal wouldn't necessarily need AI. Honestly, something like that might bring latency down slightly, but the accuracy would be much worse as well.

The AI implementation would be to train it on a specific player's style so it can predict, in real time, the next note the player is going to play with a high level of accuracy. Of course, once we have that level of compute in a portable enough format, the question becomes whether the actual guitarist is really that relevant any more...
 
I've not encountered any plugin with polyphonic pitch shifting faster than what's currently available in hardware. It may seem like there's no latency, but that's because the plugin reports its latency to your DAW, which in turn delays everything else to compensate.

Even something crazy like Zynaptiq's Pitchmap—where you can RE-CHORD entire band mixes in real*ish* time via MIDI input (!!!)—would drive Steve Vai up the wall.
 
It takes about 12 ms for the open low E on a guitar to complete one full cycle. That's the minimum time (latency) needed to detect that pitch, based purely on the math/physics.

Realize that pitch detection is not the same as pitch shifting ... and there are some interesting tricks, caveats, and pitfalls.
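That 12 ms figure falls straight out of the fundamental of the open low E (E2, about 82.41 Hz in standard tuning): one period is simply 1/f.

```python
# Period of the open low E string (E2 ~ 82.41 Hz, standard tuning).
# A pitch detector needs on the order of one full cycle before it
# can distinguish that fundamental, hence the ~12 ms physics floor.
f_low_e = 82.41                    # Hz
period_ms = 1000.0 / f_low_e       # one cycle, in milliseconds
print(round(period_ms, 1))         # → 12.1
```

Detuned or extended-range instruments make this floor worse: drop to B1 (~61.7 Hz) and one cycle is already over 16 ms.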
 
This is from another thread here, comparing Helix's Poly Capo X Fast vs. X Stable modes. I'm not really sure how to measure the latency here, as both come in at just under 1 msec to the first peak. I ran the same experiment with the hardware against a direct signal and got about the same results: after subtracting the converter latency from both, I got the same distance to the first peak for X Fast and X Stable.

How am I supposed to measure the latency? As shown in this thread, the Mooer had a MUCH longer and more obvious distance to the first peak.
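For what it's worth, one way to measure this without eyeballing first peaks is to cross-correlate the dry and processed recordings and read off the lag where the correlation peaks. This is a hypothetical sketch (the name `latency_ms` and the 48 kHz rate are my assumptions), and it works best when the processed waveform still resembles the dry one:

```python
import numpy as np

def latency_ms(dry, wet, fs=48_000):
    """Estimate the delay of `wet` relative to `dry` via cross-correlation.

    Returns the lag (in milliseconds) at which the full cross-correlation
    peaks; more robust than eyeballing 'distance to the first peak' when
    the pitch shifter reshapes the waveform.
    """
    corr = np.correlate(wet, dry, mode="full")
    lag = int(np.argmax(corr)) - (len(dry) - 1)
    return 1000.0 * lag / fs
```

A pitch shifter changes the waveform's shape, so the correlation peak gets broader and lower than with a pure delay, but the peak location still gives a defensible single latency number to compare boxes with.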

 