The Latency Thread?

Yeah, but "differentiation" is the keyword here: you need two sources at play. Comb filtering emerges from phase interaction between two identical signals that propagate along different paths, at different speeds, and get merged back together.

Not what people refer to when discussing latency.

It's at least sort of related, if on a different "layer" - such as the way you feel your pick hitting the strings vs. the sound hitting your eardrum. Those are two signals you "differentiate" between. It's just something other than phasing.
 
I will confess though that when I run my Helix in 4-cable-method, I do feel a slight "thing" ... not sure whether it's latency or tonal artifacts from two AD/DA conversions. Never really tested it. I just know that I prefer to run my Helix right in front of the amps and not in 4CM.

Could pretty much be latency - even if there's just around 2ms added.

As an example, something I detected more or less accidentally (and it's in fact something I'm not particularly happy about for a number of reasons): At least on a "good day" and when playing through headphones, I can pretty much reliably detect the difference between running my interface at 32 vs. 64 samples. That's 4.6ms vs. 6.0ms (measured with Oblique's RTL tool, so these are absolutely accurate numbers). And yes, I was like "WTF?" when I found out about it. I'm not Mr. In-The-Pocket by any means.
Thing is, I usually get along fine with values around or below 10ms (using headphones that is). Perhaps even more. And I'm sure I had way more on my cans in the first days of modeling, playing V1 PODs (the Blackface had really noticeable latency) through a first generation Aviom system (those came with >4ms on their own) and possibly whatever stuff added by earlier digital consoles.
So, being able to detect very small differences at very low base levels came as a surprise. And not a pleasant one, tbh. Because, frankly, I just don't know why that is. Something seems to feel different, but I can't put my finger on it. Maybe it's just the very range I'm switching my references in. From "what I hear and how I pick is the same" to "it's somewhat different". But do I compensate? And what's the new reference then - what I'm listening to, or my pick feel? I absolutely can't tell. Once latencies get a lot bigger, I usually manage to concentrate on either the feel of my pick or the sound coming out of monitors/phones. But at those low latencies, I clearly ain't. So, as said, I have no idea why that is - and I'm not too happy about it, especially as that's in the ballpark of the overall latencies you get when nesting digital modeling devices, something I'd rather not think about.
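
For the curious, here's the back-of-envelope behind those two readings sitting ~1.4ms apart - assuming my interface runs at 44.1kHz (an assumption, I didn't note the rate) and that the buffer is applied once on the way in and once on the way out, with converter/driver overhead staying constant:

```python
# Rough sketch: extra round-trip latency from doubling the buffer size.
# Assumes 44.1 kHz and one buffer each way (in + out); AD/DA and driver
# safety buffers are treated as a fixed overhead that stays the same.
SAMPLE_RATE = 44100  # Hz (assumed)

def buffer_ms(samples: int) -> float:
    """Duration of one buffer in milliseconds."""
    return 1000.0 * samples / SAMPLE_RATE

extra = 2 * (buffer_ms(64) - buffer_ms(32))
print(f"Expected RTL difference, 32 vs. 64 samples: {extra:.2f} ms")
# -> ~1.45 ms, right in line with the measured 6.0 - 4.6 = 1.4 ms
```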
 

Latency was the entire reason I sold my Apollo x8p. For the money, the latency performance was pretty woeful, tbh. The PreSonus Quantum is MUCH better.
 
There's always lots of verbiage in threads like this, and there's meaningful content interspersed with irrelevant and/or incorrect statements. I'll itemize the basics here for reference.

1. Latency is undetectable whenever a listener is only exposed to the output of a system. That's a Really Good Thing, too, since the total latency when we listen to a recording can now exceed 100 years.

2. When a signal is mixed with a delayed version of itself, there will be multisource interference. For example, when a signal is mixed with the same signal delayed by .5ms (note the decimal point), there will be a broad magnitude null centered at 1kHz (see the sketch at the end of this post). This is painfully audible, but the ability to hear it has nothing to do with your ability to detect latency.

3. Many purely acoustic instruments have substantial latency. A piano can have latency - defined as the time from the player keying a note until the hammer strikes the string - of as much as 200ms and seldom less than ca. 20ms. See: Touch and temporal behavior of grand piano actions. A pipe organ can have latencies of multiple seconds.

4. If you're singing or playing a wind instrument or horn and monitoring yourself with cans or IEMs, there will always be mixing of the sort described in 2. above: you hear your voice or instrument via bone conduction as well as through your cans or IEMs. Any latency in one path relative to the other will cause multisource interference.

5. When a player initiates an event with physical movement - plucking a guitar string, pressing a piano key, drawing a bow across violin strings, etc. - they will experience latency viscerally: some amount of time elapses between the stimulus (physical action) and the response (audible sound). If that amount of time is sufficient, a player will be able to detect it. If you can hear both acoustic and amplified sound from your instrument, there will, in addition to latency, be the multisource interference described in 2. above.

While relatively small amounts of latency may not by themselves cause a problem, the addition of a small amount of latency to a system that already has latency may push a given player beyond a perceptual threshold and become noticeable. 10ms may not be a problem for a given individual, but 12-13ms could well be. That by itself is a strong argument in favor of quantifying every contribution to latency in a signal processing/recording/monitoring system.
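
As a quick numerical check on 2. above, here's a minimal sketch of the mix-with-delayed-copy magnitude response (the test frequencies are arbitrary, chosen only to show the nulls):

```python
import numpy as np

# Mixing x[t] with x[t - tau] gives |H(f)| = |2 * cos(pi * f * tau)|.
# With tau = 0.5 ms, the first null lands at f = 1 / (2 * tau) = 1 kHz.
tau = 0.5e-3  # delay in seconds
freqs = np.array([500.0, 1000.0, 2000.0, 3000.0])  # test frequencies, Hz
gains = np.abs(2.0 * np.cos(np.pi * freqs * tau))
for f, g in zip(freqs, gains):
    print(f"{f:6.0f} Hz -> gain {g:.3f}")
# 1000 Hz and 3000 Hz come out at ~0: nulls at odd multiples of 1 kHz.
```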
 
3. Many purely acoustic instruments have substantial latency. A piano can have latency - defined as the time from the player keying a note until the hammer strikes the string - of as much as 200ms and seldom less than ca. 20ms.
This is true. I sampled a Yamaha baby grand for ROLI when I was there, back in 2018 before the world went to shit. Not only are grand pianos 'variably latent' (higher notes don't seem to suffer from this as much as lower notes do), but they're also actually not all that dynamic. Certain areas of the pitch range - again, mostly the lower octaves - don't respond to soft touches of the keys.

Mic the thing up, give the player headphones, and you introduce even more latency through the record+playback system.

This got me to realise that what we're all playing in a lot of these sample libraries is quite often not realistic - but as long as it seems real, we're okay with it.

Latency is a bit like that too. There's a sort of macro window that all humans generally fall into, in terms of acceptability, and then there are all these micro windows that experts in particular domains fall into. Drummers seem to be the most pernickety, even more so than guitarists.

Drums are actually my domain speciality, and from tests we've done, most drummers get annoyed by whole-system RTL latencies of around 20ms. That pool of drummers starts to fall away drastically as you approach 10ms, with changes of roughly 2ms being the deciding factor in a lot of cases.

An electronic drum kit necessarily has some amount of latency built into the system, and until we move away from piezo-based technology, that probably won't change.

Something else that's a bugbear of mine... play a real hi-hat and open the pedal a smidge after hitting it, and it will still sound like an 'open' hat... a lot of e-kits don't do this very well, and a lot of desktop software doesn't either.
 
It is phase misalignment caused by latency.
Or, latency you become aware of due to phase issues.

No. It is not.

Apologies for being so stubbornly pedantic about this topic, but I see people confusing phase/pitch issues with "latency" all the time on these forums, and the PTSD from my days as an EE student just kicks in. Latency (as we think about it in audio) has nothing to do with phase misalignment. Zero. Nada. Nichts.
 
Of course it is. The kicker is that, when you apply a delay to signal A to turn it into signal B, they are no longer the same signal. And then they reach your ears at the exact same time: there's zero latency at play.

I mentioned an example a while ago which, while not perfect, better illustrates latency as a phenomenon related to absolute, not relative, timing: open Google, type "metronome" and listen to it ticking at 100bpm for a while. Clap along with it. Now switch to 101bpm. Can you tell a difference? Could you, if someone else was setting the tempo without you knowing?

Because the difference between 100 and 101bpm is an extra ~6ms per beat.
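
The arithmetic, in case anyone wants to check (nothing assumed here beyond the two tempos above):

```python
# Beat period at each tempo: 60,000 ms divided by beats per minute.
beat_100 = 60_000 / 100  # 600.00 ms per beat
beat_101 = 60_000 / 101  # ~594.06 ms per beat
print(f"Difference per beat: {beat_100 - beat_101:.2f} ms")  # ~5.94 ms
```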

I think what better describes your particular example is a relativistic comparison in absentia…

Fancy words meaning that, much like people without perfect pitch… if you don't have a good reference pitch in your head (or a good sense of time re: a metronome), you would never be able to distinguish between the two different pitches or tempos sans reference.

A really good example of this is an artist looking at a wall; there are all kinds of lighting changes and colors - and copying those colors exactly onto a canvas is practically impossible.

Photorealism doesn’t happen with relativistic color comparisons by the human eye.

Vermeer figured out how to do this using a mirror in the 17th century… so he had a direct comparison on the canvas.

Tim’s Vermeer, made by Penn and Teller, is a terrific movie that goes into great detail on how this was achieved.
 
I always found it funny how the same people obsessed with latency tend to be the ones who'll build a pedalboard with 5 digital stompboxes, just to plug it into a tube amp.
I think many, like me, were traumatized by early units and also by later budget units. They just didn't have the horsepower - algorithms, IR processing, memory and more - needed to deliver a device without those compromises. In most cases they couldn't, whether due to a price point or the performance level of the affordable technology available. As such, some users became ultra-sensitized to the difference in "feel/immediacy" between these devices and anything from a small practice amp up to a stack.

The discussion has been robust, subjective and passionate, with a great many ideas and perspectives being expressed. I have a feeling those concerned should note that it's become a marketing move to quote the latency metric in product specs.

They *know* this is on the radar for buyers. Nature will take its course and we should be golden in a product cycle or, most likely, two.

EDIT: I haven’t read the thread so please excuse me if I’m rehashing points made. Just wanted to toss in my two cents.
 
Seems like there are a few definitions of latency going on here. I'm speaking of the time between when a sound goes into a black box and when it comes out of the black box.

This most certainly can cause phase issues. It's exactly how you make a comb filter.
 

No, you need two signals for a comb filter: the original, and a delayed copy. Latency alone (audio, in particular) will not get you there.
 
Yes - you have the direct signal traveling acoustically through the air and the delayed signal going through the black box. That 13-20ms latency of early budget interfaces was just about perfect for making drummers think their kicks and toms were disappearing as they played them, for instance.
 

Look, it doesn't matter. The "latency" of a comb-filter arrangement is zero, regardless of how long a delay you use in it. Whatever propagation delay you experience (acoustic, in this case) happens on top of it. Or, to be more technical: the behavior of a comb filter depends on the relative propagation delays of two otherwise identical signals, and the latency of the whole thing is dictated by the fastest one alone.

But yes, comb effects were particularly jarring with older digital gear. It's one of those things that's still a bane of live mixing today, along with plain old acoustic propagation delays. For large enough venues there's even the concept of "negative latency", where your wedge/IEM will play you signals before you might expect to hear them - for example, if your amp is too far away.
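
A rough sketch of the acoustic side of that, assuming sound travels at ~343 m/s (roughly 2.9ms per meter); the distances are made up purely for illustration:

```python
# Acoustic propagation delay vs. distance, at ~343 m/s (room temperature).
SPEED_OF_SOUND = 343.0  # m/s (assumed)

def delay_ms(distance_m: float) -> float:
    """Time for sound to travel the given distance, in milliseconds."""
    return 1000.0 * distance_m / SPEED_OF_SOUND

for d in (1.0, 5.0, 10.0, 30.0):
    print(f"{d:4.0f} m -> {delay_ms(d):5.1f} ms")
# A backline amp 10 m away adds ~29 ms of air delay; a near-instant IEM
# feed arrives first - hence the "negative latency" feeling.
```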
 
Look, it does matter. We're measuring latency in this thread.

No, the signals don't have to be otherwise identical - see two mics on a guitar cab.

Please - you are derailing this thread with some bizarre need to interject who knows what.
 
Fwiw, all latency, regardless of whether it's phasing, comb filtering, "real" delays (as in featuring higher values) or tactile issues, only exists because of time misalignments. They're all just different sides of pretty much the same story. And sometimes they come combined. When a singer has monitoring issues while tracking, is that tactile latency? Or is it just the comb filtering between the "skull signal" and the headphone signal?
What I'm saying is that it's a good idea to be aware of all of them.

And oh, we haven't even talked about human latency compensation yet. And how it is quite a lot different whether we compensate for hardware or plugin latency in a tracking situation.
 
For large enough venues there's even the concept of "negative latency", where your wedge/IEM will play you back signals before you might expect to hear them - for example, if your amp is too far away.
This has been a thing as long as there've been floor wedges on larger stages. I experienced it ca. 1978, and I can say from experience that it takes a bit of getting used to.
 