It depends on the source. As mentioned, some sources (clean, mid-gain, and high-gain) are easy to nail; others (also clean, mid-gain, and high-gain) are more difficult and require more DSP. But it also depends on the listener. Put a Brayden in our studio (99.5% of users) and they'll fail A/B/X every time. Put a Stan in our studio (0.5% of users) and they'll still fail A/B/X most of the time.
The problem is that 100% of users on gear forums think they're a Stan.
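For anyone who hasn't run one: in an A/B/X trial the listener hears A, hears B, then hears X (secretly one of the two) and has to say which it was. Guessing alone scores 50% per trial, so "passing" means scoring well enough that chance is an unlikely explanation. A minimal sketch of that math (the 12-of-16 example below is hypothetical, not a Line 6 result):

```python
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """One-sided binomial p-value: the probability of getting at least
    `correct` answers right out of `trials` by pure guessing (50% each)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# 12 correct out of 16 trials: only ~3.8% likely by luck alone,
# so this listener probably really can hear a difference.
print(round(abx_p_value(12, 16), 4))  # → 0.0384

# 8 of 16 is exactly what coin-flipping produces on average: a fail.
print(round(abx_p_value(8, 16), 4))
```

The point of the percentages above is that almost nobody who claims golden ears actually clears that bar under blind conditions.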
No. In certain cases with certain models, modeling (as well as ML-based capture tech like NAM) can be effectively identical to the original tube amp. Expensive lab measurements and null tests don't lie; our stupid ears, however, lie to us all the time.
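A null test, for the uninitiated: play the same signal through the amp and the model, flip the polarity of one recording, sum them, and measure what's left. If they're truly identical, everything cancels and you get silence; the deeper the residual, the closer the match. A toy sketch of the measurement (the sine-wave "amp" and the 0.999 gain factor are made-up stand-ins, not real capture data):

```python
import numpy as np

def null_residual_db(a: np.ndarray, b: np.ndarray) -> float:
    """RMS level of the difference (a - b), in dB relative to a.
    0 dB = no cancellation at all; very negative (e.g. below -60 dB)
    = the two signals are effectively identical."""
    rms = lambda x: np.sqrt(np.mean(x ** 2))
    return 20 * np.log10(rms(a - b) / rms(a))

t = np.linspace(0, 1, 48_000, endpoint=False)   # 1 second at 48 kHz
amp = np.sin(2 * np.pi * 440 * t)               # stand-in for the tube amp
model = amp * 0.999                             # stand-in for a very close model
print(round(null_residual_db(amp, model), 1))   # → -60.0
```

Real-world nulls are messier (time alignment, level matching, and noise all matter), but the principle is the same: the residual is a measurement, not an opinion.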
When I say a particularly squirrely amp might (???) require upwards of 300% more DSP in the future, that doesn't mean Helix Core 3.80 only gets us 25% of the way there. It might get us 96-97% of the way there right now, and that extra DSP may be what's required to get us to 99.9%. But it may not matter, because almost everyone who complains about modeling isn't complaining about the missing 3-4%. They're complaining that modeling can't make their cheap PA speaker behave like a 4x12, or that a hyper-accurate, null-test-passing model doesn't sound like what they remember it sounding like on their favorite record. They just say modeling is inferior in nebulous, hand-wavey ways, and you can hear aaalllll about it on their YouTube channel. Don't forget to like and subscribe!
There's no appreciable deficiency in modeling technology; there's an appreciable deficiency in understanding and context. But people reaaaaalllly don't like hearing that, because it's easier to blame the tools than it is to learn how to use those tools in a manner that gets them the results they want.