there are insane levels of very audible aliasing (?) -or- maybe both (?)
This is one of the downsides of neural-network-based amps.
Neural networks learn a mapping from input to output audio without understanding the underlying signal processing or physics.
Aliasing happens when a signal contains frequencies higher than half the sample rate (the Nyquist limit), and those high frequencies get "folded" back into the audible range as false, harsh-sounding tones.
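You can see that folding with a few lines of numpy — a minimal sketch, not from the video, just hard-clipping a 5 kHz sine at 44.1 kHz. Clipping generates odd harmonics (15 kHz, 25 kHz, 35 kHz...), and the 25 kHz one is above the 22.05 kHz Nyquist limit, so it folds back to 44100 − 25000 = 19100 Hz:

```python
import numpy as np

fs = 44100
t = np.arange(fs) / fs                 # 1 second of audio
x = np.sin(2 * np.pi * 5000 * t)       # 5 kHz sine
y = np.clip(3 * x, -1, 1)              # hard clip -> odd harmonics at 15k, 25k, 35k...

spec = np.abs(np.fft.rfft(y * np.hanning(len(y))))
freqs = np.fft.rfftfreq(len(y), 1 / fs)

def level_at(f_target):
    """Spectrum magnitude at the bin nearest f_target (Hz)."""
    return spec[np.argmin(np.abs(freqs - f_target))]

# 15 kHz harmonic is legit; the 19.1 kHz tone is the folded 25 kHz harmonic —
# a false tone that was never part of the harmonic series.
print(level_at(15000), level_at(19100), level_at(20000))
```

The energy at 19.1 kHz stands way above the noise floor (compare it to a non-harmonic frequency like 20 kHz), and it's inharmonic, which is why aliasing sounds harsh rather than like more distortion.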
Unlike traditional amp models that explicitly oversample or apply anti-aliasing filters when generating those harmonics, ML models usually don’t know they’re creating frequencies that will alias. They just reproduce whatever they saw in the training data. If the model was trained at 44.1kHz and you ask it to process a signal at 44.1kHz, it may produce aliased harmonics with no protection. The network doesn't inherently understand or filter anything unless it's been built or trained to do so.
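The oversampling trick those traditional models use can be sketched in a few lines — a hedged example using scipy's resampler, not any particular plugin's implementation: run the nonlinearity at 4x the rate so the harmonics have room to exist, then filter and decimate back down:

```python
import numpy as np
from scipy.signal import resample_poly

fs = 44100
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 5000 * t)

def measure(y, f_target):
    """Spectrum magnitude at the bin nearest f_target (Hz), at the base rate."""
    spec = np.abs(np.fft.rfft(y * np.hanning(len(y))))
    freqs = np.fft.rfftfreq(len(y), 1 / fs)
    return spec[np.argmin(np.abs(freqs - f_target))]

clip = lambda s: np.clip(3 * s, -1, 1)

naive = clip(x)                              # clip at 44.1k: 25 kHz harmonic folds to 19.1 kHz
x4 = resample_poly(x, 4, 1)                  # upsample to 176.4 kHz (Nyquist now 88.2 kHz)
oversampled = resample_poly(clip(x4), 1, 4)  # clip at 4x, then anti-alias filter + decimate

# The folded tone at 19.1 kHz is drastically weaker in the oversampled path
print(measure(naive, 19100), measure(oversampled, 19100))
```

At 4x, the harmonics up to ~88 kHz are represented cleanly, and the decimation filter removes everything above 22.05 kHz before it can fold. An ML model that's just a learned input→output mapping doesn't get this for free.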
If you increase the DAW sample rate (say from 44.1kHz to 96kHz), you give the model more "headroom" before aliasing occurs: the Nyquist limit moves from 22.05kHz up to 48kHz, so high-frequency harmonics that would have folded back at 44.1kHz can now be represented cleanly. This can reduce aliasing even if the model itself isn't oversampling — it just happens because the Nyquist limit is now higher.
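The headroom is easy to put numbers on. A hypothetical example (the 2 kHz fundamental is just made up for illustration): count how many harmonics of a distorted note fit below Nyquist at each rate:

```python
# Hypothetical distorted note with a 2 kHz fundamental
f0 = 2000
for fs in (44100, 96000):
    n_clean = int((fs / 2) // f0)  # harmonics that sit below Nyquist
    print(f"{fs} Hz: {n_clean} harmonics fit below Nyquist ({fs // 2} Hz)")
```

At 44.1kHz only 11 harmonics fit; at 96kHz, 24 do — everything between 22.05kHz and 48kHz now exists cleanly instead of folding back into the audible range.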
But if the ML model was trained at a fixed sample rate (say 44.1kHz) and you feed it audio at 96kHz without adapting or retraining it, the model may behave unpredictably. The learned weights are fixed in units of samples, not seconds, so every filter shape and time constant the network learned effectively shifts when the rate changes. You could get an incorrect tone, broken dynamics, or even worse aliasing, depending on how the model processes time and frequency.
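That rate sensitivity shows up even with a plain FIR filter, which is a reasonable stand-in for a trained convolutional layer (this is my illustration, not how any specific amp plugin works): the taps are fixed numbers, so the response is fixed in normalized frequency, and running the same taps at 96kHz shifts everything up in Hz:

```python
import numpy as np
from scipy.signal import firwin, freqz

# Stand-in for fixed learned weights: a low-pass designed at 44.1 kHz
# with a 5 kHz cutoff. The taps themselves know nothing about Hz.
taps = firwin(101, 5000, fs=44100)

cutoffs = {}
for fs in (44100, 96000):
    w, h = freqz(taps, worN=4096, fs=fs)
    # approximate -3 dB point of the same taps interpreted at each rate
    cutoffs[fs] = w[np.argmin(np.abs(np.abs(h) - 10 ** (-3 / 20)))]
    print(f"{fs} Hz: cutoff ~{round(cutoffs[fs])} Hz")
```

The cutoff scales by exactly 96000/44100 ≈ 2.18x — a "5 kHz" characteristic the network learned becomes an ~11 kHz one. A real neural amp model is nonlinear and recurrent on top of that, so the failure modes are messier, but the root cause is the same: the weights only mean what they meant at the training rate.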
And yes, that video sounds pretty shit to me.