It's not impossible but it has implementation difficulties. The main problem is that the amp block is nonlinear, and nonlinear processing has to run oversampled to control aliasing. Any effect inserted between the virtual preamp and power amp would need to also run at the oversampled rate, which means many times the CPU usage. For example, if the amp block is running 8x oversampled then the CPU usage for any effect inserted there would be 8x as much (I'm not going to disclose our actual oversample rate).
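To make the CPU scaling concrete, here's a minimal sketch. The names, block size, native rate, and the 8x factor are purely illustrative (taken from the example above), not the actual implementation:

```cpp
// Illustrative only: how much more work an inserted effect does when it has
// to run inside an oversampled amp block instead of at the native rate.
#include <cstdio>

int main() {
    const int blockSize  = 64;  // samples per audio block at native rate (assumed)
    const int oversample = 8;   // illustrative oversampling factor from the example

    // Samples the inserted effect must process per audio block:
    const int nativeSamples      = blockSize;               // inserted outside the amp block
    const int oversampledSamples = blockSize * oversample;  // inserted between preamp and power amp

    std::printf("Outside the amp block: %d samples per block\n", nativeSamples);
    std::printf("Inside the amp block:  %d samples per block (%dx the CPU work)\n",
                oversampledSamples, oversample);
    return 0;
}
```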
The other way is to downsample back to the native sample rate, run the effect(s), and then upsample again. No problem, right? Except the no-free-lunch theorem gets in the way: downsampling and upsampling both require anti-aliasing filters, and those filters add latency.
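As a rough sketch of where that latency comes from, here's the group delay of a downsample/upsample round trip assuming linear-phase FIR anti-alias and anti-image filters. The tap counts, rates, and 8x factor are made-up illustrative values, not measured figures from any product:

```cpp
// Rough estimate of latency added by downsample -> effect -> upsample,
// assuming linear-phase FIR filters (group delay = (taps - 1) / 2 samples
// at the rate the filter runs at). All numbers are illustrative assumptions.
#include <cstdio>

int main() {
    const double nativeRate = 48000.0;           // assumed native sample rate
    const int    oversample = 8;                 // illustrative factor
    const double highRate   = nativeRate * oversample;

    const int decimTaps  = 64;  // taps in the downsampling (anti-alias) filter
    const int interpTaps = 64;  // taps in the upsampling (anti-image) filter

    // Both filters operate at the high (oversampled) rate, so their group
    // delays add up in high-rate samples.
    const double delaySamples = (decimTaps - 1) / 2.0 + (interpTaps - 1) / 2.0;
    const double delayMs      = 1000.0 * delaySamples / highRate;

    std::printf("Added round-trip latency: %.3f ms\n", delayMs);
    return 0;
}
```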