Not at all. If you have two IRs running in parallel, with no time-aligning or volume normalization going on (i.e. keeping each capture exactly as it was) you should get the same sound as recording the two mics to their own tracks. I can't think of a reason why you'd lose anything. The tone of the two mics played together comes from how they sum and phase-cancel across the frequency spectrum, and exactly the same thing happens when two IRs are summed.
The thing that breaks this is the normalization and alignment post-processing.
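The "two parallel IRs = two mic tracks" point follows from convolution being linear. A quick sketch (random placeholder data standing in for a guitar DI and two mic captures, not real IRs):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(2048)    # stand-in for a guitar DI signal
ir_a = rng.standard_normal(512)  # stand-in for mic A's IR
ir_b = rng.standard_normal(512)  # stand-in for mic B's IR

# Two parallel cab blocks, outputs summed afterwards:
parallel = np.convolve(x, ir_a) + np.convolve(x, ir_b)

# One cab block loaded with the pre-summed IR:
summed = np.convolve(x, ir_a + ir_b)

# Convolution is linear, so both paths give the same result
# (down to floating-point precision):
print(np.allclose(parallel, summed))  # True
```

Which is exactly why any post-capture normalization or re-alignment of the individual IRs changes the blend: it alters the summation/cancellation relationship before the sum happens.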
Yeah, and unfortunately as ever, the Apple-ification of the human mind, the universe and everything, continues to afflict us until our dying days. Probably.
You *can* adjust the alignment page's 'distance' parameter on the Axe III, and the 'delay' parameter on Helix, and more or less emulate the relationship. But you need Swirly's graph to hand the whole time, and from what I can gather his figures are close enough approximations.
If Line 6 and Fractal added a way to link their delay/distance parameters to the movement of the microphone, and weighted the values so they match what would happen in the real world... then that would solve it. But I don't see that happening.
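The physics behind the distance/delay relationship is simple, which is what makes the missing link so frustrating. A back-of-the-envelope sketch (the `mic_delay_ms` helper is hypothetical, not anything in the Axe III or Helix firmware):

```python
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 °C

def mic_delay_ms(extra_distance_m: float) -> float:
    """Extra delay (ms) from pulling a mic back by extra_distance_m metres."""
    return extra_distance_m / SPEED_OF_SOUND * 1000.0

# Pulling a mic back about 34.3 cm adds roughly 1 ms of delay:
print(round(mic_delay_ms(0.343), 3))  # 1.0
```

So a linked parameter would just need to map the user's mic-position control through this conversion; the units the current 'distance'/'delay' knobs actually use is where Swirly's graph comes in.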
Neural don't give you a parameter to control, so their cab block will NEVER sound realistic when using multiple microphones.