Axe III dyna-cabs VS Helix VS NeuralDSP

I just explained why - I'd rather add back in any "phasiness" than have it be there by default, because it is easier to do so.
So you have to add additional steps, and essentially guess, to get a realistic experience.

And more so - when using cabinet sims that do this, you're going to get results that you'd never get when mic'ing cabs up in a real studio. Why should the goal be some idealised non-physical time travelling ideal? Does it always sound better?

EDIT: even when the mic signals are aligned, you're going to get a different kind of phasiness. It's not removing it, it's just swapping a natural phase response for one that doesn't exist in the real world. I don't understand why that's a good thing.
 
I really can't understand why this is the default behaviour - surely it should be up to the user to determine the phase relationships. IMO the default state should be what happens in the real world, and if the user wants to do some trickery for a certain sound, then it should be an option. So much of these products is aimed at realism, and then when it comes to cabs, we all get so afraid of phase cancellations that we throw all of the realism in the bin?
Bear in mind we're audio engineers. Most people aren't. There would be complaints about latency and all sorts of whinging about "I can't get this tone dialed in" nonsense that are typical challenges we face all the time.

I just explained why - I'd rather add back in any "phasiness" than have it be there by default, because it is easier to do so.

You literally *can't* add it back in though, I think that is the ultimate point here. Not unless you record your virtual cab mics to different tracks and time delay them yourself - which most people aren't going to do.
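For the non-engineers following along, here's a rough Python sketch (assuming NumPy, 48 kHz tracks; the mic names and the 5 cm offset are just illustrative) of what "time delay them yourself" amounts to - shifting one track by the extra travel time the second mic would have had:

```python
import numpy as np

SAMPLE_RATE = 48_000      # Hz
SPEED_OF_SOUND = 343.0    # m/s at roughly room temperature

def delay_track(track: np.ndarray, extra_distance_m: float) -> np.ndarray:
    """Delay a mono track by the time sound takes to cover extra_distance_m."""
    delay_s = extra_distance_m / SPEED_OF_SOUND
    delay_samples = int(round(delay_s * SAMPLE_RATE))
    # Pad the front with silence, trim the tail so the length stays the same.
    return np.concatenate([np.zeros(delay_samples), track])[: len(track)]

# Example: a 421 pulled back 5 cm behind an SM57 arrives ~146 microseconds
# later, which is about 7 samples at 48 kHz.
sm57 = np.random.randn(SAMPLE_RATE)   # stand-ins for the recorded tracks
md421 = np.random.randn(SAMPLE_RATE)
blend = sm57 + delay_track(md421, 0.05)
```

That is the kind of per-track nudging you'd be doing by hand in a DAW, which is the point: it only works if the mics were printed to separate tracks in the first place.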

I've got another video coming up....
 
Why should the goal be some idealised non-physical time travelling ideal? Does it always sound better?
I never said it should be, nor did I say it always sounds better - I just said it is easier to add back in "phasiness" than it is to remove it.

I also said YMMV... which it clearly does! ;)
 
I just said it is easier to add back in "phasiness" than it is to remove it
Phasiness is there as soon as you blend any mics, regardless of the distances being the same or not. All you can do is change it, but whether it's better or not is totally case-dependent.
 
You literally *can't* add it back in though, I think that is the ultimate point here. Not unless you record your virtual cab mics to different tracks and time delay them yourself - which most people aren't going to do.
Isn't that what the delay parameter on the Helix dual cab block is for?

Delay (Dual only)—Although the new cabs in 3.50 perfectly line up with one another, there may be situations where you want to delay one side very slightly, to perhaps impart a bit of phase incoherence or at higher values, to increase the apparent stereo spread. A little goes a long way here
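The "a little goes a long way" bit checks out if you do the maths on comb filtering. A sketch (assuming two identical, equal-level signals summed, which is the worst case - real mics differ in tone, so the nulls are shallower): with a delay of t seconds, cancellations land at odd multiples of 1/(2t), repeating every 1/t Hz.

```python
def comb_notches(delay_s: float, max_freq: float = 20_000.0) -> list[float]:
    """Cancellation frequencies (Hz) up to max_freq when two equal
    signals are summed with one delayed by delay_s seconds."""
    notches = []
    f = 1.0 / (2.0 * delay_s)     # first null
    while f <= max_freq:
        notches.append(f)
        f += 1.0 / delay_s        # nulls repeat every 1/t Hz
    return notches

# Just 0.1 ms of delay puts the first null near 5 kHz and the next
# near 15 kHz - right in the middle of the audible band.
notches = comb_notches(0.0001)
```

So even a fraction of a millisecond on that Delay parameter audibly reshapes the top end, exactly as the manual warns.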
 
Isn't that what the delay parameter on the Helix dual cab block is for?
Yes, but mult that down to a recording, and you can't change it. Whereas if I record a real SM57 and 421 to separate tracks, I can adjust the relationship on a note-by-note or even riff-by-riff basis. The power is in my hands, in other words.
 
Yes, but mult that down to a recording, and you can't change it. Whereas if I record a real SM57 and 421 to separate tracks, I can adjust the relationship on a note-by-note or even riff-by-riff basis. The power is in my hands, in other words.
That's the audio engineer talking again, and as you said previously most of us aren't audio engineers! ;)
 
Whereas if I record a real SM57 and 421 to separate tracks
As we've previously ascertained, I'm not an audio engineer, but I do have a question on this point:

If I were to use a dual cab block in the Helix with an SM57 and a 421, and hard panned left and right, I'd be able to record these to separate tracks - how does this differ to your example?

Genuine question! I see this as a learning exercise!
 
So much of these products is aimed at realism, and then when it comes to cabs, we all get so afraid of phase cancellations that we throw all of the realism in the bin?

Phase aligned and normalized.
That's a good compromise in realism for the sake of ease of use.
Experienced users can play with delay/phase parameters; inexperienced users are saved from tears and frustration.
 
Well... are audio engineers allowed to use these products... or..... ??

:hmm
:rofl

Well, you did say this earlier in the thread... ;)

I'm not such a huge fan of this kind of thing. I'd rather make idiot guitarists work harder, than chad audio engineers work harder.

My point is that your requirements, as an audio engineer, are likely quite different to mine, as an idiot guitarist, and you're also far more likely to have the appropriate equipment and expertise to get what you need out of any recording session.
 


Here's a video comparing a real SM57 on a real 4x12 cab with my VH4 amp against a Recto model on the Axe3, going through a cab in Axe3, Helix Native, and NeuralDSP Nolly.

There will be differences in core tonality due to equipment differences. But that isn't the point of the video. The point of the video is to compare the nature of moving a microphone inside each product.

Personally, that NeuralDSP one makes me want to puke blood.
 
My point is that your requirements, as an audio engineer, are likely quite different to mine, as an idiot guitarist, and you're also far more likely to have the appropriate equipment and expertise to get what you need out of any recording session.

I don't think the requirements should be all that different really - realism is pretty much the end goal.

Well, you did say this earlier in the thread... ;)
LOL! I was joking!!
 
IMHO - An edge-of-speaker IR should not be the same volume (or louder in some cases, due to perceived volume differences) as a centre-of-speaker IR.
I agree, I also would have preferred the realistic behavior, but I also think that the current solution benefits far more people.
We are like 0.0001%, very few people have experience with phase trickery as a 'tool'.
 
I agree, I also would have preferred the realistic behavior, but I also think that the current solution benefits far more people.
We are like 0.0001%, very few people have experience with phase trickery as a 'tool'.
TBH, my main beef isn't the phase alignment. It's the normalization. It makes the experience of moving a microphone around a speaker a bit unnatural to me.
 
That's a good compromise in realism for the sake of ease of use.
Experienced users can play with delay/phase parameters; inexperienced users are saved from tears and frustration.
I genuinely don't think that preserving the original phase relationship adds any complexity that would cause problems for a beginner. As soon as you blend mics, you have to decide if you like what you hear or not, and if something doesn't sound right, you adjust. Aligning the timing of mics isn't really a standard procedure in the studio (I know some people who do it, but they're definitely the outliers). I think the blended tones people are used to hearing have the real-world time delay in there, so removing it means it's much harder to achieve those sounds.

Even when phase-aligned, blending two mics is going to cause the phase relationship to change (which is the purpose of blending mics in the first place). This will be the case whether the mics are aligned or not.

Another big point here - the mics could easily be aligned at each distance while still preserving the real time delay when moving backwards. So if you want them time-aligned, you just set each distance parameter to the same value.

I think aligning the mics no matter what the real world distance is fixes a problem that doesn't exist.

very few people have experience with phase trickery as a 'tool'.
The engineers doing phase trickery in the studio are the ones who are using delay plugins and aligning mics. The vast majority just throw mics up and listen/move them. In the case of Helix, it would be so easy for them to link the parameters so the delay induced by moving the mic back is compensated for (and then the user can decide if they like the effect or not). I really don't understand going to such lengths across all of modelling, and then throwing it in the bin when it comes to one aspect of it.
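The "link the parameters" idea is genuinely trivial maths. A sketch of what that linkage could look like (hypothetical parameter names; this is not how Helix is actually implemented, just the distance-to-delay arithmetic involved):

```python
SPEED_OF_SOUND = 343.0  # m/s at roughly room temperature

def distance_to_delay_ms(distance_m: float) -> float:
    """Arrival-time delay for a mic placed distance_m from the source."""
    return distance_m / SPEED_OF_SOUND * 1000.0

# Pulling a virtual mic back from 2.5 cm to 30 cm adds roughly 0.8 ms
# of arrival time. A linked delay parameter would apply (or remove)
# exactly this amount as the distance control moves.
added_ms = distance_to_delay_ms(0.30) - distance_to_delay_ms(0.025)
```

With that linkage, preserving the real-world delay (or dialling it out) becomes a one-knob decision for the user rather than a baked-in default either way.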
 
Also... how fucking DAAARRRKKK does the Nolly cab sound when the microphone is dead centre? I've literally never mic'd up a cab with V30's in the past and gotten that tone in that position. There's some fuckery going on there.
 
Also... how f*****g DAAARRRKKK does the Nolly cab sound when the microphone is dead centre? I've literally never mic'd up a cab with V30's in the past and gotten that tone in that position. There's some *****ery going on there.
I don't think it was meant to be realistic in any way.
They probably chose X amount of good sounding sweet spots, unrelated to the visual position on screen.
 