Axe III dyna-cabs VS Helix VS NeuralDSP

I usually hate V30s, but this is pretty killer!
[attached screenshot]
 
I don't subscribe to the idea of the level not being normalized. Surely you don't want to be constantly having to adjust the level whenever you move the mic.

In my somewhat limited studio experience it's always been that the producer/engineer took great pains to ensure phase alignment when using multiple mics. Heck, Royer makes a special mic clip so that when it's used simultaneously with an SM57 the alignment is optimal. Furthermore, my own listening tests have always preferred alignment. Whenever the phase is not aligned you get comb filtering and, while this can be used for special effect, it generally sounds less desirable IMO.

Just because something happens in the real world doesn't mean it's desirable. In this case we can use the power of DSP to perfectly align the mics for the best tone. If the user wants some intentional comb filtering then he can use the Alignment parameters.
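For anyone curious what "using DSP to align the mics" boils down to, here's a minimal Python/NumPy sketch of one common approach: find the lag that maximizes the cross-correlation between two IRs, then remove it. The file names are hypothetical placeholders, and this only illustrates the general technique, not Fractal's actual implementation.

```python
# Sketch: time-align two mic IRs by finding the lag that maximizes their
# cross-correlation, then removing that lag. File names are hypothetical;
# this illustrates the general technique, not the Axe-FX III's internals.
import numpy as np
from scipy.io import wavfile
from scipy.signal import correlate

_, ir_a = wavfile.read("sm57_cap.wav")    # hypothetical primary-mic IR
_, ir_b = wavfile.read("r121_edge.wav")   # hypothetical second-mic IR
ir_a = ir_a.astype(np.float64)
ir_b = ir_b.astype(np.float64)

# Lag (in samples) at which ir_b best lines up with ir_a.
xcorr = correlate(ir_a, ir_b, mode="full")
lag = int(np.argmax(np.abs(xcorr))) - (len(ir_b) - 1)

if lag > 0:                    # ir_b arrives earlier than ir_a -> delay it
    ir_b = np.concatenate([np.zeros(lag), ir_b])
elif lag < 0:                  # ir_b arrives later -> trim its leading samples
    ir_b = ir_b[-lag:]

n = min(len(ir_a), len(ir_b))
aligned_blend = 0.5 * (ir_a[:n] + ir_b[:n])   # equal mix of the aligned IRs
```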
 
I don't subscribe to the idea of the level not being normalized. Surely you don't want to be constantly having to adjust the level whenever you move the mic.

In my somewhat limited studio experience it's always been that the producer/engineer took great pains to ensure phase alignment when using multiple mics. Heck, Royer makes a special mic clip so that when it's used simultaneously with an SM57 the alignment is optimal. Furthermore, my own listening tests have always preferred alignment. Whenever the phase is not aligned you get comb filtering and, while this can be used for special effect, it generally sounds less desirable IMO.

Just because something happens in the real world doesn't mean it's desirable. In this case we can use the power of DSP to perfectly align the mics for the best tone. If the user wants some intentional comb filtering then he can use the Alignment parameters.
This is all true, and I'm sure there are plenty of producers/engineers who go for 100% phase coherence. Some don't. I think I mentioned this before, but Joe Barresi in one of his 'recording rock' videos (not sure if they're available now) said something to the effect of 'if the mics are perfectly in phase, why bother with two mics?' - and that is a view I sometimes subscribe to.

The Fredman technique, as mentioned before, necessarily requires the mics to be slightly out of phase, so that the fizzy region of the primary mic is cancelled out.

Euge Valde.. Valde.. Valde... Val-I-don't-know-how-to-bloody-pronounce-his-name has a good video on it here:


Most often the 2nd mic is used more or less as a filter to control the higher frequencies of the primary mic, rather than being a distinct mic in and of itself. In fact, quite often the 2nd mic sounds like ass. But it's ass by design.

I'm not saying that it's wrong or that it sounds shit, because it clearly doesn't. It just isn't 100% realistic, and we users should all at least be aware of that.
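Not from the video, just a rough numeric illustration of why a small misalignment between the two mics acts like a filter: summing a signal with a copy of itself delayed by D samples puts the first comb notch at fs / (2·D), which for a few samples at 48 kHz lands right in the fizzy upper range. The numbers below are arbitrary examples.

```python
# Rough illustration (numbers are arbitrary): blending a mic with a copy of
# itself delayed by `delay_samples` behaves like a comb filter whose first
# notch sits at fs / (2 * delay_samples). A couple of centimetres of extra
# path length at 48 kHz is a few samples, which notches the "fizz" region.
import numpy as np

fs = 48000
delay_samples = 3
first_notch_hz = fs / (2 * delay_samples)        # 8000 Hz for 3 samples
print(f"first cancellation around {first_notch_hz:.0f} Hz")

# Frequency response of the two-tap blend y[n] = 0.5 * (x[n] + x[n - D]).
freqs = np.fft.rfftfreq(8192, d=1 / fs)
response = 0.5 * (1 + np.exp(-2j * np.pi * freqs * delay_samples / fs))
magnitude_db = 20 * np.log10(np.abs(response) + 1e-12)
# magnitude_db dips sharply at first_notch_hz, 3 * first_notch_hz, and so on.
```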
 
In my somewhat limited studio experience it's always been that the producer/engineer took great pains to ensure phase alignment when using multiple mics. Heck, Royer makes a special mic clip so that when it's used simultaneously with an SM57 the alignment is optimal. Furthermore, my own listening tests have always preferred alignment. Whenever the phase is not aligned you get comb filtering and, while this can be used for special effect, it generally sounds less desirable IMO.

By great pains, I think it's usually that they'll make sure the capsules are aligned, and maybe do a quick check with a tone to make sure nothing is off. I know some engineers who do some esoteric miking where they'll time-align stuff that's further back with mics that are closer, but it's more of a niche trick used by some people than how the vast majority of people I've encountered work. Generally, it's: get them close visually - if it sounds good then GREAT, if it doesn't, move one a bit. It's even easier to do in a modeller, and sometimes those slight timing differences may cause a desirable outcome.

How many recordings made with real amps and cabs and mics have them aligned to the extent that IRs can be? Do all MPT (or similar) blended tones inherently sound better than those that aren't aligned? I don't really agree with the assumption that it's what everyone always wants. The ideal is that it's a behaviour that can be toggled on or off and the user can decide what works better for them in any situation.

It's not unlike @Orvillain's example above with the drum miking - the only time I know of people time-aligning multiple mics is when the recording has been an absolute nightmare. Besides that, engineers will check phase and listen, but having mics perfectly aligned comes second to a good-sounding phase relationship.
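For reference, "MPT" here is the minimum-phase transform used by Cab-Lab and similar tools. A textbook way to do it is via the real cepstrum; the sketch below is that generic method (the padding length and spectral floor are arbitrary choices), not Fractal's actual code.

```python
# Generic cepstrum-based minimum-phase transform: same magnitude response,
# but with the energy packed toward t = 0 so blends line up by construction.
# Padding length and the spectral floor are arbitrary choices for the sketch.
import numpy as np

def minimum_phase_ir(ir, n_fft=None):
    ir = np.asarray(ir, dtype=np.float64)
    if n_fft is None:
        n_fft = 1 << (len(ir) * 4 - 1).bit_length()   # generous zero-padding
    spectrum = np.fft.fft(ir, n_fft)
    log_mag = np.log(np.maximum(np.abs(spectrum), 1e-10))
    cepstrum = np.fft.ifft(log_mag).real

    # Fold the cepstrum so the rebuilt spectrum is minimum phase:
    # keep c[0] and c[N/2], double positive quefrencies, zero the rest.
    folded = np.zeros_like(cepstrum)
    folded[0] = cepstrum[0]
    folded[1:n_fft // 2] = 2.0 * cepstrum[1:n_fft // 2]
    folded[n_fft // 2] = cepstrum[n_fft // 2]

    min_phase_spectrum = np.exp(np.fft.fft(folded))
    return np.fft.ifft(min_phase_spectrum).real[:len(ir)]
```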
 
Just because something happens in the real world doesn't mean it's desirable. In this case we can use the power of DSP to perfectly align the mics for the best tone. If the user wants some intentional comb filtering then he can use the Alignment parameters.
This gets into that weird area where people want all the warts of tube amps, etc., modeled, but in reality find those aspects undesirable.

I also totally agree that it's not desirable to have to adjust the volume when adjusting mic position, even if it were more realistic. It would just end up with users asking why, and at the same time most people would just pick "this is louder" as the "better" sound.
 
It would just end up with users asking why, and at the same time most people would just pick "this is louder" as the "better" sound.
You realise that normalizing using peak dB isn't the same thing as making sure things are the same volume, right? Hence Neural's edge tones sound louder than the cap tones, which is completely the opposite of most guitar recordings.

They should've gone with the tagline "Vulgar Display of Fucking Idiocy" :rofl

It's possible that Cliff is using a more RMS-based approach. It's still not a perfect normalization method, but it tends to be the best we've got without delving into mel spectrograms.
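To make the peak-vs-RMS point concrete, here's a small Python/NumPy sketch. The file names are hypothetical; the takeaway is that two IRs peak-normalized to the same value will differ in RMS (and perceived loudness) by roughly the difference in their crest factors.

```python
# Sketch: peak vs RMS normalization of an IR, plus the crest factor that
# explains why they disagree. File names are hypothetical placeholders.
import numpy as np
from scipy.io import wavfile

def normalize_peak(ir, target_peak=1.0):
    return ir * (target_peak / np.max(np.abs(ir)))

def normalize_rms(ir, target_rms=0.1):
    return ir * (target_rms / np.sqrt(np.mean(ir ** 2)))

def crest_factor_db(ir):
    return 20 * np.log10(np.max(np.abs(ir)) / np.sqrt(np.mean(ir ** 2)))

for name in ("v30_cap.wav", "v30_edge.wav"):          # hypothetical IR files
    _, ir = wavfile.read(name)
    ir = normalize_peak(ir.astype(np.float64))
    rms_db = 20 * np.log10(np.sqrt(np.mean(ir ** 2)))
    print(f"{name}: crest factor {crest_factor_db(ir):.1f} dB, "
          f"RMS after peak-normalizing {rms_db:.1f} dB")
```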
 
I don't subscribe to the idea of the level not being normalized. Surely you don't want to be constantly having to adjust the level whenever you move the mic.

In my somewhat limited studio experience it's always been that the producer/engineer took great pains to ensure phase alignment when using multiple mics. Heck, Royer makes a special mic clip so that when it's used simultaneously with an SM57 the alignment is optimal. Furthermore, my own listening tests have always preferred alignment. Whenever the phase is not aligned you get comb filtering and, while this can be used for special effect, it generally sounds less desirable IMO.

Just because something happens in the real world doesn't mean it's desirable. In this case we can use the power of DSP to perfectly align the mics for the best tone. If the user wants some intentional comb filtering then he can use the Alignment parameters.

I think the vast majority of owners/users of Fractal gear are going to be absolutely ecstatic with the update once it's universally rolled out. There will always be people who find something to complain about. Keep up the great work!
 
I think the vast majority of owners/users of Fractal gear are going to be absolutely ecstatic with the update once it's universally rolled out. There will always be people who find something to complain about. Keep up the great work!
It's not a complaint.
 
This is all true, and I'm sure there are plenty of producers/engineers who go for 100% phase coherence. Some don't. I think I mentioned this before, but Joe Barresi in one of his 'recording rock' videos (not sure if they're available now) said something to the effect of 'if the mics are perfectly in phase, why bother with two mics?' - and that is a view I sometimes subscribe to.

The Fredman technique, as mentioned before, necessarily requires the mics to be slightly out of phase, so that the fizzy region of the primary mic is cancelled out.

Euge Valde.. Valde.. Valde... Val-I-don't-know-how-to-bloody-pronounce-his-name has a good video on it here:


Most often the 2nd mic is used more or less as a filter to control the higher frequencies of the primary mic, rather than being a distinct mic in and of itself. In fact, quite often the 2nd mic sounds like ass. But it's ass by design.

I'm not saying that it's wrong or that it sounds s**t, because it clearly doesn't. It just isn't 100% realistic, and we users should all at least be aware of that.


The guy in this video has to be related to Johan Segeborn :)

Ben
 
Heya, I don't wanna belabour the point, but I decided to reamp some of my Axe III impulses loaded into Cab-Lab with MPT processing turned off, all the filters doing nothing, etc. So I'm trying to get a WAV sweep of my impulses for my Egnater 4x12 cab.

Here are the level stats:
[attached screenshot of the level statistics]


The input signal was a 48 kHz, 60-second-long frequency sweep generated in Voxengo Deconvolver. You can see the differing peak measurements, which are always a bit different with the input sweep versus a live guitar tone (it seems, anyway!).

This is why I believe normalization is not a good idea. There's a lot of variation here that would be lost if I just normalized each IR to a standard arbitrary value.
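For anyone who wants to reproduce this kind of test in code rather than in Cab-Lab, a rough Python/SciPy equivalent is below: convolve the same sweep through each IR and compare peak and RMS levels. File names are hypothetical placeholders, and the absolute dB figures depend on how the sweep is scaled; only the differences between IRs are meaningful.

```python
# Rough code equivalent of the test above: run one sweep through several IRs
# and compare levels. File names are hypothetical; only the differences
# between IRs are meaningful, since absolute dB depends on the sweep level.
import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

_, sweep = wavfile.read("sweep_48k_60s.wav")            # hypothetical sweep
sweep = sweep.astype(np.float64)
sweep /= np.max(np.abs(sweep))

for name in ("egnater_57_cap.wav", "egnater_57_edge.wav"):   # hypothetical IRs
    _, ir = wavfile.read(name)
    out = fftconvolve(sweep, ir.astype(np.float64))
    peak_db = 20 * np.log10(np.max(np.abs(out)))
    rms_db = 20 * np.log10(np.sqrt(np.mean(out ** 2)))
    print(f"{name}: peak {peak_db:+.2f} dB, RMS {rms_db:+.2f} dB")
```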
 
Heya, I don't wanna belabour the point, but I decided to reamp some of my Axe III impulses loaded into Cab-Lab with MPT processing turned off, all the filters doing nothing, etc. So I'm trying to get a WAV sweep of my impulses for my Egnater 4x12 cab.

Here are the level stats:
View attachment 6295

The input signal was a 48 kHz, 60-second-long frequency sweep generated in Voxengo Deconvolver. You can see the differing peak measurements, which are always a bit different with the input sweep versus a live guitar tone (it seems, anyway!).

This is why I believe normalization is not a good idea. There's a lot of variation here that would be lost if I just normalized each IR to a standard arbitrary value.
You're not "losing" any information. It's simply volume normalization, which is a good thing. It means you don't have to constantly compensate for the change in volume when you move the mic.
 
You're not "losing" any information. It's simply volume normalization, which is a good thing. It means you don't have to constantly compensate for the change in volume when you move the mic.
I know what volume normalization is, and I totally understand your position on this. We'll just have to agree to disagree. I don't find the behaviour useful or realistic; there are many cases in the studio where you'd let the natural level differences between multiple microphones determine how much frequency cancellation is going on (see the previous video on the Fredman technique) - we're certainly not always making sure that all microphones on a source are the same level; see, for instance, the typical scenario of miking up a snare drum from the top and bottom.

But ultimately, it isn't my product. I've said my bit. Peace.
 
Joe Barresi in one of his 'recording rock' videos (not sure if they're available now) said something to the effect of 'if the mics are perfectly in phase, why bother with two mics?' - and that is a view I sometimes subscribe to.
First, two mics placed in front of a guitar cab will never be "perfectly in phase" with each other, not even two examples of the same model microphone. The reason why this must be so will be trivially obvious to anyone who has any business participating in discussions about phase.

Second, if, by "in phase" you really mean arrival-time aligned, then you should realize that a user of IRs has the option of creating intentional misalignments of arbitrary magnitude among different IRs.

Given the complete absence of any reference to SPL in .wav files - not to mention the widely-varying excitation levels and signals used to acquire IRs - normalization is the only defensible approach. If you're looking for a way to account for differing speaker sensitivities, here's a word to the wise: this ain't it, and that can't change.
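A quick sketch of that last point, re-introducing a deliberate misalignment between otherwise-aligned IRs before blending them (Python/NumPy; the offset and file names are hypothetical examples):

```python
# Sketch: re-introduce a deliberate offset between two already-aligned IRs
# before blending. Offset and file names are hypothetical examples.
import numpy as np
from scipy.io import wavfile

_, ir_a = wavfile.read("cap_ir.wav")      # hypothetical aligned IRs
_, ir_b = wavfile.read("edge_ir.wav")
ir_a = ir_a.astype(np.float64)
ir_b = ir_b.astype(np.float64)

offset = 12                               # ~0.25 ms at 48 kHz, ~8.5 cm in air
ir_b_shifted = np.concatenate([np.zeros(offset), ir_b])

n = max(len(ir_a), len(ir_b_shifted))
blend = np.zeros(n)
blend[:len(ir_a)] += 0.5 * ir_a
blend[:len(ir_b_shifted)] += 0.5 * ir_b_shifted
# The blend now carries the comb filtering of a 12-sample mic-distance mismatch.
```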
 