Axe III DynaCabs vs Helix vs NeuralDSP

I don't think the requirements should be all that different really - realism is pretty much the end goal.

To be honest, having never mic'd up an amp, realism isn't really even on my radar in this context because I've literally no idea what "real" sounds like - all my "knowledge" about mics, mic placement, and speaker cabinets, has come from what I've learned playing around with MIKKO and the like, and this has generally translated well into picking IRs that I like and more recently in using the QC and Helix 3.50 cab blocks.

My goal is just to get something that sounds good, at least to my ears! :)
 
To be honest, having never mic'd up an amp, realism isn't really even on my radar in this context because I've literally no idea what "real" sounds like - all my "knowledge" about mics, mic placement, and speaker cabinets, has come from what I've learned playing around with MIKKO and the like, and this has generally translated well into picking IRs that I like and more recently in using the QC and Helix 3.50 cab blocks.

My goal is just to get something that sounds good, at least to my ears! :)
Fair enough. I'll concede that a bit - yes, sounding good is just as important as realism. No sense doing the realistic thing if it sounds outright shit.
 
I really don't understand going to such lengths across all of modelling, and then throwing it in the bin when it comes to one aspect of it.

Where's @FractalAudio when you need him? ;)

[Attached image: naples-pier.jpg]
 
I have an issue with the assumption that time-aligning the start of the IRs is a guaranteed improvement in tone, and therefore worth the trade-off.

It creates a sound that doesn't happen in the real world, and therefore sounds odd to me. It's all personal preference, but no one can objectively say it makes things better. It's just a different kind of sound that is sometimes cool and sometimes isn't. IMO, adjusting the timing away from what is realistic should be the creative option, not the default. If people want to tell themselves that it's helping them and making things simpler, then cool, but I really don't think it helps anything other than making it harder to get tones you like from the recordings you've grown up listening to.

Hard to say how normal guitarists feel about it when basically every option out there is just assuming that users would prefer the timing lined up. Are STL users complaining about unsolvable phase problems when moving the mics back?
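For a sense of scale, the timing difference being aligned away is just acoustic time-of-flight, which is easy to estimate. A rough sketch (all numbers illustrative, assuming ~343 m/s speed of sound and a 48 kHz sample rate):

```python
# Rough sketch: the extra arrival time a pulled-back mic picks up,
# assuming ~343 m/s speed of sound and a 48 kHz sample rate.
SPEED_OF_SOUND = 343.0  # m/s at roughly room temperature
SAMPLE_RATE = 48_000    # Hz

def mic_delay_samples(distance_m: float) -> float:
    """Time-of-flight from speaker to mic, expressed in samples."""
    return distance_m / SPEED_OF_SOUND * SAMPLE_RATE

# A close mic at 2.5 cm vs one pulled back to 30 cm:
close = mic_delay_samples(0.025)  # ~3.5 samples
far = mic_delay_samples(0.30)     # ~42 samples
print(f"relative delay: {far - close:.1f} samples "
      f"({(far - close) / SAMPLE_RATE * 1000:.2f} ms)")
```

At close-mic distances the offset is only a few samples, but pulling a mic back even 30 cm adds nearly a millisecond - enough to audibly comb-filter against a close mic when blended, which is exactly the real-world behaviour being debated here.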
 
I have an issue with the assumption that time-aligning the start of the IRs is a guaranteed improvement in tone, and therefore worth the trade-off.

It creates a sound that doesn't happen in the real world, and therefore sounds odd to me. It's all personal preference, but no one can objectively say it makes things better. It's just a different kind of sound that is sometimes cool and sometimes isn't. IMO, adjusting the timing away from what is realistic should be the creative option, not the default. If people want to tell themselves that it's helping them and making things simpler, then cool, but I really don't think it helps anything other than making it harder to get tones you like from the recordings you've grown up listening to.

Hard to say how normal guitarists feel about it when basically every option out there is just assuming that users would prefer the timing lined up. Are STL users complaining about unsolvable phase problems when moving the mics back?

Fractal's current IR interface has a distance parameter on the "Align" tab in MM to create the phase relationship one wants between 2 IRs. Unsure exactly how the new one works.
 


Here's a video comparing a real SM57 on a real 4x12 cab with my VH4 amp against a Recto model on Axe3 going through a cab in Axe3, Helix Native, and NeuralDSP Nolly.

There will be differences in core tonality due to equipment differences, but that isn't the point of the video. The point is to compare the behaviour of moving a microphone within each product.

Personally, that NeuralDSP one makes me want to puke blood.

Couldn't one "dial out" the phase alignment and normalization with the existing tools?

From the meter on your video, seems like ~8 dB difference going from cap to edge on the real cab?
And (I haven't installed the beta) the Align tab still seems to be there, presumably it works with DynaCabs? I wonder what alignment figure one would put in to simulate cap to edge...
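For anyone wanting to sanity-check that meter reading, the amplitude-to-decibel conversion is simple. A quick sketch (the 0.4 ratio is just an illustrative value, not a measurement from the video):

```python
import math

def db(ratio: float) -> float:
    """Express an amplitude ratio in decibels."""
    return 20 * math.log10(ratio)

# An ~8 dB drop from cap to edge corresponds to roughly 0.4x the amplitude:
print(db(0.4))        # ~ -7.96 dB
print(10 ** (-8/20))  # ~ 0.398 amplitude ratio
```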
 
Couldn't one "dial out" the phase alignment and normalization with the existing tools?

From the meter on your video, seems like ~8 dB difference going from cap to edge on the real cab?
And (I haven't installed the beta) the Align tab still seems to be there, presumably it works with DynaCabs? I wonder what alignment figure one would put in to simulate cap to edge...
Yeah you probably could, I've not tried it yet.
 
@MirrorProfiles @Orvillain In a real world "blend two (or more) mics" scenario, you have two different mics with different sensitivities going into two different channels with their own channel gain settings. I presume you don't make sure to calibrate that they're level matched before you place them in front of the cab...? So aren't the levels a crapshoot anyway and you're just using your ears to adjust?
 
Yes, but mult that down to a recording and you can't change it. Whereas if I record a real SM57 and 421 to separate tracks, I can adjust the relationship on a note-by-note or even riff-by-riff basis. The power is in my hands, in other words.
You can still do this, and it’s done pretty much the same way. Everything below is in terms of the Helix, but I expect Fractal has the same capability:

You can use two cab blocks and a split path, and send them out to different channels via USB. That’s essentially the same thing as recording two mics in the real world.

Or, you can also use the dual cab block and make sure they’re fully panned L and R. Preserve the stereo chain all the way to the output, and then split the stereo output to two mono inputs in your DAW.

You could even use separate dual cabs with FX loop sends right after them, to the analog outputs and into another interface, if you wanted to record just the cab output and monitor with other effects.

With the Helix Floor, there’s at least one more way you could do this too, by utilizing its second path. These devices are flexible enough to do just about anything you could want in a studio.
 
When the 3.50 cab update came to the Helix, with the visual interface, I found it really cool overall. As ever with this process, you quickly find a tone you like. And it sounds better than the stock IRs (to me).
 
You can still do this, and it’s done pretty much the same way. Everything below is in terms of the Helix, but I expect Fractal has the same capability:

You can use two cab blocks and a split path, and send them out to different channels via USB. That’s essentially the same thing as recording two mics in the real world.

Or, you can also use the dual cab block and make sure they’re fully panned L and R. Preserve the stereo chain all the way to the output, and then split the stereo output to two mono inputs in your DAW.

You could even use separate dual cabs with FX loop sends right after them, to the analog outputs and into another interface, if you wanted to record just the cab output and monitor with other effects.

With the Helix Floor, there’s at least one more way you could do this too, by utilizing its second path. These devices are flexible enough to do just about anything you could want in a studio.
He knows...
Not unless you record your virtual cab mics to different tracks and time delay them yourself
 
@MirrorProfiles @Orvillain In a real world "blend two (or more) mics" scenario, you have two different mics with different sensitivities going into two different channels with their own channel gain settings. I presume you don't make sure to calibrate that they're gain matched before you place them in front of the cab...? So aren't the levels a crapshoot anyway and you're just using your ears to adjust?
Yeah - the level is always likely to be adjusted anyway. But this is true whether the IRs are compensated for or not. There are so many other variables involved in the tone that will affect the volume that having to adjust again doesn’t really bother me much. Adjusting the master volume affects the overall tone a lot, and that requires me to adjust the overall volume. It’s really no different to what would happen when micing up a real amp and cab; it’s just part of the process. No one complains about this stuff when working with real amps and cabs UNLESS it’s a problem, and then you might start thinking about normalising levels or delaying mics. It’s not exactly a default way of working, except with modellers.

We could also volume match preamp gain, and master volume and all kinds of things if going that route - but the mentality there is for accuracy.

@Orvillain sees the volume normalisation as the bigger problem because it changes how we perceive a particular mic position relative to how we would in the real world.

I see the delay as the bigger problem because volume is definitely going to be adjusted and vary anyway, and different volumes are something we experience all the time. Having a distant mic reach our ears at the same time as a close mic is not something we’re really used to hearing, and I think it’s actually sacrificing part of the real-world experience for no real reason.

The mics could be recorded in phase with each other and the distances recorded proportionally, so that a 57 and 421 at distance A are in phase but less delayed than a 57 and 421 at distance B (which are also in phase with each other). Blending mics is going to change the phase of the signal anyway, so why not use what would happen in the real world as the starting point and allow the user to adjust from there?
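That scheme could be sketched like this - purely illustrative, not any product's actual API: each capture keeps its absolute time-of-flight, so mics at the same distance stay in phase with each other while still arriving later than a closer pair:

```python
# Illustrative sketch of "in phase within a pair, realistic delay between
# distances". The helper and distances here are hypothetical examples.
SPEED_OF_SOUND = 343.0  # m/s
SAMPLE_RATE = 48_000    # Hz

def absolute_offset_samples(distance_m: float) -> int:
    """Absolute time-of-flight offset baked into a capture, in samples."""
    return round(distance_m / SPEED_OF_SOUND * SAMPLE_RATE)

# Pair A: 57 and 421 both at 5 cm -> identical offsets, in phase together.
# Pair B: both at 50 cm -> in phase together, but arriving ~63 samples later.
pair_a = [absolute_offset_samples(0.05)] * 2
pair_b = [absolute_offset_samples(0.50)] * 2
print(pair_a, pair_b)  # [7, 7] [70, 70]
```

Blending within a pair then behaves like the aligned case, while moving a mic back still produces the real-world delay against a closer one.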

IMO these fix problems that don’t really exist. How many users have ever complained about mics not being aligned? In my experience, it’s only happened when people try to blend IRs from different IR packs. I understand why people want that, but it’s so removed from how anything is done in studios or on classic recordings. It should be an optional extra rather than the default, IMO.

FWIW I’m not saying there aren’t valid situations for this stuff - I just don’t think it’s good that it’s become the standard for most IR loaders. Slight exaggeration here, but imagine if DAWs just assumed you always wanted your tracks compressed, quantised, and auto-tuned (you know, to make it easy for the non-engineer types) and said “oh, but you can edit and clip-gain them if you want to add the human feel back in”. Maybe fine as an option, but it should be the user’s decision.
 
@Orvillain sees the volume normalisation as the bigger problem because it changes how we perceive a particular mic position relative to how we would in the real world.
Yup. Pretty much this!

The edge of a speaker is nearly always bass-heavy, but also quieter than the centre - within a window. What we can see is that almost across the board, the edge position is just as loud as the centre position and also has the bass heaviness - which distorts reality and affects the user's ability to judge this stuff correctly, IMHO.
 
IMO these fix problems that don’t really exist. How many users have ever complained about mics not being aligned? In my experience, it’s only happened when people try to blend IRs from different IR packs.
I hear what you’re saying, but keep in mind that the users and opinions in this forum probably represent about 1% of the overall users of these products. After seeing what people get hung up on and complain about over the years, I would bet my left knee that if real world behavior were matched, there’d be just as many people berating them for not normalizing the IRs to make consistent comparison easier.

One example is the amp levels in these products. I’ve seen numerous pleas for normalized output that is unaffected by master volume and B/M/T settings.

The only way for them to win with everyone is to include a toggle to switch it on and off, I suppose. I can’t see Line 6 ever doing that, but Fractal might.
 
The normalizing thing - you could do it in a way where you don't have to bake it into the IRs themselves. Then it's a global preference, or a cab-block preference.

Same with the alignment thing too tbh.
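As a sketch of what that could look like - hypothetical, not any product's actual implementation - the raw IR stays untouched on disk, and the normalisation gain is computed and applied (or skipped) at playback time, gated behind a preference:

```python
# Hypothetical sketch: normalisation as a playback-time preference
# rather than something baked into the IR file itself.
# `normalize_enabled` stands in for a global or cab-block setting.

def playback_gain(ir, normalize_enabled: bool) -> float:
    """Gain applied at convolution time; 1.0 preserves real-world levels."""
    if not normalize_enabled:
        return 1.0
    peak = max(abs(s) for s in ir)
    return 1.0 / peak if peak > 0 else 1.0

# Example: a quieter 'edge' IR gets boosted only when the preference is on.
edge_ir = [0.0, 0.4, -0.2, 0.1]
print(playback_gain(edge_ir, normalize_enabled=True))   # 2.5
print(playback_gain(edge_ir, normalize_enabled=False))  # 1.0
```

The same pattern would work for the start-alignment offset: store the real delay in the file and optionally subtract it at load time.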
 
I hear what you’re saying, but keep in mind that the users and opinions in this forum probably represent about 1% of the overall users of these products. After seeing what people get hung up on and complain about over the years, I would bet my left knee that if real world behavior were matched, there’d be just as many people berating them for not normalizing the IRs to make consistent comparison easier.
I totally get this, but is there a more valid counter-argument than “well, this is what happens in the real world - but here’s a normalised version”? The main argument in favour of manipulating the IRs is that otherwise they’ll sound too real. Do we even know if this is a problem? I don’t think I can name any IRs that aren’t normalised, and it seems most included IRs are MPT (or at least trimmed so they all start at the same point).

We’ll soon reach a point where the modelling is more than capable of sounding exactly like real gear, and the biggest hurdle is how users use and understand the equipment. Perhaps we’re already at this point. IMO, altering these behaviours with volume and timing obstructs the experience of obtaining the tones we get with real gear. If the ultimate goal isn’t realism, then how can we define what we’re even going for? Just what someone thinks is good or might be helpful?

Emulation and modelling has its work cut out in some ways because there are very defined targets and results we can aim for and compare against. I’m definitely not against improving and going beyond them, but at the same time I don’t see the point in moving the goalposts on what we’re trying to achieve. Something that sounds good or is convenient is totally in the eye of the beholder - it’s not really emulation or modelling at this point though.
 
He knows...
He was referring to adding the phase misalignment back in that post you quoted.

The one I replied to, he was talking about how you’d record individual mics in the real world and be able to adjust them after the fact, but for some reason couldn’t with digital devices. Honestly the thread is confusing because he seems to contradict himself:

You literally *can't* add it back in though, I think that is the ultimate point here. Not unless you record your virtual cab mics to different tracks and time delay them yourself - which most people aren't going to do.
(He knows here you can record to different tracks…)
Isn't that what the delay parameter on the Helix dual cab block is for?

Yes, but mult that down to a recording, and you can't change it. Whereas if I record a real SM57 and 421 to separate tracks, I can adjust the relationship on a note-by-note or even riff-by-riff basis.
(Now it seems he’s saying you can’t record to different tracks?…)
 