Axe III dyna-cabs VS Helix VS NeuralDSP

I keep Helix Control on my keyboard controller specifically to speed up editing, even when using HX Edit.
See this is why modeler manufacturers should start thinking of offering a desktop controller for their units. Put some knobs on there, quick instant access switches...you know, just look at what the keyboard manufacturers are doing.

 
Me neither, they chose just the right number of samples to be enough.
You have this sort of thing in drum sample land too.

20 samples.. definitely not enough.
50 samples... good enough for kick drums, but not snares and toms and cymbals.
100 samples.... tbh as long as you play them properly when tracking, 100 samples is enough.
130+ samples... absolutely no way anyone is going to be able to tell that a drum shell isn't a real one. They might with cymbals.

Of course this requires memory and space. BFD Dark Farm is 200+ GB when it isn't compressed using our in-house BFDLAC format. It comes down to about 90 GB once we do so.

If we softly gated the samples and killed a little bit of the drum sustain, I could get that down to 40 GB... but then we'd start to sound like some of the competition.
 
Dyna-Cabs have about a quarter of the mic models that the Helix cabs have, so it seems to me that Fractal and Line 6 have simply taken different approaches to reducing the number of IRs required to support a cabinet with moveable mics.
I don't want to turn this into a VS argument but all the mics in Helix are usable and some cabs even require a specific mic to sound good.
 
If you really want to see a VS argument, go to TGP and tell them that the Dyna-Cabs are just IRs and are nothing new. I saw this one moron who did that and you should've seen the controversy that followed. That dude really needs to check the empty space between his ears sometimes!
 
The main thing this has taught me is... I really don't like the Neural Nolly cab (the 3rd one, which I think is a 4x12 Rectifier OS cab)
 
I think the delay part is pretty easy to "fix" on the Fractal.
There's a distance control on the Dyna-Cab main page which adjusts the actual distance between the cone and the mic (with no delay introduced) from 0 to 24 cm, and another distance control in the align tab which adjusts the delay of every cab slot (I can't remember the range of this control, but it might be different since it's shown in mm).
The fix could be to give them the same range of values plus a switch to link the two controls: if you want realistic behaviour, you turn the switch on and the distance knob in the align tab follows the one on the main tab; with the switch off, it stays at 0 instead.
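For what it's worth, the maths a linked switch like that would have to do is trivial. A minimal sketch, assuming sound travels at roughly 343 m/s and a 48 kHz processing rate (both assumptions, not anything Fractal has published):

```python
# Sketch: the delay a linked "distance" control would need to introduce.
# Assumes ~343 m/s speed of sound (dry air, ~20 °C) - an approximation.
SPEED_OF_SOUND_M_S = 343.0

def distance_to_delay_ms(distance_cm: float) -> float:
    """Time sound takes to travel distance_cm, in milliseconds."""
    return (distance_cm / 100.0) / SPEED_OF_SOUND_M_S * 1000.0

def delay_in_samples(distance_cm: float, sample_rate_hz: int = 48000) -> float:
    """The same delay expressed in samples at the given sample rate."""
    return distance_to_delay_ms(distance_cm) / 1000.0 * sample_rate_hz

# The Dyna-Cab distance range quoted above is 0-24 cm:
for d in (0, 12, 24):
    print(f"{d:2d} cm -> {distance_to_delay_ms(d):.3f} ms "
          f"({delay_in_samples(d):.1f} samples @ 48 kHz)")
```

So the full 24 cm of travel is well under a millisecond of delay, which is exactly the sort of small offset that matters when two mics get blended.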
 
Bingo.

Like, the argument of “well they’re guitarists, not audio engineers” doesn’t hold up well when all the main modelling platforms have access to a pile of different EQs, compressors etc. There’s all kinds of potential for doing damage, but we’ll draw the line at IRs that have the potential to be a bit delayed and this MIGHT cause a problem? There’s more than enough tools available to make a phasey mess.

The chart for aligning also sums my point up - this is what people should use if they want to manually align a distant mic to a close one, OR it should be internally linked with an option to turn it off.

The "audio engineer" nonsense in relation to this tool is dumb. If you are truly an audio "engineer", you'll just leave the thing in Legacy mode and go make your own IRs, so that all of the tiny little nuanced phase things that are only present when you physically have two mics in front of the cab capturing at the same time will be preserved.

As noted, even if they'd kept the IRs raw, you can't really go back and recreate two mics at different angles relative to the cone with 100% accuracy. If you are an actual audio engineer and not some ape with swollen testicles going deaf with closed-back headphones in front of a blaring cab, trying not to trip on cables while chugging and moving mics ( :beer :rofl :beer )

This tool is inherently made for people that find IRs "difficult to sort through" - no audio engineer is going to be just blindly flipping through a catalog of every cab and every mic of that cab when dialing in a song. That shit is left for the sexy-room guitar players.
 
This totally misses my point though (other than the fact that I have made my own IRs because it was hard to find the particular attributes I was looking for).

What makes time-aligned IRs less "difficult to sort through" than ones that preserve their natural timing? And are there even any examples of cabinet sections of existing products that have suffered from keeping the original phase response of the IRs? Have you run into problems that were caused by IRs not being aligned? MPT or not, you can get into just as many phase problems if you don't know what you are doing. All kinds of processes affect the phase, both subtly and noticeably. It's something we rely on when creating sounds.

I think most people can understand that moving away from a sound causes a delay - we shouldn't just assume that this is beyond the grasp of the average user. Changing that behaviour is a trade-off that only really benefits uncommon combinations, like blending a close-mic'd 4x12 with a distant-mic'd 1x12. In that instance, I probably would do some time aligning to get things in phase IF moving the mics a little or flipping polarity didn't sound good. And in that case, IMO, it should be an option, not the standard way of working.

Anyone who thinks they absolutely need to blend every single mic, cab and position probably needs to spend more time finding the right cabinet/speaker/mic/IR in the first place. It's like when people pile tons of EQ or other processing on because something else in the chain is making things worse. Time aligning is done by some people, but it's more of a niche trick than something that commonly goes on in the studio.

As noted, even if they'd kept the IRs raw, you can't really go back and recreate two mics at different angles relative to the cone with 100% accuracy. If you are an actual audio engineer and not some ape with swollen testicles going deaf with closed-back headphones in front of a blaring cab, trying not to trip on cables while chugging and moving mics ( :beer :rofl :beer )

It's not necessarily about recreating an exact tone; it's about BEING ABLE to do it, or at least for it to behave in a manner similar to what would happen if you were to do it in the real world. Maybe someone has made a DynaMount that can automatically adjust a delay and line up two mic signals based on position? Is this REALLY the behaviour we want to be aiming for?
 
The "audio engineer" nonsense in relation to this tool is dumb. If you are truly an audio "engineer", you'll just leave the thing in Legacy mode and go make your own IRs, so that all of the tiny little nuanced phase things that are only present when you physically have two mics in front of the cab capturing at the same time will be preserved.
....
This tool is inherently made for people that find IRs "difficult to sort through" - no audio engineer is going to be just blindly flipping through a catalog of every cab and every mic of that cab when dialing in a song. That s**t is left for the sexy-room guitar players.
Without capturing all of the nuance of mics in front of a speaker - which is 100% possible, just don't fuck with the IRs once you capture them - these tools really are just a fancy UI over the top of a menu selector or a pair of next+previous buttons.

I think there's a tacit assumption that having the IRs all time-aligned by default will lead to better sounds, and I don't think that has been proven. It's quite hard to prove, too, when you don't have the same equipment that the IR creator does.

Also, my testicles are no longer swollen. I took the pills.
 
Without capturing all of the nuance of mics in front of a speaker - which is 100% possible,
Nah. If you've got a 57 and a 421 right next to each other, the only way to 1000% capture all the nuance of that is with both mics in front of the cab when the IR(s) is/are captured.

Seems like they had two choices, based on memory alone: (1) save IRs raw, and then let folks go in and time align as best they can, or (2) save files as phase/time aligned, and then let those that want to mess with them mess with them.

For those that WANT phase coherence, my understanding is that #2 is going to give not just faster, but better results. For those that want to introduce phase differences, my understanding is that you're more likely to get "close enough" using #2 than those that want phase coherence are using #1. Somebody had to be the loser.

My understanding could be wrong, as my screen name strongly suggests.
 
Nah. If you've got a 57 and a 421 right next to each other, the only way to 1000% capture all the nuance of that is with both mics in front of the cab when the IR(s) is/are captured.
Not at all. If you have two IRs running in parallel, with no time-aligning or volume normalization going on - i.e., maintaining the original capture as it was - you should get the same sound as recording the two mics to their own tracks. I can't think of a reason why you'd lose anything. The tone of both mics played together comes from how they sum and phase-cancel across the frequency spectrum. The same thing happens when using two IRs.

The thing that breaks this is the normalization and alignment post-processing.
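The linearity argument above can be demonstrated directly: because convolution is linear, convolving a signal with two IRs and summing the results is mathematically the same as summing the mic signals recorded to separate tracks. A minimal sketch with random stand-in data (NumPy assumed; the IRs here are noise, not real captures):

```python
# Convolution is linear, so "two IRs in parallel" equals "two mic tracks
# summed at the desk" - provided nothing re-normalizes or re-aligns them.
import numpy as np

rng = np.random.default_rng(0)
signal = rng.standard_normal(1024)   # stand-in for the amp's output
ir_57 = rng.standard_normal(128)     # stand-in close-mic IR
ir_421 = rng.standard_normal(128)    # stand-in second-mic IR

# Two mics recorded to their own tracks, then summed:
two_tracks = np.convolve(signal, ir_57) + np.convolve(signal, ir_421)

# Two IRs run in parallel (equivalently, one pre-summed IR):
parallel_irs = np.convolve(signal, ir_57 + ir_421)

# The two results agree to floating-point precision:
print(np.allclose(two_tracks, parallel_irs))
```

Which is exactly why any post-capture normalization or alignment step is the thing that breaks the equivalence, not the parallel-IR topology itself.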

Somebody had to be the loser.
Yeah, and unfortunately as ever, the Apple-ification of the human mind, the universe and everything, continues to afflict us until our dying days. Probably.

You *can* adjust the alignment page 'distance' parameter on Axe III, and the 'delay' parameter on Helix, and more or less emulate the relationship. But you need to have Swirly's graph to hand all of the time, and his figures are close enough approximations from what I can gather.

If Line 6 and Fractal added a way to link their delay/distance parameters to the movement of the microphone, and weight the values so that they're realistic to what would happen in the real world... then that would solve it. But I don't see that happening.

Neural don't give you a parameter to control, so their cab block will NEVER sound realistic when using multiple microphones.
 
For what it is worth, here is a clip of two SM57 mics. One static, and one being swept around the front of a speaker:

You can't easily get that kind of interaction from these cab blocks (because you need to adjust a 3rd parameter separately), so they're not realistic.
 
To your first point, you are missing my point entirely. An SM57 with nothing around it is going to capture a very slightly differently nuanced IR compared to an SM57 with a 421 sitting right next to it and slightly forward of it. I.e., if we are talking ultimate nuance, the physical presence of a second microphone will, in some scenarios, affect the signal captured by the first microphone.

As to your second point... if you don't wanna Apple-ify your Axe III, just use Legacy mode for cabs and the built-in tool in the Axe-Fx to capture your own IRs.
 
To your first point, you are missing my point entirely. An SM57 with nothing around it is going to capture a very slightly differently nuanced IR compared to an SM57 with a 421 sitting right next to it and slightly forward of it. I.e., if we are talking ultimate nuance, the physical presence of a second microphone will, in some scenarios, affect the signal captured by the first microphone.
I await your measurements. I seriously doubt it, considering the pickup pattern of most cardioid and super-cardioid microphones.

As to your second point... if you don't wanna Apple-ify your Axe III, just use Legacy mode for cabs and the built-in tool in the Axe-Fx to capture your own IRs.
I refuse!! :rofl


I confess I find it curious that your inquisitiveness goes as far as considering the implications of the physical presence of another microphone, but not far enough to wonder whether time-aligning and normalizing IRs is actually the best thing to do. Ho hum.
 
Seems like they had two choices, based on memory alone: (1) save IRs raw, and then let folks go in and time align as best they can, or (2) save files as phase/time aligned, and then let those that want to mess with them mess with them.

"Time align as best they can" is sort of moot, because you either want the mics aligned in time (and they can do that for you already without needing to MPT every single distance), or you want some timing differences. If you have one mic pulled back and one up close, you're purposefully introducing a timing difference for tonal effect - it's a desired result, not a problem that needs to be solved every single time without question. If the user wants to mess with that relationship, they should be able to with the delay parameters. If they want things aligned, just set the mics to the same distance and hope that the capsules were aligned when the IRs were captured. Messing around with things should be reserved for the situation that requires manipulating time, not the one that captures what actually goes on.
For those that WANT phase coherence, my understanding is that #2 is going to give not just faster, but better results. For those that want to introduce phase differences, my understanding is that you're more likely to get "close enough" using #2 than those that want phase coherence are using #1. Somebody had to be the loser.

What guarantees it'll be better? There are FAR too many variables involved to make any kind of claim of faster or better. Phase coherence isn't absolute at every frequency. It is a desired outcome when blending mics, and that's why I think it's important that it reflects what actually happens in the real world. Mics can be time aligned and still be out of phase at frequencies that cause the tone to sound bad.
Blending mics is all about what happens with the phase - it's not as black and white as having everything in phase and it automatically sounding good, or everything being totally out of phase and automatically sounding bad. If everything is time aligned, then you have to introduce extra steps and guesswork to approximate real-life behaviours. This is totally backwards to me.

This idea that cabinets having their natural delay is some kind of oddity that only audio engineers are going to care about just doesn't hold up either. How would non-engineers know that they don’t want to have realistic phase interactions? It could be just the thing they were missing without realising it. It could well be a key part of tones they like, and the current behaviour of many cab sims makes this harder to achieve than just throwing mics on a cab and doing it for real. This sort of opposes what I think modelling should offer, which is options and convenience, with minimal (but inevitable) compromise.
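For the users who do want to align a distant capture to a close one manually, the usual trick is to estimate the lag from the cross-correlation peak of the two IRs and shift one of them. A hedged sketch of that step (NumPy assumed; the IRs here are synthetic toys, not real captures):

```python
# Estimate how far one IR lags another from the cross-correlation peak -
# the common "manual time alignment" step discussed above.
import numpy as np

def estimate_lag_samples(ir_a: np.ndarray, ir_b: np.ndarray) -> int:
    """Return how many samples ir_b lags ir_a (negative if it leads)."""
    xcorr = np.correlate(ir_b, ir_a, mode="full")
    # index len(ir_a)-1 corresponds to zero lag in 'full' mode
    return int(np.argmax(np.abs(xcorr))) - (len(ir_a) - 1)

# Toy check: ir_b is ir_a delayed by 17 samples.
ir_a = np.zeros(256)
ir_a[10] = 1.0
ir_a[40] = -0.5
ir_b = np.roll(ir_a, 17)
print(estimate_lag_samples(ir_a, ir_b))
```

Whether you then apply that shift, or keep the natural offset for its tonal effect, is exactly the choice the post above argues should be left to the user.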
 
I await your measurements. I seriously doubt it, considering the pickup pattern of most cardioid and super-cardioid microphones.
Probably going to get more variation from using different preamp channels, or say two different versions of the same mic (ideally SM57 1 and SM57 2 would be different mics with their own tolerances, rather than the same mic twice, but whatever).
 
Again, I can relate this to making BFD content.

It's a bit like arguing that time aligning overheads always sounds better, so all the content gets that done to it as standard... and then we tell the naysayers... "Balls to you! Use the room delay parameter!"

We'd be hauled over hot coals if we ever proposed such a thing.


And I can tell you from experience, it usually doesn't sound better. It makes the transient of a drum less impressive, and it messes with the tone.
 
I confess I find it curious that your inquisitiveness goes as far as considering the implications of the physical presence of another microphone, but not far enough to wonder whether time-aligning and normalizing IRs is actually the best thing to do. Ho hum.
Oh, it's worse than that. I actually intend to just keep on keepin' on in Legacy mode because... I've already got a solid, small, manageable collection of IRs from the factory set that works great for everything I do. Here I'm just "trying to give perspective", aka :stirthepot :stirthepot :stirthepot :beer :guiness
 