@MirrorProfiles @Orvillain In a real world "blend two (or more) mics" scenario, you have two different mics with different sensitivities going into two different channels with their own channel gain settings. I presume you don't make sure to calibrate that they're gain matched before you place them in front of the cab...? So aren't the levels a crapshoot anyway and you're just using your ears to adjust?
Yeah - the level is always likely to be adjusted anyway. But this is true whether the IRs are compensated for or not. There are so many other variables involved in the tone that will affect the volume that having to adjust again doesn’t really bother me much. Adjusting the master volume affects the overall tone a lot, and that in turn requires me to re-adjust the overall level anyway. It’s really no different to what would happen when micing up a real amp and cab; it’s just part of the process. No one complains about this stuff when working with real amps and cabs UNLESS it’s a problem, and then you might start thinking about normalising levels or delaying mics. It’s not exactly a default way of working, except with modellers.
We could also volume-match preamp gain, master volume and all kinds of things if we were going that route - but the mentality there is accuracy.
@Orvillain sees the volume normalisation as the bigger problem, because it changes how we might perceive a particular mic position compared to how we would in the real world.
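A minimal sketch of that point, in plain Python with made-up sample values (nothing here is measured data): once two IRs are peak-normalised, the level relationship between the mic positions is erased and has to be re-created by ear.

```python
# Hypothetical illustration: peak-normalising IRs discards the natural
# level difference between mic positions. Sample values are invented.

def peak_normalize(ir):
    """Scale an impulse response so its largest absolute sample is 1.0."""
    peak = max(abs(s) for s in ir)
    return [s / peak for s in ir]

close_mic = [0.0, 0.9, -0.4, 0.1]    # hot signal: mic right on the grille
room_mic  = [0.0, 0.3, -0.15, 0.05]  # quieter: same source, further back

# Before normalisation the room mic is ~9.5 dB down (20*log10(0.9/0.3)).
# After normalisation both peak at 1.0, so that relationship is gone.
print(max(abs(s) for s in peak_normalize(close_mic)))  # 1.0
print(max(abs(s) for s in peak_normalize(room_mic)))   # 1.0
```

Whether that’s a bug or a feature is exactly what’s being debated here - normalising makes positions easy to compare at equal loudness, at the cost of hiding how they’d actually sit against each other in a real room.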
I see the delay as the bigger problem, because the volume is definitely going to be adjusted and vary anyway, and different volumes are something we experience all the time. Having a distant mic reach our ears at the same time as a close mic is not something we’re really used to hearing, and I think it’s actually sacrificing a part of the real-world experience for no real reason.
The mics could be recorded in phase with each other and the distances recorded proportionally, so that a 57 and 421 at distance A are in phase with each other but less delayed than a 57 and 421 at distance B (which are also in phase with each other). Blending mics is going to change the phase of the signal anyway, so why not use what would happen in the real world as the starting point and allow the user to adjust from there?
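A rough sketch of that "proportional delay" idea, assuming the usual speed of sound and an assumed 48 kHz sample rate (the distances are invented for illustration): each mic keeps its real-world arrival time relative to the closest position instead of being zero-aligned.

```python
# Hypothetical sketch: preserve each mic position's real-world arrival
# delay relative to a reference distance, rather than time-aligning all
# IRs to zero. Distances and sample rate are assumed values.

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C
SAMPLE_RATE = 48000     # Hz

def arrival_delay_samples(distance_m, reference_m):
    """Extra delay (in samples) of a mic at distance_m, relative to one at reference_m."""
    return (distance_m - reference_m) / SPEED_OF_SOUND * SAMPLE_RATE

# Position A: 57 and 421 both at 0.1 m -> in phase, zero relative delay.
print(round(arrival_delay_samples(0.1, 0.1)))  # 0
# Position B: both at 1.0 m -> still in phase with each other, but
# arriving ~126 samples later than position A, as in a real room.
print(round(arrival_delay_samples(1.0, 0.1)))  # 126
```

The point of the proposal is just that last line: the B pair stays internally phase-coherent, but its blend against the A pair starts from the comb-filtering you’d get with real mics, and the user adjusts from there.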
IMO these fix problems that don’t really exist. How many users have ever complained about mics not being aligned? From my experience, it’s only happened when people try to blend IRs from different IR packs. I understand why people want that, but it’s so removed from how anything is done in studios or on classic recordings. It should be an optional extra rather than the default IMO.
FWIW I’m not saying there aren’t valid situations for this stuff - I just don’t think it’s good that it’s become the standard for most IR loaders. Slight exaggeration here, but imagine if DAWs just assumed you always wanted your tracks compressed, quantised and auto-tuned (you know, to make it easy for the non-engineer types) and said “oh, but you can edit and clip-gain them if you want to add the human feel back in”. Maybe fine as an option, but it should be the user’s decision.