Calibrating Input Level for Plugins

Audio Assault:
[screenshot of Audio Assault's support reply]



He said they're not using SPICE and that he just has it set so "DIs peak around -6 dB".

I asked Softube for a specific value and got this response:

"Well, "most accurate response" isn't really the valid question...
The model will react exactly as an amp. Push the input and it will distort more, just as the real thing.

The relevant question is some kind of 0 VU level reference. Our operating points are usually set somewhere in the -18 to -6 dBFS, and the exact level varies between plug-ins. The level is set when designing the plug-in to catch the feel of using the real unit. If distortion is an important part of making the model feel right, the level is set lower, and vice versa."

I asked again if they could let me know what those specific values are, so let's see.

Mercuriall also gave me a response:
The manual has recommendations for setting the input level.

"Recommendations for the plugin Input Gain settings
Please note - below is just our recommendation. You can set it up differently, if you wish.
Play your stringed instrument as hard as you can - for example some power chords on lower (bass) strings. Now observe what signal is coming into your DAW. We recommend that you aim for -6 dB. It means that when you hit strings hard, the input meter in your DAW will sometimes peak at -6 dB. If you are getting too close to clipping (0 dB), turn down the gain knob of your audio interface. If you are hitting strings hard and cannot reach -6 dB, then turn the gain knob up on your audio interface.
Once you find the optimal gain knob position, move on to setting up the Input Gain in Spark. Here are the recommended values:
1-2 for single-coil pickups
2-4 for low/mid-gain humbuckers
5-6 for high-gain humbuckers
Feel free to try other values if the recommended ones do not work for you."

and when I pressed them for a value, I got this:

"Setting one specific level and changing pickups/guitars is not the best option. We recommend getting the best signal to noise ratio, for each guitar it will be different gain values on your audio interface. After that, adjust the input level in the Ampbox, depending on which pickups you are using."
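
For what it's worth, checking that "peak around -6 dB when playing hard" target doesn't have to mean staring at a DAW meter. Here's a minimal sketch that reports the peak of a recorded DI take (the di_take.wav filename and the soundfile library are my own placeholders, not anything Mercuriall ships):

```python
# Report the peak level of a recorded DI take in dBFS.
import numpy as np
import soundfile as sf

data, sr = sf.read("di_take.wav")            # normalized float samples in [-1.0, 1.0]
peak = np.max(np.abs(data))
peak_dbfs = 20 * np.log10(peak) if peak > 0 else float("-inf")
print(f"peak level: {peak_dbfs:.1f} dBFS")   # aim for roughly -6 dBFS on hard playing
```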

Hard to know what else to ask of them if those are the best kind of answers available. I might go and bang my head against the wall and see if it helps me get out of 2017 and back to 2022. (I should add, all my dealings with these companies have been great, and they're consistently quick to reply, friendly, and helpful. Not all companies are, and Softube's customer support in particular is really impressive.)

I'm becoming increasingly curious as to why this is such a secretive thing, unless they just keep adjusting it at the last minute to make sure it's hard to fuck up no matter what device you're using.
 
The whole "get the best signal-to-noise ratio" advice is moot to me. I don't think I've recorded anything with considerable noise (from the preamps + conversion) since about 2004.

LOL at anyone getting more noise from setting their interface gain 3 dB too low than they get from EMI, background amp noise, pedal noise, etc. I'm using a Class A DI box and an A/D with ridiculous specs - even with most gear these days it really shouldn't be a concern unless several factors are REALLY wrong.

 


Okay, so far the only company that knows what we/they are talking about is STL.

Even if they're all using SPICE models from GitHub, which by default convert the signed PCM to a virtual 2.0 Vpp, it must have come to their attention that some guitars are hotter than 2 Vpp and some compensation for that is needed?
If they compensate for that in their design with an "internal boost", at least give us that value and confirm that your reference is 0 dBFS = 1 Vp, like STL did.
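
To make that convention concrete, here's a rough sketch of what such a front end could look like: normalized float samples scaled so that 0 dBFS corresponds to 1 V peak (2 Vpp), with the unknown per-plugin compensation left as a placeholder boost. This is purely illustrative, not any vendor's actual code:

```python
import numpy as np

FS_TO_VOLTS_PEAK = 1.0     # convention discussed above: 0 dBFS == 1 V peak == 2.0 Vpp
INTERNAL_BOOST_DB = 0.0    # placeholder for the unknown per-plugin compensation

def samples_to_virtual_volts(x: np.ndarray) -> np.ndarray:
    """Map normalized float samples (-1..+1) to the voltage the circuit model 'sees'."""
    return x * FS_TO_VOLTS_PEAK * 10 ** (INTERNAL_BOOST_DB / 20)

# A full-scale sine becomes a 1 V peak (2 Vpp) drive signal for the virtual circuit.
t = np.linspace(0, 1, 48000, endpoint=False)
print(samples_to_virtual_volts(np.sin(2 * np.pi * 1000 * t)).max())   # ~1.0
```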

EDIT:
I really think they all need to cut the "play hard to hit -x dB" crap and provide real numbers for the people who ask for them, or shut up about realism and accuracy. At least put them on the website if not in the manual.
 
I was actually wondering if we could use the fact that a lot of the Plugin Alliance amp sims were UAD-exclusive at first. Maybe they were calibrated for use with their Apollo interfaces?

I believe the Softube Marshalls were UAD-first too, and UA has their own Fender sim.
 
I was actually wondering if we could use the fact that a lot of the Plugin Alliance amp sims were UAD-exclusive at first. Maybe they were calibrated for use with their Apollo interfaces?

Well, there are a few dBFS alignment levels commonly used across production industries, most of them referenced to dBu.

Outside the US, many use +4 dBu = -18 dBFS, or -18 dBFS = 0 VU, or +22 dBu = 0 dBFS; they are all the same alignment.
So UAD didn't reinvent the wheel; they are using the most common alignment levels in professional production.

Source:
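
Expressed as arithmetic, fixing any one of those statements fixes the other two, since 0 VU = +4 dBu and the dBu-to-dBFS offset is a single constant under a given alignment. A minimal sketch:

```python
# dBu <-> dBFS under the common +4 dBu = -18 dBFS (i.e. +22 dBu = 0 dBFS) alignment.
ALIGNMENT_DBU_AT_0DBFS = 22.0

def dbu_to_dbfs(dbu: float) -> float:
    return dbu - ALIGNMENT_DBU_AT_0DBFS

def dbfs_to_dbu(dbfs: float) -> float:
    return dbfs + ALIGNMENT_DBU_AT_0DBFS

print(dbu_to_dbfs(4.0))    # -18.0 -> +4 dBu (0 VU) sits at -18 dBFS
print(dbfs_to_dbu(0.0))    #  22.0 -> full scale corresponds to +22 dBu
```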
 
Well, there are a few dBFS alignment levels commonly used across production industries, most of them referenced to dBu.

Outside the US, many use +4 dBu = -18 dBFS, or -18 dBFS = 0 VU, or +22 dBu = 0 dBFS; they are all the same alignment.
So UAD didn't reinvent the wheel; they are using the most common alignment levels in professional production.

Source:
What I meant was, some of these plugins were developed before Apollos were even a thing, but the amp sim plugins on their system would have been designed to be used with the Apollo interfaces for low-latency recording.

I think the EQs and compressors are generally at +4 dBu = -18 dBFS because it's just a general standard and leaves a good amount of headroom, but they do seem to use different calibrations where the signal may be hotter (like the Ampex and bus compressors in the list above).

That information above only applies to UA-developed plugins, so I don't think we can assume the Brainworx-developed stuff is always the same. But I think the amp sim plugins would have been optimised for use with the Apollo DIs and A/D calibration.
 
But I think the amp sim plugins would have been optimised for use with the Apollo DIs and A/D calibration.
Yeah, makes sense.

Apollo Twin X manual (LINK) for Hi-Z Input:
Maximum Input Level (at minimum gain) = 12.2 dBu.

[Apollo Twin X Hi-Z input specifications]


Using this converter: dB to Volts, or this converter.
12.2 dBu = 3.156 V RMS = 4.46 V peak = 0 dBFS

And 0.707 V RMS at the input with the gain at zero would hit:
20*log(0.707/3.156) = -13 dBFS

That's very close to Helix Native's internal calibration of -12 dBFS, so if anyone is using an Apollo Twin X, leave the interface gain at zero and boost by +1 dB in Native.

EDIT:
All the Focusrite Scarlett 3rd Gen High-Z instrument inputs have a maximum input level of 12.5 dBu (at minimum gain).
0.707 V RMS will hit -13.29 dBFS, so keep the interface gain at zero and boost by +1.3 dB in Native if you have a Scarlett Gen 3.
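
For anyone who wants to run the same numbers on a different interface, here's a minimal sketch of the arithmetic used above (the 0.7746 V dBu reference and the -12 dBFS Helix Native target are the figures from this thread; the spec values are the ones quoted above):

```python
import math

DBU_REF_VOLTS = 0.7746            # 0 dBu = 0.7746 V RMS
NATIVE_TARGET_DBFS = -12          # Helix Native's internal calibration, per the above

def guitar_level_dbfs(max_input_dbu: float, guitar_vrms: float = 0.707) -> float:
    """dBFS that a given RMS voltage hits with the interface gain at minimum."""
    max_vrms = DBU_REF_VOLTS * 10 ** (max_input_dbu / 20)   # voltage at 0 dBFS
    return 20 * math.log10(guitar_vrms / max_vrms)

for name, spec_dbu in [("Apollo Twin X Hi-Z", 12.2), ("Scarlett 3rd Gen Inst", 12.5)]:
    level = guitar_level_dbfs(spec_dbu)
    print(f"{name}: {level:.2f} dBFS, boost {NATIVE_TARGET_DBFS - level:+.1f} dB")
# Apollo Twin X Hi-Z:    ~ -13.0 dBFS, boost ~ +1.0 dB
# Scarlett 3rd Gen Inst: ~ -13.3 dBFS, boost ~ +1.3 dB
```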

[Focusrite Scarlett 3rd Gen instrument input specifications]
 
@TheTrueZoltan!

PreSonus Quantum 2 (Manual):

[PreSonus Quantum 2 instrument input specifications]


+15 dBu = 4.356 V RMS.
20*log(0.707/4.356) = -15.79 dBFS.
20*log(0.500/4.356) = -18.80 dBFS.

With 0.707 V RMS at the instrument input it should hit -15.79 dBFS, and with 0.5 V RMS it should hit -18.8 dBFS.

This doesn't sit well with your real measurements.
The instrument input of my audio interface (PreSonus Quantum 2) has to be boosted by 12dB to have the optimal level for Helix Native.

:unsure:
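
To spell out the mismatch: plugging the +15 dBu spec into the same formula predicts only a few dB of boost to reach the -12 dBFS ballpark discussed earlier, nowhere near 12 dB. A quick sketch using only the figures above:

```python
import math

max_vrms = 0.7746 * 10 ** (15 / 20)          # +15 dBu spec -> ~4.356 V RMS at 0 dBFS
level = 20 * math.log10(0.707 / max_vrms)    # ~ -15.8 dBFS for a 0.707 V RMS signal
print(f"spec-predicted boost to -12 dBFS: {-12 - level:+.1f} dB")   # ~ +3.8 dB, not 12
```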
 
[...]

With 0.707 V RMS at the instrument input it should hit -15.79 dBFS, and with 0.5 V RMS it should hit -18.8 dBFS.

This doesn't sit well with your real measurements.


:unsure:
Then something must be wrong with your calculations, James. When I still had my HX Stomp, 12dB was the amount of gain I had to add on my Quantum to match the DI level of the Stomp. And the DVM test I did yesterday confirmed this.

Another strange thing I experienced: When I add 12dB of gain directly on my interface, the noise floor stays quiet. But when I use a gain plug-in in my DAW instead to add 12dB, the noise floor rises significantly. This should normally be the opposite... however, the effect it has on the guitar DI level is the same.
 
Then something must be wrong with your calculations, James. When I still had my HX Stomp, 12dB was the amount of gain I had to add on my Quantum to match the DI level of the Stomp. And the DVM test I did yesterday confirmed this.

That's weird; 12 dB is far outside the +15 dBu spec given by PreSonus.

I've tested with my Focusrite Saffire: I fed the instrument input a 1 kHz sine that reaches 0 dBFS and measured the RMS voltage, and it's right on the +7 dBu spec in the user manual.
I could be wrong, but so far calculations match measurements here.


[attachment: Interface Max Input.jpg]


[attachment: RMS to dBu.jpg]


----

Another strange thing I experienced: When I add 12dB of gain directly on my interface, the noise floor stays quiet.

Yes, the analog gain on the audio interface doesn't raise the noise floor as much as boosting in the DAW.
You can boost the analog gain if you have the headroom, but you'll have to re-calibrate every time you touch it; that's why I suggested keeping it at zero for consistency's sake.
It's very strange that you have to boost by 12 dB, as that's way outside the given spec, and it would be terrible for the noise floor if you boosted 12 dB in the DAW.
 
[...] I've tested with my Focusrite Saffire: I fed the instrument input a 1 kHz sine that reaches 0 dBFS and measured the RMS voltage, and it's right on the +7 dBu spec in the user manual. [...]
Ha, the loudest 1 kHz sine wave I can create in Logic without clipping the master output (according to the manual the maximum output level is 18 dBu, and the DVM reads 6.175 V = 18.03 dBu, so that correlates closely) results in only -5.8 dBFS when fed back into the instrument input. A 0.708 V sine wave reaches -24.3 dBFS, and 0.501 V lands at -27.3 dBFS.

:columbo

I tested again and again, but a 12 dB boost is definitely needed to get the level right for Helix Native. I checked all my settings, and everything's correct. It's not really a problem, though; the noise floor is absolutely fine when I add the gain on the interface itself. All good.
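
Working backwards from those measurements (a rough sanity check using only the figures quoted above, not a claim about the published spec): a 0.708 V sine landing at -24.3 dBFS implies the instrument input's full scale is around 11.6 V RMS (roughly +23.5 dBu), and adding the 12 dB of interface gain puts a 0.707 V guitar peak right around -12.3 dBFS, i.e. in Helix Native's ballpark.

```python
import math

DBU_REF_VOLTS = 0.7746
vin, measured_dbfs = 0.708, -24.3                      # DVM and meter readings above

full_scale_vrms = vin * 10 ** (-measured_dbfs / 20)    # voltage that would hit 0 dBFS
implied_dbu = 20 * math.log10(full_scale_vrms / DBU_REF_VOLTS)
print(f"implied 0 dBFS point: {full_scale_vrms:.1f} V RMS ({implied_dbu:+.1f} dBu)")
print(f"level after +12 dB of interface gain: {measured_dbfs + 12:.1f} dBFS")  # ~ -12.3
```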
 
Latest stance from Softube:

"How do you use guitar amp? Not many guitar players use a voltmeter to set their levels.

The converter isn't really relevant, the plug-in does't really see that, it sees the dBFS levels after the converter...

I don't think we have a list of operating points, however in the case of the guitar amp models it isn't entirely relevant, as that is basically exactly the way a guitar amp works. More gain in, more distortion in the preamp, and vice versa...

But somewhere in the region -18 to -6 is a starting point. I think that in the case of our guitar stuff, you can probably aim in the higher part of that interval first then reduce gain somewhere if you get a bit too much distortion."

Interestingly, I found this post regarding the UAD calibration levels. It seems it took quite a lot of work to get this information from them, but thankfully they understood why it matters and have included it ever since.
 
EDIT:
All the Focusrite Scarlett 3rd Gen High-Z instrument inputs have a maximum input level of 12.5 dBu (at minimum gain).
0.707 V RMS will hit -13.29 dBFS, so keep the interface gain at zero and boost by +1.3 dB in Native if you have a Scarlett Gen 3.


I was going to join this discussion by commenting exactly that about the Scarlett Solo.

I started paying closer attention to input levels after switching to the HX Stomp as my interface; without a gain knob or block, all my presets started sounding really different with different guitars, and I was like, WTH?!

So, after some research, I ended up at the same conclusion: you can't just set everything to the maximum "not clipping" level; that's not how things work.

Then I calibrated my Scarlett Solo 3rd Gen with a boost of 1 dB, using my HX as a reference (I missed the .3 at some point, but it's close).

I think Native takes it from there, so at some point within the modeling, with a 12 dB boost, my signal will be at exactly the level it would be if it were connected to the real amp.

With STL, we need a 6 dB boost instead of 12 dB to reach that same point.

So, with my DiMarzio-equipped (medium-to-high-output) guitar, after all the calibration, the peaks sit at about -12 dBFS with roughly -18 dBFS RMS (depending on what's being played, of course).

Enter NeuralDSP: at some point they sent an email with this:
"GAIN STAGE - Use a gain plugin when it’s time to mix to turn your guitar’s average level to -18dBFS and get that clean, almost noise-free recording." (The words "GAIN STAGE", in my opinion, point to input level.)

Which is annoying, because it leads to the problem where all guitars end up at about the same level and so on, but at the same time it gives a hint of what the plugin might expect.
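
Taken literally, that recommendation is just an RMS measurement plus a make-up gain calculation. A minimal sketch, assuming a mono DI bounce saved as di_take.wav (the filename and the soundfile library are my choices, not anything NeuralDSP specifies):

```python
# Measure the average (RMS) level of a DI take and the gain needed to sit at -18 dBFS.
import numpy as np
import soundfile as sf

data, sr = sf.read("di_take.wav")       # normalized float samples
rms_dbfs = 20 * np.log10(np.sqrt(np.mean(np.square(data))))
print(f"average: {rms_dbfs:.1f} dBFS, gain to -18 dBFS: {-18 - rms_dbfs:+.1f} dB")
```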

Regardless, using the Soldano model on both plugins (which I believe are very accurate models), and knowing that at least my HX Stomp and Focusrite are correctly calibrated, I compared the amount of gain, and to me it sounds correct (if there's a difference, it's maybe within ±1.5 dB).

That being said, and although I don't agree with it, maybe they see this as part of the modeling. Maybe the internal boost is not the first stage before "entering" the magic, something like that. In any case, just a hint of what the developers expect would be awesome.

Thanks James Freeman and MirrorProfiles for all the great info in this topic!
 
So the conclusion would be that we are looking for an audio interface with a maximum input level of exactly 11.5 dBu (and a 1 MΩ, or even variable, input impedance, of course)?! Arturia MiniFuse

:banana
 
Then I calibrated my Scarlett Solo 3rd Gen with a boost of 1 dB, using my HX as a reference (I missed the .3 at some point, but it's close).
+1 dB matches the measured value of -12.3 dBFS, so yeah, exactly +1 dB of boost for Focusrite Scarlett interfaces.


So the conclusion would be that we are looking for an audio interface with a maximum input level of exactly 11.2dBU (and a 1MΩ input impedance, of course)?! :banana

The Arturia MiniFuse Interfaces seem pretty close: 11.5dBU

dBu :p
So many darn dB formats that every letter means something entirely different. :LOL:

Yeah, a maximum input level of +11.5 dBu is the perfect interface spec for Native; it gives you exactly the same level as the HX hardware without touching anything.
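
As a quick check of that figure using the same math as earlier in the thread: a +11.5 dBu maximum input level puts a 0.707 V RMS guitar peak at about -12.3 dBFS, i.e. right on the measured HX level with the interface gain untouched.

```python
import math

max_vrms = 0.7746 * 10 ** (11.5 / 20)   # +11.5 dBu max input -> ~2.91 V RMS at 0 dBFS
print(f"0.707 V RMS lands at {20 * math.log10(0.707 / max_vrms):.1f} dBFS")   # ~ -12.3
```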
 
Or maybe the developers can answer the question:

"Call you please tell us the specs of the hardware used when you were testing the accuracy of your model so we can match it with any interface?"

Without that, we have precision but not accuracy. We have a very consistent bow but it's not aiming at the center.
 
Best thing ever would be if Line 6 could optimize the drivers for the Helix/HX devices to bring down the round-trip latency. Some of us would probably not need other interfaces anymore and would just use a Helix instead. I know I would.
 