Calibrating Input Level for Plugins

How do I send a sine wave and calculate the amount of gain added on the interface (for electric guitar)?
Somebody please explain.
How do we use a tone generator plugin to create a sine wave in Logic Pro for electric guitar?
In REAPER:
1. Add a tone generator to a track and use the values shown in the attached screenshot (a scripted alternative for generating the test tone is sketched after these steps).



2. Route the output of the SINE track to whichever output goes into your reamp box:

(routing screenshot attached)


3. Add a new track, arm it for recording on your instrument input (gain at 0 on the interface), and connect a cable from your reamp box into this instrument input.
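If your DAW doesn't have a tone generator handy (see the Logic Pro question further down), a minimal sketch like this writes a 1 kHz sine at 0 dBFS peak to a WAV file you can drop on the SINE track instead; the filename, frequency, sample rate and length are arbitrary choices:

```python
import math
import struct
import wave

# 1 kHz sine at (essentially) 0 dBFS peak, 10 s at 48 kHz, 16-bit mono.
# All of these values are arbitrary choices for illustration.
sr, freq, seconds = 48000, 1000, 10

frames = bytearray()
for n in range(sr * seconds):
    sample = int(32767 * math.sin(2 * math.pi * freq * n / sr))
    frames += struct.pack("<h", sample)   # little-endian signed 16-bit

with wave.open("sine_1khz_0dbfs.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)     # 16-bit samples
    w.setframerate(sr)
    w.writeframes(bytes(frames))
```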
 

I joined this forum hoping this was a kind of yes-or-no question. My feeling is that this is a very opinionated topic, and I was hoping for an engineering perspective (ideally from someone in the audio interface design industry) to explain why the gain knob exists on the instrument/Hi-Z input. Getting rid of that gain knob would settle the debate, right? Right??
 
I joined this forum hoping this was a kind of yes-or-no question. My feeling is that this is a very opinionated topic, and I was hoping for an engineering perspective (ideally from someone in the audio interface design industry) to explain why the gain knob exists on the instrument/Hi-Z input. Getting rid of that gain knob would settle the debate, right? Right??

I think all the information you need is in this thread already. If it doesn’t fully make sense yet, just take your time to go through it. There’s nothing contentious about it and the science is all well understood.

It’s not an A vs B subject - you have the information in front of you and you can approach it whichever way you like. The gain knob is there because there are more instruments in the world than just passive guitar pickups and it makes an interface functionally more versatile. Ever wondered why hardware modellers don’t require you to set your level to “just below clipping”?
 
Thank you @MirrorProfiles for the kind words. That is why I have always preferred to use my floor modeler instead of my computer; that decision is taken care of for me.
Thank you all for your patience with me.
Take care.
 
In REAPER:
1. Add a tone generator to a track and use the values shown in the attached screenshot.
2. Route the output of the SINE track to whichever output goes into your reamp box.
3. Add a new track, arm it for recording on your instrument input (gain at 0 on the interface), and connect a cable from your reamp box into this instrument input.
Sir, thank you very much, this is good info for REAPER users, but I am using Logic Pro.
How am I going to use the Audient Sono as a reamp box? There is an amp out.
 
Sir, thank you very much, this is good info for REAPER users, but I am using Logic Pro.
Well, you could download REAPER, since it has a free evaluation mode, if you just want to measure that stuff and uninstall it afterwards; or figure out how to do it in Logic.
 
How am I going to use the Audient Sono as a reamp box? There is an amp out.
The amp out carries your dry signal to wherever you want it; does this mean it can be used for reamping?
 
In REAPER:
1. Add a tone generator to a track and use the values shown in the attached screenshot.
2. Route the output of the SINE track to whichever output goes into your reamp box.
3. Add a new track, arm it for recording on your instrument input (gain at 0 on the interface), and connect a cable from your reamp box into this instrument input.

In the third step you say "add a new track", but I only see one track from beginning to end?
And if I turn up the knob on the interface, where will I see the gain values?
 
In the third step you say "add a new track", but I only see one track from beginning to end?
Just right-click and select "Insert New Track".

And if I turn up the knob on the interface, where will I see the gain values?


That's the purpose you're doing this for, isn't it?
You route that 0 dBFS signal out of the DAW and measure it with a voltmeter to get the Vrms value.
Then you plug the cable into the instrument input, take note of the signal strength you see on that track, and follow @MirrorProfiles' guide/conversion to figure out your input headroom.
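As a rough sketch of that conversion (the readings below are hypothetical placeholders, not measurements from any particular interface; dBu is referenced to 0.7746 V RMS):

```python
import math

# Hypothetical readings -- substitute your own.
vrms_at_output = 1.228      # voltmeter reading (V RMS) while the DAW plays a 0 dBFS sine
recorded_peak_dbfs = -6.0   # level that same signal shows on the record-armed input track

# Convert the measured voltage to dBu (0 dBu = 0.7746 V RMS).
output_level_dbu = 20 * math.log10(vrms_at_output / 0.7746)

# If a signal of `output_level_dbu` lands at `recorded_peak_dbfs`, the input
# reaches 0 dBFS at this analog level -- your interface's input headroom.
max_input_dbu = output_level_dbu - recorded_peak_dbfs

print(f"0 dBFS out of the DAW measures {output_level_dbu:.1f} dBu at the output")
print(f"the instrument input hits 0 dBFS at roughly {max_input_dbu:.1f} dBu")
```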
 
@yeky83 - please deal with the argument rather than poo emojis. If I have something wrong, educate me. But from everything I've read, my reasoning checks out.

Optimizing the ADC and DAC as a pair - like Fractal do - is a different thing, btw.
 
Yes, if you record a signal very quietly and then apply a lot of gain later, you raise everything: signal, source noise, and ADC noise. That part is not in dispute.
Right.

The real question is whether ADC noise is ever the limiting factor in a modern guitar DI into interface into amp sim workflow. In practice, it almost never is.
Not a limiting factor, but you can "improve things" with proper gain staging, as my example above shows.

Modern interfaces have ADC noise floors around -110 to -120 dBFS. A real electric guitar DI, especially a low output single coil, typically has a noise floor much higher than that, often around -70 to -80 dBFS once pickup noise, hum, cables, and EMI are included. That means the guitar is already 30 to 40 dB noisier than the converter.
Right, but the ADC noise is wideband (hisssss) and the guitar noise generally is not, so the guitar noise won't dominate across the whole spectrum. The hiss becomes audible once it's amplified enough, as with a lead-type amp model in the digital domain.
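As a quick sanity check on those ballpark figures (the -75 and -115 dBFS values are just the mid-points of the ranges quoted above, not measurements), uncorrelated noise sources add as power:

```python
import math

guitar_noise_dbfs = -75.0   # ballpark DI noise floor from the quote above
adc_noise_dbfs = -115.0     # ballpark converter noise floor

# Uncorrelated noise adds as power: convert to linear power, sum, convert back.
combined = 10 * math.log10(10 ** (guitar_noise_dbfs / 10) + 10 ** (adc_noise_dbfs / 10))
print(f"combined noise floor: {combined:.4f} dBFS")  # ~ -74.9996 dBFS: the converter barely moves the total level
```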

When you add digital gain, the relative relationship does not change. The guitar signal and its own noise remain dominant, and the ADC noise stays buried underneath. Digital gain does not reshuffle which noise source is dominant.
Correct. But there's nuance here.

The noise of the guitar input signal is not typically wideband and uniform. Furthermore, we use our guitar volume knobs and we are not always strumming the guitar at max level 100% of the time. That's what I tried to show in my example: if you do want to squeeze out that extra bit of performance, it is still relevant to gain stage properly, as you can still improve things.

The old record as hot as possible advice made sense with 16 bit converters, early ADCs, and tape workflows. In modern 24 bit systems with roughly 120 dB of real dynamic range, it no longer applies in the same way.
For the most part, sure, but you can still extract a bit of performance that is perceptibly audible in the context of amp modeling, as, again, I showed in my example.

This is not an argument for recording extremely quietly. The sensible modern advice is simply to record at a reasonable level with plenty of headroom and avoid clipping.
Agree.

For guitar DI, peaks around -18 to -12 dBFS are perfectly fine. Recording hotter than that does not improve sound quality or signal to noise ratio, it just reduces headroom.
Agree on SNR. I disagree on "does not improve sound quality": if I reduce the amount of perceivable, audible hiss (ADC noise), I personally would consider that improving sound quality, as, again, I've shown in my example.

If a system is linear and time-invariant, gain multiplies signal and noise equally. Therefore, gain cannot improve the signal-to-noise ratio of the source.
Correct, but I don't think anyone in this thread is saying the input source SNR will be improved?

Optimizing the ADC and DAC as a pair - like Fractal do - is a different thing, btw.
You mean preserving unity gain on some inputs? Sure.

However, for input 1 on the Axe-Fx III, you can adjust the "Input A/D Sensitivity", whose purpose is to do exactly what I'm describing: increase analog preamp gain before the A/D so as to optimize the front end for your source instrument and improve the noise performance (decrease potentially audible hiss once you use a higher-gain amp model).
It compensates in the digital domain so that it remains calibrated - the same thing I do manually with my interface and the plugin's input gain in the digital domain (once I have a mapping of dBu to dBFS).
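The manual version of that compensation is just equal-and-opposite trim; a tiny sketch with a hypothetical gain figure:

```python
# Hypothetical example of keeping the dBu -> dBFS mapping constant while
# raising the interface's analog input gain.
added_analog_gain_db = 10.0               # how much the knob adds (measured, e.g. with the sine method above)
digital_trim_db = -added_analog_gain_db   # equal-and-opposite trim at the plugin's input

# The overall signal level is unchanged, but ADC noise (fixed in dBFS terms) now
# sits `added_analog_gain_db` dB further below the signal reaching the amp model.
print(f"set the plugin input trim to {digital_trim_db:+.1f} dB")
```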
 
I joined this forum hoping this was a kind of yes-or-no question. My feeling is that this is a very opinionated topic, and I was hoping for an engineering perspective (ideally from someone in the audio interface design industry) to explain why the gain knob exists on the instrument/Hi-Z input. Getting rid of that gain knob would settle the debate, right? Right??
Why would they get rid of it? The whole point is for you to adjust to your source - the interface doesn't know what you are connecting (think low-output single-coil pickups, for example).
 
Hello @2dor, @MirrorProfiles and @volkan,
I created an account just to ask my question because I share @volkan's confusion.
TL;DR question:
- I am quoting @2dor's critical information, which is the basis of this whole calibration business: the plugin author (here, for instance, Neural DSP) provides a calibration figure based on THEIR audio interface (12.2 dBu in this case). Are they also setting their audio interface gain knob to 0?
I went over this in the Mayer plugin thread - but the main issue here is that plugins have no idea what the mapping from real world voltage level to digital level is because audio interfaces all differ in their circuit design, converter choice, etc.

With amp modeling you need to have some sense of what voltage the digital signal represents, so they will assume a default mapping. Allegedly, for NeuralDSP plugins, that mapping is that a 12.2 dBu analog signal corresponds to a 0 dBFS digital signal - and that's an ASSUMPTION by the plugin.

With the NAM plugin you can change this mapping directly in the calibration settings.

Other plugins assume other mappings.

Now, as a user, what's the easiest way to know what the mapping is for your particular interface? Well, most audio interfaces specify in their documentation their maximum analog input level in dBu at the lowest gain setting. In other words, they are telling you that if you input a signal at <whatever their max dBu spec is>, it results in 0 dBFS in the digital domain.

This is where the "lowest gain" thing came from - because it's easy then for you to know the default mapping without measuring anything.
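Turning that spec-sheet mapping into a plugin trim is then simple arithmetic; a minimal sketch, where the 12.2 dBu figure is the alleged NeuralDSP assumption quoted above and the +18 dBu maximum input is an arbitrary example spec:

```python
plugin_ref_dbu = 12.2        # alleged plugin assumption: 12.2 dBu -> 0 dBFS
interface_max_in_dbu = 18.0  # example spec: "maximum input level +18 dBu" at minimum gain
knob_gain_db = 0.0           # any extra analog gain dialed in on top of minimum

# A signal of X dBu records at X - (interface_max_in_dbu - knob_gain_db) dBFS,
# but the plugin expects it at X - plugin_ref_dbu dBFS, so trim by the difference.
input_trim_db = (interface_max_in_dbu - knob_gain_db) - plugin_ref_dbu
print(f"set the plugin's input gain to {input_trim_db:+.1f} dB")  # +5.8 dB in this example
```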

It's not that NDSP sets "their" interface to the lowest gain.


Elaborated question:
This whole mess of "set your audio interface input gain to zero" started from the community, not from plugin devs (to my knowledge); they (like Scuffhamamps and many others) also recommend setting your audio interface gain high enough to avoid clipping while maximising SNR.
Yes, as said above, mainly to avoid having to measure anything.

If this had started from plugin devs to ensure the most homogeneous user experience, they would have clearly stated: "We, the authors of this plugin, set our audio interface gain to zero to have a reference based on the headroom, at zero gain, of the audio interface used during development, and we recommend you (the users) do the same on your audio interface so that we are on the same page."
Even if the devs said that, it would still not be homogeneous unless you used the exact same interface. Audio interfaces all have different specs.
What would help is for plugin devs to do the same thing as the NAM plugin, to start, and just allow you to easily enter your own custom mapping of dBu => 0 dBFS.

In order for this to become transparent to users, there would have to be an audio API to query that signal-level mapping from the interface itself (and have that mapping updated as you change the input gain as well).

Because of the lack of such a statement from plugin devs, I speculate about two options:
Option 1 - they also set their audio interface gain in a way that maximises SNR and avoids clipping, but provide the audio interface headroom at zero gain just as a (worthless) reference. Thus the reference point that we consider golden is actually wrong, and we are all misled by the plugin calibration data.
As said above, they need to assume SOME mapping - for example, if you are doing component modeling, you would like to know what voltage level your model is getting, and a digital signal level alone does not provide that information - so they assume a mapping. Potentially, they may have arrived at that default mapping by averaging the input specs of all current popular audio interfaces.

Option 2 - plugin devs in general have special/high-end audio interfaces that allow them to still maximise SNR by turning the gain knob, plus provide a readout of the headroom (i.e. maximum input) left at that gain setting (essentially an automated version of what @MirrorProfiles is suggesting with the sine-wave method), and then this calibration method with gain set to 0 is still valid for us end users.
Calibration is calibration - gain at 0 is a shortcut so users can avoid having to measure anything and just read their interface spec sheet to obtain the correct mapping for their interface. If your input gain knob is digitally controlled and the interface reports how many dB it is adding, then you can also just adjust your calibration by that amount without having to measure.

But the most concrete way to get this mapping for your particular interface, at whatever input gain you are using, is, well, to just measure it.

I hope I expressed my question well (English is not my main language, not even my second).
Kindly,
LiCoRn
Hope the explanations above help.
 
Correct, but I don't think anyone in this thread is saying the input source SNR will be improved?
Most people who say "record as hot as possible" do in fact think that, which is what I was originally combating.

High-gain guitar exposing wideband ADC noise does not mean that recording hotter improves source SNR, resolution, or “uses more bits”. The system remains linear up to the ADC, and gain still multiplies signal and noise equally.

What it does mean is that in a nonlinear, high-gain amp-modeling chain, the spectral character of residual noise matters perceptually. Wideband ADC hiss can become audible after heavy distortion in situations where guitar noise, which is spectrally shaped and level-dependent, is less intrusive.

Optimising analogue gain before the ADC can therefore reduce the audibility of that wideband hiss by pushing it further below the nonlinear stages - not by improving SNR, but by reducing how much ADC noise is presented to those stages in the first place.

I will accept that the above could be construed as "improving sound quality" but this is not a defense of "record as hot as possible" at all. It is a defense of what I've been saying - use converters that don't exacerbate the issue as much.

A guitarist should still record at sensible levels with headroom. The point is that some converters simply behave better than others once you start applying large amounts of downstream digital gain and distortion.
 
Another video with an example of the noise improvements from turning your preamp up so your signal reaches 0 dBFS.

I know there are caveats and exceptions, but I purposefully used a sub-£100 interface with a bog-standard Telecaster and Les Paul Traditional. That Ghost Note Audio video has all kinds of misinformation in the comments and there's been a load of discussion about it again recently. It's easy enough for anyone to test for themselves, and if you get an improvement, do what you gotta do.

 
That's what I don't understand with this whole thing. Is it that difficult to use your ears for it? I went with the "goose it until it just clips" mentality at first, since it was the widespread truth at the time (I guess it still kind of is). I could never get a nice clean sound and bought a modeler instead. Then, when I got educated by reading up on it on this forum (some damn good insights, knowledge sharing and expertise here, so thank you!), I tested the proposed solution based on the specs of my interface and it immediately just sounded much better. And I can't say I have any additional issues with noise either, compared to using a Helix HW modeler. If you do, then it's a different story that warrants troubleshooting the problem. Or am I totally missing some important aspects here?

TLDR: are there issues with using your ears to do gain staging that I don’t understand? The info is there, why not try it out?
 
You mean preserving unity gain on some inputs? Sure.
BTW actually, no. That isn't what I meant.

Fractal allow you to perform a boost/pad operation on select outputs. This boosts the signal going into the ADC, and pads the signal on the DAC. Without the pad at the DAC, you wouldn't actually reduce noise at all. You would just sacrifice headroom, making any existing ADC noise less observable; but all the same, still there.

In such a situation, it is the attenuation that pushes the noise floor down; again at the cost of headroom. You can easily clip the ADC's if you boost too much.

I definitely notice that the Axe3 lowers noise when I boost/pad for output 3. It is particularly helpful for 4-cable-method with real amps. But that's quite a different situation versus plugging your guitar straight into a Hi-Z input. It is also worth recognising that without the corresponding pad, boosting the ADC would not appreciably reduce noise. It would in fact boost the source noise, which masks out the ADC noise anyway.
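A hypothetical sketch of that trade-off (the numbers are illustrative, not Fractal specs): with a matched boost and pad, the converter noise referred to the analog loop drops by the pad amount, while the analog level that clips the converter drops by the boost amount.

```python
boost_db = 6.0                 # boost applied before the converter (illustrative value)
pad_db = 6.0                   # matching pad applied on the way back out

converter_noise_db = -115.0    # converter noise floor relative to full scale (illustrative)
clip_point_dbu = 18.0          # analog level that clips the unboosted path (illustrative)

# Matched boost/pad keeps the signal at unity gain overall, the pad attenuates
# converter noise at the analog output, and the boost lowers the clip point.
noise_after = converter_noise_db - pad_db
clip_after = clip_point_dbu - boost_db
print(f"converter noise: {converter_noise_db:.0f} -> {noise_after:.0f} dB re. full scale")
print(f"clip point: {clip_point_dbu:.0f} -> {clip_after:.0f} dBu")
```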

Now personally, I think headroom is important. I usually aim for about -12 to -10 dBFS on my meters in a DAW, and funnily enough, just setting my RME UFXII Hi-Z gain to the minimum value (which is actually +8 dB on RME interfaces) gets me exactly that. Which means when reamping later on, there's no guesswork.

Another video with an example of the noise improvements from turning your preamp up so your signal reaches 0 dBFS.

I know there are caveats and exceptions, but I purposefully used a sub-£100 interface with a bog-standard Telecaster and Les Paul Traditional. That Ghost Note Audio video has all kinds of misinformation in the comments and there's been a load of discussion about it again recently. It's easy enough for anyone to test for themselves, and if you get an improvement, do what you gotta do.


Precisely.
 