Kemper Profiler MK 2

  • Thread starter: Deleted member 490
I do believe that null files aren't completely useless. Just listening to them gives you a ballpark idea of what's going on. Presenting them as a single summary LUFS figure is comical.
What should be relayed is that if the modeller is applying more or less inherent gain or compression than the original then this is going to have a big impact on the null file. Even if everything is almost perfect then compression/sustain alone is going to skew the result.
Having said all that, if the null file were completely, audibly silent when cranked up, then it's near perfect.
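To put a rough number on the gain/compression point, here's a toy sketch (made-up signals, not anyone's actual test procedure): a capture with literally perfect tone but a 5% overall gain mismatch still only nulls to about 26 dB below the reference.

```python
import numpy as np

# Toy null test: the "capture" is identical to the reference except for a
# small overall gain error -- no tonal difference at all.
rng = np.random.default_rng(0)
reference = rng.standard_normal(48000)   # stand-in for one second of audio
capture = 1.05 * reference               # perfect tone, ~0.4 dB of extra gain

residual = reference - capture
null_depth_db = 10 * np.log10(np.mean(residual**2) / np.mean(reference**2))
print(round(null_depth_db, 1))           # -26.0: a "bad" null from gain alone
```

Level-match first and that residual vanishes entirely, which is why a raw difference figure says little on its own.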
 
Null tests are not useless if done well. The LUFS score is useless, and Leo's tests are not done well, so you get a useless score based on a poorly done test. It is meaningless.
 
Seems like the only way to measure captures would involve another round of captures. You need frequency response over time and with varying dynamics. I honestly don't know if the juice is worth the squeeze there.

For Kemper I'd settle for just having them release the damn thing and see if it's actually a noticeable difference.

And if there is a big improvement, I honestly wouldn't be surprised if the Mk1 profiles created by the new capture method are extremely close to the Mk2 profiles. We saw from both Tonex and QC that there were improvements to be made on the same hardware platform using different capture-processing tech. Kemper Mk1 and Mk2 use the same DSP chip, right?
 
LUFS has nothing to do with "left-over-audible-differences"

I wouldn't go that far. It's a valid way to do a simple measurement of the discrepancy. It has shortcomings of course and some people will argue it's too simplistic, but if you want a single scalar value to measure the difference, it's about as good a measurement as you'll find. It has to be done properly and Leo does a reasonable job, including correlation analysis. Anything better will necessarily produce a result more complicated than a single value. Considering the audience for these tests, "more complicated" probably isn't a good idea :giggle: .
 
I wouldn't go that far. It's a valid way to do a simple measurement of the discrepancy. It has shortcomings of course and some people will argue it's too simplistic, but if you want a single scalar value to measure the difference, it's about as good a measurement as you'll find. It has to be done properly and Leo does a reasonable job, including correlation analysis. Anything better will necessarily produce a result more complicated than a single value. Considering the audience for these tests, "more complicated" probably isn't a good idea :giggle: .
IMO, not only is using LUFS inherently flawed (and not really able to show anything particularly relevant for this kind of thing), but most users' ability to decipher that information is also very flawed, and it's going to lead to all kinds of incorrect conclusions.

I think a blind test (or, even better, a series of them) works much, much better.
 
IMO, not only is using LUFS inherently flawed (and not really able to show anything particularly relevant for this kind of thing), but most users' ability to decipher that information is also very flawed, and it's going to lead to all kinds of incorrect conclusions.

I think a blind test (or, even better, a series of them) works much, much better.
I would think there has to be an accurate way to get a data point for comparing accuracy. I don’t think LUFS is it; isn’t there some sort of “human hearing” curve to LUFS? I also don’t think a blind test is a good way to determine actual accuracy, because it also relies on the accuracy of the ears listening.
 
I would think there has to be an accurate way to get a data point for comparing accuracy. I don’t think LUFS is it; isn’t there some sort of “human hearing” curve to LUFS? I also don’t think a blind test is a good way to determine actual accuracy, because it also relies on the accuracy of the ears listening.
LUFS is basically for delivery of audio for broadcast and is largely geared towards the perceived level of audio (so streamed songs sound roughly the same level, as do TV shows, movies, etc.). IMO people use it as a blanket metric for all kinds of audio tasks where it's not really appropriate. Aside from the fact that two pieces of music can have similar LUFS levels but sound totally different in volume, for measuring the residual in a null test it's not really saying much about what is and isn't left behind.

I think our ears could perceive two things as indistinguishable and yet the null test could still give poor results. There are certain qualities our ears will be particularly sensitive to, and other areas where things could be more forgiving. That could be parts of the frequency domain, or dynamics (both in certain frequencies and overall), or other aspects of the audio.
 
I would think there has to be an accurate way to get a data point for comparing accuracy. I don’t think LUFS is it; isn’t there some sort of “human hearing” curve to LUFS? I also don’t think a blind test is a good way to determine actual accuracy, because it also relies on the accuracy of the ears listening.

Yes, LUFS has weighting to compensate for frequency-dependent loudness perception, which is a factor in its favor as a measurement technique since it gives more weight to differences you can hear. And yes, blind tests are always going to be subjective.

I think we can all agree that null tests in general, and LUFS measurement of null test results in particular, have problems, but I don't think anybody has come up with a better methodology that is both objective and reduces the results to a simple measurement result.
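For what it's worth, the weighting in question is the K-weighting curve from ITU-R BS.1770. A minimal mono sketch (gating omitted, which is fine for a steady tone; published 48 kHz biquad coefficients; scipy assumed available) lands near the spec's reference point of about -3 LKFS for a full-scale 997 Hz sine:

```python
import numpy as np
from scipy.signal import lfilter

# Minimal mono BS.1770 loudness sketch (no gating -- fine for a steady tone).
# Biquads are the published 48 kHz K-weighting filters from ITU-R BS.1770.
FS = 48000
B_PRE = [1.53512485958697, -2.69169618940638, 1.19839281085285]  # shelving pre-filter
A_PRE = [1.0, -1.69065929318241, 0.73248077421585]
B_RLB = [1.0, -2.0, 1.0]                                         # RLB high-pass
A_RLB = [1.0, -1.99004745483398, 0.99007225036621]

def loudness_lufs(x):
    y = lfilter(B_RLB, A_RLB, lfilter(B_PRE, A_PRE, x))
    return -0.691 + 10 * np.log10(np.mean(y ** 2))

t = np.arange(FS) / FS
tone = np.sin(2 * np.pi * 997 * t)        # 0 dBFS 997 Hz sine
print(round(loudness_lufs(tone), 1))      # spec reference point: about -3.0
```

The shelf and high-pass are exactly the "human hearing curve" part: high-frequency differences count for more, very low frequencies for less.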
 
Someone needs to make a “plugin doctor” sort of thing for this that can handle dynamics, frequency, S/N, etc. all in one place, just to keep this line of discussion honest. I can see the usefulness of relying on perception to make a determination, but it really should be accurately measurable as well.
 
I wouldn't go that far. It's a valid way to do a simple measurement of the discrepancy. It has shortcomings of course and some people will argue it's too simplistic, but if you want a single scalar value to measure the difference, it's about as good a measurement as you'll find. It has to be done properly and Leo does a reasonable job, including correlation analysis. Anything better will necessarily produce a result more complicated than a single value. Considering the audience for these tests, "more complicated" probably isn't a good idea :giggle: .

It's not valid; it is a misuse of a metric. Yes, anything better is going to be more complicated than a single value, because you can't even reduce frequency response to a single value, and there is so much more going on with a capture. The idea that you could objectively rank them with a single metric is absolutely insane. The audience for Leo Gibson's videos is people who don't understand, have no critical listening skills, and want a score so they can cheer a brand or hate on a brand with some number to point to. It's beyond pathetic.
 
that is both objective and reduces the results to a simple measurement result.

This desire, and not understanding how horrifically flawed it is, would be the cause of the problem.

What is the better car? A Chevy pickup, a 3 row Honda SUV, or a Ferrari? I give the Ferrari a 9.7, the pickup a 6.2 and the SUV a 6.4 after a complicated set of objective tests and measurements. Do you think a farmer or a mom with 4 kids will agree with the results? You simply can't reduce complex differences into a single metric. Nor should you want to. It makes no sense and serves no one but Leo and his advertisers.
 
Someone needs to make a “plugin doctor” sort of thing for this that can handle dynamics, frequency, S/N, etc. all in one place, just to keep this line of discussion honest. I can see the usefulness of relying on perception to make a determination, but it really should be accurately measurable as well.

That's probably a good idea. Or at least someone should create a standardized procedure for doing a good null test. I think one problem with null tests is people ask "did you do this step or that step properly?" because they don't know if things like phase issues or normalization or baseline analysis were handled correctly. With a standardized procedure that could be referred to, it would help alleviate those concerns.
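A sketch of what such a standardized recipe might pin down (assumed steps, not any published procedure): time-align, gain-match, subtract, then report residual energy relative to the reference.

```python
import numpy as np

# Hypothetical null-test recipe: 1) time-align via cross-correlation,
# 2) least-squares gain match, 3) subtract, 4) report the residual level.
def null_depth_db(reference, capture):
    # 1) find the lag that best aligns capture with reference
    corr = np.correlate(capture, reference, mode="full")
    lag = np.argmax(corr) - (len(reference) - 1)
    capture = np.roll(capture, -lag)      # circular shift -- a toy alignment
    # 2) scalar gain that minimizes the residual in a least-squares sense
    g = np.dot(capture, reference) / np.dot(capture, capture)
    residual = reference - g * capture
    # 3) residual energy vs. reference, in dB (more negative = deeper null)
    return 10 * np.log10(np.sum(residual ** 2) / np.sum(reference ** 2))

rng = np.random.default_rng(1)
ref = rng.standard_normal(8000)
# a delayed, attenuated copy plus a little noise stands in for a capture
cap = 0.8 * np.roll(ref, 5) + 0.001 * rng.standard_normal(8000)
print(null_depth_db(ref, cap) < -50)      # deep null once aligned and matched
```

Writing the alignment and normalization steps down like this is the point: anyone repeating the test would get the same number for the same files, which is exactly what the ad-hoc versions can't guarantee.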
 
I wouldn't go that far. It's a valid way to do a simple measurement of the discrepancy. It has shortcomings of course and some people will argue it's too simplistic, but if you want a single scalar value to measure the difference, it's about as good a measurement as you'll find. It has to be done properly and Leo does a reasonable job, including correlation analysis. Anything better will necessarily produce a result more complicated than a single value. Considering the audience for these tests, "more complicated" probably isn't a good idea :giggle: .
I would go that far. LUFS is to do with loudness. It has some weighting to attempt to account for human perception, but it is not the right tool for comparing amp tonality.

You need something based on the mel scale for that. IMO.

I would think there has to be an accurate way to get a data point for comparing accuracy. I don’t think LUFS is it; isn’t there some sort of “human hearing” curve to LUFS? I also don’t think a blind test is a good way to determine actual accuracy, because it also relies on the accuracy of the ears listening.
Yes. Read my article.

Someone needs to make a “plugin doctor” sort of thing for this that can handle dynamics, frequency, S/N, etc. all in one place, just to keep this line of discussion honest. I can see the usefulness of relying on perception to make a determination, but it really should be accurately measurable as well.
We have that. It is called Plugin Doctor.
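For the mel-scale suggestion above: the common HTK convention is mel = 2595 · log10(1 + f/700) (one of several variants), anchored so 1000 Hz sits at about 1000 mel, with equal Hz steps spanning fewer mels as frequency rises:

```python
import math

# HTK-style mel conversion: mel = 2595 * log10(1 + f / 700).
def hz_to_mel(f_hz):
    return 2595.0 * math.log10(1.0 + f_hz / 700.0)

print(round(hz_to_mel(1000)))   # ~1000 mel by construction
# A 500 Hz step low in the spectrum spans more mels than the same step
# up high -- the perceptual warping a plain loudness number ignores.
print(hz_to_mel(1500) - hz_to_mel(1000) > hz_to_mel(8500) - hz_to_mel(8000))
```

A mel-spaced spectral comparison of the residual would at least weight errors the way pitch perception does, instead of collapsing everything to one level figure.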
 
I would go that far. LUFS is to do with loudness. It has some weighting to attempt to account for human perception, but it is not the right tool for comparing amp tonality.

You need something based on the mel scale for that. IMO.


Yes. Read my article.


We have that. It is called Plugin Doctor.
Does that work for these types of comparisons?
 
I would. It's completely misleading, because people who don't understand the details will just look at that number and reach the wrong conclusions.

Null tests and LUFS measurement of the results have well-known shortcomings, but I don't think there's much evidence that it's a misleading methodology when used for properly done real-world guitar amp capture comparisons. Most criticisms I've seen are rather theoretical in nature and don't really point to any cases where a properly done null test leads to incorrect or misleading conclusions.

You could argue it's hard to do a null test correctly, and I agree with you, but that's taking issue with the tester, not the test.

I haven't seen anything that improves on using integrated LUFS to objectively measure the results of the test. You want to know the magnitude of the discrepancy and LUFS is a pretty good way to measure that in a way that is meaningful for human hearing.
 