Gee whiz, modeling is dead!

100% agree, component modeling is dead; take a listen to how terrible this Van Halen tone is.
Any Profiler is far more accurate than component modeling can ever hope to be.
And yes @JiveTurkey, it is time stamped :D



I'll let you all be the judge.

Cheers
Mike
 
I think they are asking (begging) Line 6 for work, Line 6 being the most widespread modeler with the most units in the wild.
Why else would they keep producing those videos?
Are they not getting enough sales from the currently available capturing platforms?

Captures/profiles will NEVER be as versatile or fun as full component models.
Take a Friedman (or a model of it) for example: HBE, Fat, Saturation, C45, Bright switches, the tone stack, and most importantly the master volume. A single amp/model is a very versatile and personal experience; each individual will tweak it differently and end up with different sounds for different instruments. Even with something simple like a '59 Bassman or a Plexi you have infinitely more tones than a "freeze frame" capture.
 
Take a Friedman (or a model of it) for example: HBE, Fat, Saturation, C45, Bright switches, the tone stack, and most importantly the master volume.
Def one of the most fun amp models to really “play” with in Helix… snapshotting settings and such. It even makes pops and crackles when doing that. Annoying… yes. Realistic… I’d guess so.

Looking at a Stomp (for example), having that amp and all its settings, and being able to assign those settings to “pedals”, is just fun stuff. It’s like having a better version of the amp itself.
 
Def one of the most fun amp models to really “play” with
It's part of the fun and versatility of amps: twisting, tweaking, adjusting; it's a personal experience.
You get none of that with captures.

HW says that The Edge (U2) doesn't change his Vox settings between live performances and lets the mixing engineer tweak the EQ on the mixing board, like the post-EQ you get with capture devices; that's HW's argument for why profiles/captures are enough.
His argument is completely moronic and basically a straw man.
 
His argument is completely moronic and basically a straw man.
He lives in his own bubble, thinking everyone wants or needs that.
The most badass thing about HW is that he acts like he has forum and social media people in the palm of his hand, taking care of everyone and teaching them his sublime knowledge. It makes me want to puke. I watched a few of his videos and realized nothing he says applies to my own interests.
But if he is beneficial and helpful to other people… good for them.
 
If anything, there should be an effort made (or a method invented) to make full component models faster.
As a side note, I personally think Line 6's "no fixing of existing models" approach is extremely outdated; some amps have been screwed since day 1 and have remained so for 8 years.
 
This is a false dichotomy; there's no reason all these switches can't be taken into account in a profiling system.
Isn't a profile like a snapshot of settings? Changing a setting in a static snapshot, is that possible? Wouldn't it become a new snapshot with that setting and such...

I don't know, just asking.
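For what it's worth, here's a rough Python sketch of how a profiler could "take the switches into account" without needing a brand-new capture every time you touch a knob: store a handful of captures at sampled control positions and blend between the nearest two. Purely hypothetical; the per-snapshot waveshapers are stand-ins, not any vendor's actual capture format.

```python
# Hypothetical "multi-snapshot" profile that blends between captures taken at
# different gain-knob positions. The per-snapshot waveshapers are stand-ins for
# real captures; no vendor necessarily works this way.
import numpy as np

class MultiSnapshotProfile:
    def __init__(self):
        # gain position (0.0-1.0) -> one capture made at that setting
        self.snapshots = {
            0.0: lambda x: x,                  # clean
            0.5: lambda x: np.tanh(3.0 * x),   # edge of breakup
            1.0: lambda x: np.tanh(10.0 * x),  # saturated
        }

    def process(self, x, gain):
        # Clamp to the captured range, find the two snapshots that bracket the
        # requested gain, and crossfade their outputs. Turning the knob does
        # NOT create a new capture; it only changes the blend weights.
        positions = sorted(self.snapshots)
        gain = min(max(gain, positions[0]), positions[-1])
        lo = max(p for p in positions if p <= gain)
        hi = min(p for p in positions if p >= gain)
        if lo == hi:
            return self.snapshots[lo](x)
        w = (gain - lo) / (hi - lo)
        return (1.0 - w) * self.snapshots[lo](x) + w * self.snapshots[hi](x)

signal = np.sin(2 * np.pi * 110 * np.arange(0, 0.01, 1 / 48000))
out = MultiSnapshotProfile().process(signal, gain=0.7)
```

So a tweaked setting doesn't become a new snapshot in this scheme; it's just a blend of the ones already captured.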
 
If anything, there should be an effort made (or a method invented) to make full component models faster.
As a side note, I personally think Line 6's "no fixing of existing models" approach is extremely outdated; some amps have been screwed since day 1 and have remained so for 8 years.
I would assume for Fractal it's already more of a "how long does it take to measure and verify all this stuff" question than a "how long does it take to program the model" question. I would expect they already have the building blocks found in most amps, so it's just a matter of configuring them to behave like a particular amp. They have churned out Marshall and Tube Screamer models pretty fast, for example.

With captures it's certainly a benefit that the capturing system is entirely agnostic to the concept of an amp. The problem is that since it would need to capture snapshots of every possible permutation of controls, it would take forever to do all those captures, at least with the types of test tones capture devices use at the moment.

So hybrid solutions are more likely to be seen, where underneath it's a capture and the user interface just gets better at keeping track of what the real amp was doing and replicating that with EQ solutions.
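To put rough numbers on the "it would take forever" part, here's a back-of-the-envelope sketch; every figure in it (knob count, step resolution, minutes per capture) is an assumption for illustration, not a real product spec.

```python
# Back-of-the-envelope estimate of capturing "every" control permutation.
# Every number below is an assumption for illustration, not a product spec.
continuous_knobs = 6      # e.g. gain, bass, mid, treble, presence, master
steps_per_knob = 5        # a very coarse 5-step sampling of each knob
toggle_switches = 4       # bright / fat / saturation / voicing type switches
minutes_per_capture = 2   # one test-tone capture pass

permutations = (steps_per_knob ** continuous_knobs) * (2 ** toggle_switches)
total_days = permutations * minutes_per_capture / 60 / 24

print(f"{permutations:,} captures -> roughly {total_days:,.0f} days nonstop")
# 250,000 captures -> roughly 347 days nonstop, even at this coarse resolution
```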
 
hybrid solutions are more likely to be seen
I have suggested this in the past, and I think that's what NDSP are doing.
Essentially a capture of the preamp before the tone stack, then a component-based tone stack and power amp; any segment of the amp that doesn't have user-changeable parameters (RLC networks) can be isolated and captured from the real amp.
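A minimal sketch of that kind of hybrid chain, purely my own toy illustration and not NDSP's actual architecture: a black-box stand-in for the captured preamp, followed by a component-style (user-adjustable) tone control and power-amp stage.

```python
# Toy hybrid chain: captured (black-box) preamp -> modeled tone control ->
# modeled power amp. Purely illustrative: the "capture" is a stand-in
# nonlinearity and the tone stack is reduced to a single one-pole low-pass.
import numpy as np

FS = 48000  # sample rate

def captured_preamp(x):
    # Stand-in for a fixed-settings capture of everything before the tone stack.
    return np.tanh(5.0 * x)

def tone_control(x, cutoff_hz):
    # Component-style, user-adjustable treble roll-off (one-pole RC low-pass).
    a = np.exp(-2.0 * np.pi * cutoff_hz / FS)
    y = np.zeros_like(x)
    state = 0.0
    for i, sample in enumerate(x):
        state = (1.0 - a) * sample + a * state
        y[i] = state
    return y

def power_amp(x, master):
    # User-adjustable master volume into a soft-clipping output stage.
    return np.tanh(master * x)

t = np.arange(0, 0.02, 1.0 / FS)
guitar = 0.5 * np.sin(2 * np.pi * 110 * t)
out = power_amp(tone_control(captured_preamp(guitar), cutoff_hz=3000.0), master=2.0)
```

The point is that only the fixed, non-adjustable segment lives inside the capture; everything the player can twist stays a real model with real parameters.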
 
Can we make it a forum rule that instead of debating the merits of meritless videos, we just fill the thread with fart videos? There’s, literally, more worthwhile information in a 20 second fart video than there is in the video this thread is about.
 
I’ve wondered about this - rather than using machine learning to brute-force copy the sound of an amp, use machine learning to code algorithms that bridge any differences between what the schematic SHOULD do and what the real amps actually do.

I think a mixture of white-box and black-box modelling has been how UAD and other plugin emulation companies have done things for some time.

Relab have posted a bit about how they’ve used their own machine learning to reverse engineer complex reverb algorithms to the point where they are 1:1 with the originals.
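That "bridge the difference" idea can be sketched as residual learning: run the white-box (schematic-derived) model, then train a small network only on whatever the real amp does that the schematic prediction misses. The PyTorch toy below assumes a memoryless per-sample correction and uses made-up stand-ins for both the white-box model and the real-amp measurement; it's an illustration of the concept, not how UAD, Relab, or anyone else necessarily does it.

```python
# Toy residual-learning sketch: white-box model + small net trained to predict
# (real amp output - white-box output). Both "amps" below are made-up stand-ins;
# a real setup would use aligned recordings of the actual amp.
import torch
import torch.nn as nn

torch.manual_seed(0)

def white_box(x):
    # Stand-in for the schematic-derived model (what the amp "should" do).
    return torch.tanh(4.0 * x)

def real_amp(x):
    # Stand-in for the measured amp, which clips a bit asymmetrically and
    # sags slightly compared to the schematic prediction.
    return 0.95 * torch.tanh(4.0 * x + 0.3 * x ** 2)

# Training data: random input samples plus the residual the net should learn.
x = torch.empty(4096, 1).uniform_(-1.0, 1.0)
residual_target = real_amp(x) - white_box(x)

corrector = nn.Sequential(nn.Linear(1, 16), nn.Tanh(), nn.Linear(16, 1))
optimizer = torch.optim.Adam(corrector.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

for step in range(500):
    optimizer.zero_grad()
    loss = loss_fn(corrector(x), residual_target)
    loss.backward()
    optimizer.step()

# Hybrid output = schematic model + learned correction.
with torch.no_grad():
    test = torch.linspace(-1.0, 1.0, 5).unsqueeze(1)
    error = (white_box(test) + corrector(test) - real_amp(test)).abs().max()
print(f"max correction error on test points: {error.item():.4f}")
```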
 
I’ve wondered about this - rather than using machine learning to brute-force copy the sound of an amp, use machine learning to code algorithms that bridge any differences between what the schematic SHOULD do and what the real amps actually do.
Right, that's where captures or various black-box transfer-function solutions can do the job very accurately, but anything that changes the transfer function besides the input X(t) has to be excluded from that black box; in a guitar amp that's typically the RLC and feedback networks.
Then you have to consider whether it isn't easier to just do the whole thing with component modeling in the first place, because we already have very accurate equation/component-based models of tubes, transformers, power supplies, etc.

So in my opinion, if you want a fully functioning amp model true to the original amp, hybrid modeling is actually more time-consuming than making a full component model.
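As an example of the "we already have the equations" point, the Koren-style triode model that component-level simulations commonly build on fits in a few lines; the 12AX7 parameter set below is the commonly quoted one and should be treated as approximate.

```python
# Koren-style triode plate-current model, a common building block in
# component-level amp simulation. The 12AX7 parameter set below is the
# commonly quoted one; treat the values as approximate.
import math

def triode_plate_current(v_gk, v_pk, mu=100.0, ex=1.4, kg1=1060.0,
                         kp=600.0, kvb=300.0):
    """Plate current (A) for grid-cathode voltage v_gk and plate-cathode
    voltage v_pk, per Koren's improved triode equations."""
    e1 = (v_pk / kp) * math.log1p(
        math.exp(kp * (1.0 / mu + v_gk / math.sqrt(kvb + v_pk * v_pk))))
    return 0.0 if e1 <= 0.0 else 2.0 * (e1 ** ex) / kg1

# A plausible operating point: -1.5 V on the grid, 250 V on the plate.
print(f"{triode_plate_current(-1.5, 250.0) * 1000:.2f} mA")  # roughly 2 mA
```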
 
Can we make it a forum rule that instead of debating the merits of meritless videos, we just fill the thread with fart videos? There’s, literally, more worthwhile information in a 20 second fart video than there is in the video this thread is about.
We could also upload our farts to SoundCloud and create another challenge. Let's see if we can outperform @Iron1 this time.
 
Trying to make a good conversation in a thread that went off the rails like 3 pages ago. :rofl
 