Sascha Franck
Goatlord
Whenever there's a sufficiently complex amp with highly interactive controls, the whole capture thing falls apart
Depends on whether you need all those tonal options. Most people I know don't, even Mk owners.
It's not whether you need all the options. It's which options. Hence why it's difficult to buy third party caps, as opposed to owning the amp and the profiler. For example, my gain/EQ/GEQ/mode switch settings won't be the same as yours, or anyone else's. Also, I dial the amp in differently, depending upon what guitar is used.
I don't know if this will ever happen, but if one capturing product comes out victorious and we end up with a de-facto standard, then yes, I can totally see captures going the way of IRs.
NAM is likely the best candidate. Becoming an industry standard comes with drawbacks, though; namely, protocols evolve (much) slower than platforms, so improvements to the NAM core would be hard to roll out.
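For what it's worth, a capture in that world is basically just a small serialized neural net, which is exactly why a de-facto standard hardens so quickly. A rough Python sketch below illustrates the point; the field names and version strings are illustrative, not necessarily the real NAM file spec:

```python
import json

# Illustrative sketch only -- NOT the actual NAM file spec.
# The point: once thousands of capture files exist in the wild, the loader
# has to keep understanding every old (architecture, version) pair forever,
# which is why the "protocol" evolves slower than the platform.

SUPPORTED = {
    ("WaveNet", "0.5"),   # hypothetical architecture/version pairs
    ("LSTM", "0.5"),
}

def load_capture(path):
    with open(path) as f:
        data = json.load(f)
    arch, version = data.get("architecture"), data.get("version")
    if (arch, version) not in SUPPORTED:
        # Improving the core means growing this table, never shrinking it,
        # or old captures stop loading.
        raise ValueError(f"Unsupported capture: {arch} v{version}")
    weights = data["weights"]          # flat list of floats
    config = data.get("config", {})    # layer sizes, dilations, etc.
    return arch, config, weights
```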
much more complex/costly measurement paradigms quickly (e.g. shooting a cap for 'every' knob position on an amp, taking measurements, etc.).
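To put a number on how fast that gets out of hand: even a coarse sweep of five knobs is already thousands of captures. A purely hypothetical sketch of such a routine (the capture() function is a placeholder, not any vendor's API):

```python
from itertools import product

# Hypothetical "shoot a capture for every knob position" routine.
# Even a coarse grid explodes combinatorially.

KNOBS = ["gain", "bass", "middle", "treble", "master"]
STEPS = [0, 2, 4, 6, 8, 10]           # 6 positions per knob

def capture(settings):
    """Placeholder: play the test signal, record the amp, train a capture."""
    pass

combos = list(product(STEPS, repeat=len(KNOBS)))
print(len(combos))                     # 6**5 = 7776 captures for ONE channel

for combo in combos:
    capture(dict(zip(KNOBS, combo)))   # minutes each => weeks of work
```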
I dunno... I guess that would be cool... but I don't see any reason to think it'd yield better results than a component model. It might be a good approach to making profiles 'less sucky' to manipulate after the fact. Going much beyond basic frequency response effects, there are going to be a lot of technical problems with manipulating the profile in a realistic way; even with a component model of the tone stack and other pieces.

That's not what will happen. IMHO what will happen is the merging of component based modeling with front side profiling/capturing. Clearly it won't work for everything immediately, but having the user input metadata into a system would allow the algorithm to select an appropriate backend model (or even build one on the fly) and mirror the input control settings. Theoretically the controls would match real world counterparts.
For example, user inputs JCM 800, bass 4, middle 8, treble 8, master 10. The underlying engine selects the model with the controls at the given positions and then does a tone match to create an IR on the fly. But what if it's an amp the engine's never seen? You look up the specs and input them: 5x 12AX7 gain stages, Mesa tone stack, 2x EL84 power amp, etc.
There's no interpolation here. What I'm essentially talking about is an extension of Fractal's Tone Match capability. Honestly, if the analysis is good enough it could potentially be smart enough to set the gain by itself; EQ would absolutely need to be manually input due to the mic type/position and room having so much influence on the tone.
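For the curious, the "tone match to create an IR on the fly" step is conceptually simple regardless of whose engine does it: compare the averaged spectrum of the reference recording against the spectrum of the selected base model running the same DI, and turn the ratio into a corrective FIR. A rough numpy sketch of that general idea follows; this is the standard spectral-matching approach, not Fractal's actual algorithm:

```python
import numpy as np

def avg_spectrum(x, n_fft):
    """Average magnitude spectrum over consecutive windowed frames."""
    frames = max(len(x) // n_fft, 1)
    spec = np.zeros(n_fft // 2 + 1)
    for i in range(frames):
        frame = x[i * n_fft:(i + 1) * n_fft]
        frame = np.pad(frame, (0, n_fft - len(frame))) * np.hanning(n_fft)
        spec += np.abs(np.fft.rfft(frame))
    return spec / frames

def tone_match_ir(reference, model_out, n_fft=8192, smooth=9):
    """Corrective, roughly linear-phase FIR nudging model_out toward reference.
    reference: the real amp mic'd up; model_out: the same DI through the base model."""
    R = avg_spectrum(reference, n_fft)
    M = avg_spectrum(model_out, n_fft)

    # Light smoothing: match the broad EQ curve, not every comb-filter wiggle
    k = np.ones(smooth) / smooth
    correction = np.convolve(R, k, "same") / np.maximum(np.convolve(M, k, "same"), 1e-9)
    correction = np.clip(correction, 0.1, 10.0)   # keep the boost/cut sane

    # Zero-phase magnitude -> centered, windowed FIR
    ir = np.roll(np.fft.irfft(correction, n_fft), n_fft // 2)
    return ir * np.hanning(n_fft)
```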
In solving those problems, you are basically going to end up building a legit component model and using it to interpolate changes to the capture. And if you have a good component model, WTF do you need the capture for... The developer would likely be better off shooting a bajillion captures and trying to morph between them or something (roughly what's sketched below). Maybe have your 'AI' listen to you sweep the knobs while a test signal plays or something.
It's also pretty naive to think you could just put in a little bit of data like that and characterize some unknown amp. And it would be delusional to think a significant number of users on some exchange like tone.net or similar are going to provide all the metadata about their caps, etc. Some commercial folks would be willing to do so but I think, like IRs, there's never going to be much money in this market and you are not going to see any kind of industry rally around that. Everyone's going to try to monetize their own thing and the whole market is small potatoes.
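To make that "shoot a bajillion captures and morph between them" idea concrete, here is about the most naive version of it: linearly blend the weights of two captures taken at two knob settings. Same illustrative JSON fields as in the sketch above, with the big caveat that this only even makes sense when both captures share an identical architecture, and there's no guarantee the halfway point sounds like the halfway knob; that uncertainty is exactly why you'd need a dense grid of captures to anchor it:

```python
import json
import numpy as np

def morph_captures(path_a, path_b, t):
    """Naive 'morph' between two captures of the same amp at two knob settings.
    t = 0.0 -> capture A, t = 1.0 -> capture B.  (Illustrative file fields only.)"""
    with open(path_a) as fa, open(path_b) as fb:
        a, b = json.load(fa), json.load(fb)

    # Only meaningful if both files describe the exact same net
    assert a["architecture"] == b["architecture"]
    assert a["config"] == b["config"]

    wa = np.asarray(a["weights"], dtype=np.float64)
    wb = np.asarray(b["weights"], dtype=np.float64)

    morphed = dict(a)
    morphed["weights"] = ((1.0 - t) * wa + t * wb).tolist()
    return morphed

# e.g. halfway between hypothetical "gain at 4" and "gain at 6" captures:
# blended = morph_captures("plexi_gain4.nam", "plexi_gain6.nam", 0.5)
```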
I continue to have zero desire to keep buying packs, just to get what I want. It's the "in-app purchase" of the gear world, and it sucks.
And based on my experience with the Kemper, I have absolutely zero interest in the crowd sourced concept here and way more interest in known producers/artists selling packs. And I say that as someone who generally feels buying presets is a waste of money.
Going against the grain. [...]
I have nothing invested here, other than owning a bunch of real amps and some older-tech modelers, a ton of rack gear, pedals, and all sorts of s**t that this "AI" tech will eventually make "obsolete". I've never bought gear as an investment.
You could've said all this about any modern technology, especially when it comes to digital.
Computers render tape machines a worthless pile of junk - true. Plugins render many HW synths obsolete - true. Plugins render many outboard FX obsolete - true. Etc. So be it.
They're all just some of the major steps in the democratization of music production. And I applaud that wholeheartedly.
And fwiw, I'm saying that as someone who had spent his last dime on f*cking expensive tapes for a mere Fostex G-16. Or another truckload of cash for a mere Atari ST. Heck, the upgrade from 1 to 2 MB (yes, MB!) of RAM, just so I could run Cubase's Score module, cost me more than a very decent PC would these days.
I neither miss these old days (well, I partially do, but for other reasons) nor did I ever expect to get any of my money back.
In fact, I absolutely love being able to more or less fully enjoy all this modern technology.
Sorry Pal, not sure if you're trying to argue with me or agree with me!