Project BIAS-X

The more I use it, the more impressed I am with the AI. AI will probably kill the human race, but in the meantime, it's pretty useful for dialing in tones. I can ask for tones from songs from obscure bands. I can tell it what guitar I'm using so it will make adjustments for my situation. It's also fun to put in crazy abstract descriptions to get interesting tones to jump start song ideas. This beats the hell out of scrolling through lists of IRs or amp models. I can ask ChatGPT to build an AxeFX preset for me, but having the AI integrated in the modeler UI like this works a lot better.
 
Yeah, tried it - it's a weird mix of useful and limited. It also crashes if you modify your audio interface's ASIO settings while the app is running :bonk

The AI is also very erratic. On half of my prompts it started to reply and then just gave up. The tones it came up with from audio files were either almost there or so wide of the mark it was funny. It doesn't seem able to distinguish between, for example, ambient noise and distorted guitar. Several times I uploaded a track by The Cure and it sh*t itself and sent me a massive death metal tone lol
So basically just like everything else AI you've seen so far: occasionally slightly useful, a lot of the time just plain wrong or puke-inducing.
What I will say is that when it isn't crashing, the plugin can produce some inspiring tones, but then so can all the other plugins I use like Helix Native, TH-U, even Guitar Rig 7.

The UI is somewhat laggy (my PC is a beast and runs everything else fine, but dragging things around and adjusting settings in Bias X is clunky and laggy).

They've also made some strange functional design choices, like the lack of low and high cut on the cab modelling, for example. You get a choice of two EQ effects, but that's obviously not the same thing at all.

The amp modelling is decent to my ears, though. It does have a slightly sterile quality on some of the models, but I've encountered that elsewhere so I'm not too worried - I can always use that specific amp model elsewhere if I really need to.

I think it's a bummer they didn't take the opportunity to add NAM support, or even their own proprietary capture process, because that would massively extend the range of amp tones available.

The effects are pretty good, with most of the stuff you need, but again, not exactly boutique, and some of the effects modelling is way off imo. The fuzzes are just wrong, and a couple of the delays are a bit meh as well.

Overall, maybe wait until it's a more mature product with more options and stability improvements.
 
If we're calling what this thing does 'AI,' then the term has officially been reduced to marketing and nothing else, and we'll need to come up with something different to mean what it used to mean. It's just code. That's all.
 
"AI" is the correct term for BiasX, in its usual sense to mean a natural language interface to an LLM.
In its usual sense as a marketing term. There is a lot more in the AI space than LLMs, and more to the point, an LLM that cannot learn in real time (but needs to have its weights reset through large periodic dumps) is a far cry from the core of the AI field. It's just code. It's feeding prompts to an LLM, which has probably been fed a small set of data to establish weights that map to defined tone profiles.
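
For the sake of argument, here's a minimal Python sketch of the kind of plumbing I'm describing. To be clear, this is pure speculation about the architecture - the endpoint, the parameter names, and the tone schema are all made up for illustration:

```python
import json
import urllib.request

# Hypothetical: ask a hosted LLM to translate a plain-English tone request
# into a fixed preset schema. The model does no signal processing itself;
# it only maps language onto predefined parameter ranges.
SYSTEM_PROMPT = (
    "You map guitar tone descriptions onto this JSON schema: "
    '{"amp_model": str, "gain": 0-10, "bass": 0-10, "mid": 0-10, '
    '"treble": 0-10, "cab": str, "effects": [str]}. Reply with JSON only.'
)

def tone_from_prompt(user_prompt: str) -> dict:
    """Send the prompt to a (placeholder) LLM endpoint, get back a preset."""
    payload = json.dumps({"system": SYSTEM_PROMPT, "prompt": user_prompt}).encode()
    req = urllib.request.Request(
        "https://api.example.com/v1/complete",  # placeholder endpoint
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())  # e.g. {"amp_model": "plexi", "gain": 8, ...}

# The modeler then just applies the returned parameters to its existing
# DSP engine. No learning happens anywhere in this layer.
```

If that's roughly what's going on under the hood, calling it "AI" is generous.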

I've built large-scale research infrastructure for real machine learning applications. This isn't that. In this use, it is a marketing term. That possibly becoming the 'usual sense' in which it's used doesn't change that reality. All kinds of inaccurate renderings of various technologies and ideas become common in marketing.
 
I figured that was what you meant, so I deleted my comment. But the fact remains, the way "AI" is used in common parlance accurately describes BiasX. For example, it's quite common to refer to ChatGPT as "AI", and I think you'd have a hard time finding many people who would object to that.
 
The LLM itself is different from using a few API calls to make the LLM an interface to static data.

And I have no trouble finding people who say LLMs are not actually AI, and they're all heavy machine learning engineers and researchers.

But yes, the marketing term has become the common use - and it will keep doing that, at least until the massive bubble that's currently inflating finally pops. And oh, is it ever going to pop. When the execs pushing the stuff admit it's a bubble, you know it's bad.

I also know I'm tilting at windmills a bit here. I need to let it go outside of professional contexts. Silently rolling my eyes is getting easier as I age, but I clearly haven't mastered it yet.
 
I used to cringe a bit at that too, but I got over it. Terminology aside, the important thing is this feature is novel among amp modelers and IMHO quite useful.
 
It seems we are moving toward an outcome where the next generation of guitar players will be waiting on statistics to tell them whether they sound good or not.
Then the forum debates won't be about tubes vs digital, or Kemper vs NAM, etc., but about which algorithm is the best one to trust when deciding if you like your tonez!

So it wasn’t video that killed the radio star. That was just a warning shot.
I just don't think that's gonna happen, despite how much the industry will try and push this shit. The players who take their hobby seriously won't rely solely on AI, the same way most of us now don't rely solely on presets.

Edit: I use AI a lot in my job (IT) - Gemini, because my work pays for it. It's like a Google search: you don't just take things at face value. It's a powerful tool. In the hands of someone incompetent, it's not that great. Give it to someone who knows what they're doing and only needs a nudge in the right direction when they're in a tough spot? AI is a godsend.
 
So in the context of the last few posts on this thread, how would you define what, say, NAM does (by contrast)?

NAM uses machine learning to capture an amp. BiasX does not do that.

BiasX uses a conversational interface that you can use to dial in a preset. NAM does not do that.

Are you asking do they both use AI? IMHO, unless you want to be pedantic, the answer is "yes".
 
No, I was asking if anyone can explain in a bit more detail the difference between the machine learning system used by NAM and the "AI" system Bias X uses (the apparently pre-trained or static, non-learning system some people in this thread have suggested it uses). Genuine question. Some people on here seem to have some background in machine learning.
 
NAM mainly uses WaveNet, a type of CNN (convolutional neural network) originally developed by DeepMind for audio/speech generation. Steve Atkinson (NAM's author) found WaveNet surprisingly effective at learning the non-linear, time-dependent behavior of tube guitar amplifiers and pedals.
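
If you're curious what that looks like in practice, here's a toy PyTorch sketch of the core idea: a stack of dilated causal convolutions trained to map a dry DI signal to the recorded amp output. This is heavily simplified (the real NAM/WaveNet architecture has gated activations, conditioning, proper datasets, etc.), and all the sizes here are made-up illustration values:

```python
import torch
import torch.nn as nn

class CausalConv1d(nn.Module):
    """1D convolution padded on the left so output[t] only sees input[<=t]."""
    def __init__(self, channels, kernel_size, dilation):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation
        self.conv = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)

    def forward(self, x):
        return self.conv(nn.functional.pad(x, (self.pad, 0)))

class TinyWaveNet(nn.Module):
    """Toy WaveNet-style stack: exponentially dilated causal convolutions
    with residual connections, learning dry guitar -> amp output."""
    def __init__(self, channels=16, layers=8):
        super().__init__()
        self.inp = nn.Conv1d(1, channels, 1)
        self.blocks = nn.ModuleList(
            CausalConv1d(channels, kernel_size=3, dilation=2 ** i)
            for i in range(layers)
        )
        self.out = nn.Conv1d(channels, 1, 1)

    def forward(self, x):  # x shape: (batch, 1, samples)
        h = self.inp(x)
        for block in self.blocks:
            h = torch.tanh(block(h)) + h  # residual connection
        return self.out(h)

# One training step (sketch): minimize the error between the model's output
# on a dry DI signal and the mic'd amp recording of the same performance.
model = TinyWaveNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
dry = torch.randn(1, 1, 48000)   # stand-in for one second of DI audio
amped = torch.tanh(3.0 * dry)    # stand-in for the captured amp response
opt.zero_grad()
loss = nn.functional.mse_loss(model(dry), amped)
loss.backward()
opt.step()
```

That's the contrast in a nutshell: in NAM, the neural network *is* the model of the amp, whereas in Bias X the network (an LLM) apparently just drives the controls of a conventional modeler.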

Presumably, Bias X is using an LLM (large language model).
 