Project BIAS-X

The more I use it, the more impressed I am with the AI. AI will probably kill the human race, but in the meantime, it's pretty useful for dialing in tones. I can ask for tones from songs by obscure bands. I can tell it what guitar I'm using so it will make adjustments for my situation. It's also fun to put in crazy abstract descriptions to get interesting tones to jump-start song ideas. This beats the hell out of scrolling through lists of IRs or amp models. I can ask ChatGPT to build an AxeFX preset for me, but having the AI integrated into the modeler UI like this works a lot better.
 
Yeah, I tried it - it's a weird mix of useful and limited. It also crashes if you modify your audio interface's ASIO settings while the app is running.

The AI is also very erratic. For half of my prompts it started to reply and then just gave up. The tones it came up with from audio files were either almost there or so wide of the mark it was funny. It doesn't seem to be able to distinguish between, say, ambient noise and distorted guitar. Several times I uploaded a track by The Cure and it sh*t itself and sent me a massive death metal tone lol
So it's just like everything else AI you've seen so far, basically: occasionally slightly useful, a lot of the time just plain wrong or puke-inducing.
What I will say is that when it isn't crashing, the plugin can produce some inspiring tones, but then so can all the other plugins I use like Helix Native, TH-U, even Guitar Rig 7.

The UI is somewhat laggy (my PC is a beast and runs everything else fine, but dragging things around, adjusting settings, etc. is clunky and sluggish in Bias X).

They've also made some strange functional design choices, like the lack of low and high cut on the cab modelling, for example. You get a choice of two EQ effects, but that's not the same thing at all, obviously.

The amp modelling is decent to my ears though. It does have a slightly sterile quality on some of the models, but I've encountered that elsewhere so not too worried about that - I can always use that specific amp model elsewhere if I really need to.

I think it's a bummer they didn't take the opportunity to add NAM support or even their own proprietary capture process, because that would massively extend the range of amp tones available.

The effects are pretty good, with most stuff you need, but again, not exactly boutique, and some of the effects modelling is way off imo. The fuzzes are just wrong, and a couple of the delays are a bit meh as well.

Overall, maybe wait until it's a more mature product with more options and stability improvements.
 
I figured that was what you meant, so I deleted my comment. But the fact remains, the way "AI" is used in common parlance accurately describes BiasX. For example, it's quite common to refer to ChatGPT as "AI", and I think you'd have a hard time finding many people who would object to that.
 
I used to cringe a bit at that too, but I got over it. Terminology aside, the important thing is this feature is novel among amp modelers and IMHO quite useful.
 
It seems we are moving toward an outcome where the next generation of guitar players will be waiting on statistics to tell them whether they sound good or not.
Then the forum debates won't be about tubes vs. digital or Kemper vs. NAM etc., but about which algorithm is the best one to trust when deciding if you like your tonez!

So it wasn’t video that killed the radio star. That was just a warning shot.
I just don't think that's gonna happen, despite how much the industry will try and push this shit. The players who take their hobby seriously will not rely solely on AI, just as those of us playing now don't rely on presets.

Edit: I use AI (Gemini, because my work pays for it) a lot in my job (IT). It's like a Google search: you don't just take something at face value. It's a powerful tool; in the hands of someone incompetent, it's not that great. Give it to someone who knows what they're doing and only needs a nudge in the right direction when they're in a tough spot? AI is a godsend.
 
So in the context of the last few posts on this thread, how would you define what, say, NAM does (by contrast)?

NAM uses machine learning to capture an amp. BiasX does not do that.

BiasX uses a conversational interface that you can use to dial in a preset. NAM does not do that.

Are you asking do they both use AI? IMHO, unless you want to be pedantic, the answer is "yes".
 
No, I was asking if anyone can explain in a bit more detail what the difference is between the machine learning system used by NAM and the "AI" system that Bias X uses (the apparently pre-trained or static, non-learning system some people on this thread have suggested it uses). Genuine question. Some people on here seem to have some background in machine learning.
 
NAM mainly uses WaveNet, which is a type of CNN (convolutional neural network) originally developed by DeepMind for audio/speech generation. Steven Atkinson (NAM's author) found WaveNet to be surprisingly effective at learning the non-linear and time-dependent behavior of tube guitar amplifiers and pedals.
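To make that concrete, here's a minimal sketch of the idea, with the caveat that the layer sizes, dilations, and plain MSE training loop are illustrative only and not NAM's actual architecture or code: a stack of dilated causal 1-D convolutions is trained to map a DI guitar signal to the recorded output of the amp.

```python
# Illustrative WaveNet-style capture sketch - NOT NAM's actual code.
# A stack of dilated causal 1-D convolutions learns to map a DI guitar
# signal to the amplifier's recorded output; the growing dilations give
# the model enough "memory" to pick up time-dependent behavior.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CausalConv1d(nn.Module):
    """Conv1d padded on the left only, so output[t] never sees future samples."""
    def __init__(self, channels, kernel_size, dilation):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation
        self.conv = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)

    def forward(self, x):
        return self.conv(F.pad(x, (self.pad, 0)))

class TinyWaveNet(nn.Module):
    def __init__(self, channels=16, dilations=(1, 2, 4, 8, 16, 32)):
        super().__init__()
        self.inp = nn.Conv1d(1, channels, 1)
        self.filters = nn.ModuleList([CausalConv1d(channels, 3, d) for d in dilations])
        self.gates = nn.ModuleList([CausalConv1d(channels, 3, d) for d in dilations])
        self.out = nn.Conv1d(channels, 1, 1)

    def forward(self, x):                      # x: (batch, 1, samples) DI signal
        h = self.inp(x)
        for f, g in zip(self.filters, self.gates):
            h = h + torch.tanh(f(h)) * torch.sigmoid(g(h))  # gated residual block
        return self.out(h)                     # predicted amp output, same length

# Training sketch: di / amp_out would be aligned clips captured by playing a
# test signal through the real amp. Stand-in data is used here.
model = TinyWaveNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
di = torch.randn(1, 1, 8192)           # fake DI clip
amp_out = torch.tanh(3.0 * di)         # fake "amp" = static soft clipping
for step in range(20):
    loss = F.mse_loss(model(di), amp_out)
    opt.zero_grad(); loss.backward(); opt.step()
```

The point is that the network is trained fresh on each amp's input/output recordings, which is what people mean by "machine learning capture"; there's no language model involved anywhere in that process.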



Presumably, Bias X is using an LLM (large language model) for the conversational side of things.
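If that's right, the text-prompt feature probably looks less like "AI dialing in an amp" and more like structured parameter-filling. Purely as a hypothetical sketch (none of these names or schemas come from Bias X, and this isn't based on anything Positive Grid has published): the LLM is only asked to return values for a fixed preset schema, and ordinary code maps them onto the signal chain.

```python
# Hypothetical sketch of a text-to-preset flow - the schema and the canned
# reply are made up for illustration. The LLM's only job is to emit JSON
# that fits a fixed schema; normal DSP code then applies the values.
import json

PRESET_SCHEMA = {
    "amp_model": "string, e.g. 'british_stack' or 'us_clean'",
    "gain": "float 0-10",
    "bass": "float 0-10",
    "mid": "float 0-10",
    "treble": "float 0-10",
    "cab": "string, e.g. '4x12_v30'",
    "effects": "list of {type, mix} objects",
}

def build_prompt(user_request: str) -> str:
    # The model is told to answer with JSON only, so the reply can be parsed
    # and mapped straight onto the modeler's parameters.
    return (
        "You set parameters for a guitar amp simulator.\n"
        f"Schema: {json.dumps(PRESET_SCHEMA)}\n"
        f"Request: {user_request}\n"
        "Reply with a single JSON object matching the schema."
    )

def apply_preset(llm_reply: str) -> dict:
    preset = json.loads(llm_reply)   # a real product would validate/clamp values
    # ...hand each value to the corresponding amp/cab/effect block here...
    return preset

# Round trip with a canned reply standing in for the LLM call:
canned = ('{"amp_model": "british_stack", "gain": 7.5, "bass": 5, "mid": 4, '
          '"treble": 6, "cab": "4x12_v30", "effects": [{"type": "delay", "mix": 0.2}]}')
print(build_prompt("tight 80s metal rhythm tone"))
print(apply_preset(canned)["amp_model"])
```

Constraining the output to a schema like that is a common way to keep an LLM's answers machine-readable; the quality of the resulting tone still depends entirely on the amp and effect models the values are driving.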
 