molul · Roadie · 983 messages
If it's the one I tried long ago, the resulting file wouldn't load on Helix.

> [Link] ChatGPT - L6 Helix Sound Designer: "Helps users craft detailed signal chains using Line 6 Helix models." (chatgpt.com)
I would also add being able to increase/decrease pitch (up to +/-3 semitones, maybe) while keeping the tempo. Useful for having backing tracks of songs in slightly different tunings.

> @Digital Igloo - when the stem separation thing comes for Showcase…
> Will we be able to slow the backing track down and maintain pitch? This would be fantastic for practice. And for me in general, as my fingers often disappoint me :)
> If we will be able to do this, will it be a sliding scale / percentage of original speed? Or a simpler 1/4 - 1/2 - 3/4 - full thing?
If not, you can do that externally with an audio editor, then import the slowed track.

> @Digital Igloo - when the stem separation thing comes for Showcase…
> Will we be able to slow the backing track down and maintain pitch? This would be fantastic for practice. And for me in general, as my fingers often disappoint me :)
> If we will be able to do this, will it be a sliding scale / percentage of original speed? Or a simpler 1/4 - 1/2 - 3/4 - full thing?
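The "do it externally" route can even be scripted. Purely to illustrate why tempo and pitch can be changed independently at all, here is a bare-bones overlap-add (OLA) time stretch in plain Python — a toy sketch, not how Transcribe!, a DAW, or a future Showcase would actually implement it (real tools add phase alignment and much more):

```python
import math

def hann(n):
    """Hann window, used to cross-fade overlapping frames."""
    return [0.5 - 0.5 * math.cos(2 * math.pi * i / n) for i in range(n)]

def time_stretch(x, rate, frame=1024, hop=256):
    """Naive overlap-add time stretch. rate < 1 slows the audio down,
    rate > 1 speeds it up; pitch stays (roughly) put because frames are
    copied as-is rather than resampled."""
    win = hann(frame)
    out_len = int(len(x) / rate) + frame
    out, norm = [0.0] * out_len, [0.0] * out_len
    t, pos = 0.0, 0          # read position (input) / write position (output)
    while int(t) + frame <= len(x) and pos + frame <= out_len:
        start = int(t)
        for i in range(frame):
            out[pos + i] += x[start + i] * win[i]
            norm[pos + i] += win[i]
        t += hop * rate      # walk the input `rate` times as fast...
        pos += hop           # ...as we walk the output
    end = pos - hop + frame if pos else 0
    return [out[i] / norm[i] if norm[i] > 1e-6 else 0.0 for i in range(end)]
```

Feeding it `rate=0.5` roughly doubles the length of the clip while the dominant frequencies stay where they were — which is exactly the "slow down but keep pitch" behavior being requested.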
That’s a cool request too. It would eliminate the need for a pitch block in the chain.

> I would also add being able to increase/decrease pitch (up to +/-3 semitones, maybe) while keeping the tempo. Useful for having backing tracks of songs in slightly different tunings.
We're apt to be in the minority, but this one would be huge for me as well. To give some idea of where I am: I've been working diligently since June 1st, virtually every day, trying to learn "On Your Way Sweet Soul" by Andy Timmons strictly by ear, and I 100% rely on being able to slow down the audio to do it. Even with that, in five months of daily work I've only made it just past the halfway point of the song, and I can keep up at 65-70% of the recorded tempo. So it's something I'd use daily well into the foreseeable future. My musical ear is just slow.

I've been using Transcribe! for years, and I suppose I can go back and forth between it and Stadium/Showcase, but it would be a huge workflow boost if I could make those tempo adjustments in real time on Stadium. That would probably make Showcase the new Stadium feature I used most, rather than something I just use occasionally. But I would completely understand if Line 6 didn't think that was an optimal use of their resources / too far afield from their core mission, especially at this early stage.

> @Digital Igloo - when the stem separation thing comes for Showcase…
> Will we be able to slow the backing track down and maintain pitch? This would be fantastic for practice. And for me in general, as my fingers often disappoint me :)
> If we will be able to do this, will it be a sliding scale / percentage of original speed? Or a simpler 1/4 - 1/2 - 3/4 - full thing?
Fuck AI. With a cactus.

> Chatting with a friend who's waiting for his Stadium XL, we wondered if it would be possible, in the future, to have a way to create presets with AI.
> We know Proxy will use Line 6 servers to process stuff sent from the devices, so an AI model trained on the Helix content could receive a prompt, either from the device or your phone (editor app) or computer (editor or some website), and respond with a preset that does what the user asked, e.g. "I want a preset that sounds like the guitar on Queens of the Stone Age's "No One Knows"".
> That would rock, not just for replicating certain guitar/bass tones, but for whatever crazy random stuff the AI could figure out.
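Nothing like this exists today, so purely to make the request concrete: a sketch of the kind of structured reply such a service would have to return, with a completely made-up schema and model names (Line 6 has published no such API), plus the sanity checks an editor would want before loading anything onto the device:

```python
import json

# Entirely hypothetical reply from a hypothetical prompt-to-preset service.
# The schema, block types, and model names are invented for illustration.
reply = json.dumps({
    "name": "No One Knows-ish",
    "blocks": [
        {"type": "drive", "model": "FuzzDrive",  "params": {"gain": 7.0}},
        {"type": "amp",   "model": "BritPlexi",  "params": {"drive": 6.5, "bass": 4.0}},
        {"type": "cab",   "model": "4x12Green",  "params": {"mic_distance_in": 2.0}},
    ],
})

def to_preset(text):
    """Validate the model's JSON reply before treating it as a preset."""
    p = json.loads(text)
    assert isinstance(p["blocks"], list) and p["blocks"], "empty chain"
    for b in p["blocks"]:
        assert {"type", "model", "params"} <= b.keys(), "malformed block"
    return p
```

The validation step matters because a language model can emit plausible-looking but malformed output; the editor, not the model, has to be the gatekeeper for what reaches the hardware.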
If it's the one I tried long ago, the resulting file wouldn't load on Helix.
"ChatGPT, what would it sound like if I swiped to the Focus View zone in the upper right corner?"
I bet you would have said the same about the internet 30 years ago.

> Fuck AI. With a cactus.
Half useful for me, then. I want to directly get the preset, and then maybe tweak it to my liking.

> This one doesn't create a patch; it just lists parameters.
Yep. I find myself pitching many Smashing Pumpkins songs (especially from 1995 to 2000) up one semitone to play along with them, as they were recorded in Eb.

> That’s a cool request too. It would eliminate the need for a pitch block in the chain.
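For the curious, this is why naive pitching also changes tempo: plain resampling shifts pitch and duration together. A toy linear-interpolation resampler makes that visible; pairing it with a tempo-preserving time stretch of 1/ratio is the textbook way to get a pitch shift at constant tempo. This sketch is illustrative only, not how the Helix pitch block works:

```python
import math

def pitch_ratio(semitones):
    """Frequency ratio for a shift in equal-tempered semitones."""
    return 2.0 ** (semitones / 12.0)

def resample(x, ratio):
    """Linear-interpolation resampling: raises pitch by `ratio`, but also
    shortens the clip by the same factor (the chipmunk effect). Following
    it with a tempo-only stretch of 1/ratio restores the duration."""
    n = int(len(x) / ratio)
    out = []
    for i in range(n):
        pos = i * ratio
        j, frac = int(pos), pos - int(pos)
        nxt = x[j + 1] if j + 1 < len(x) else x[j]
        out.append(x[j] + (nxt - x[j]) * frac)
    return out
```

Shifting a whole Eb-tuned backing track up one semitone is just `resample(track, pitch_ratio(1))` followed by a time stretch back to the original length.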
Build the preset from the given parameters, then tweak it. It really doesn't take long.

> Half useful for me, then. I want to directly get the preset, and then maybe tweak it to my liking.
Obviously. I know I can do that. I'm just saying that this process could be much faster with a proper AI implementation.

> Build the preset from the given parameters, then tweak it. It really doesn't take long.
Yep. In my job (developer), AI has been the biggest performance booster ever. Being able to make things in a tenth of the time is awesome.

> For what it's worth, I don't have a problem if Line 6 incorporates some form of AI into Stadium's future plans. It's commoditized now, and the expectation of it being in your product is hard to avoid. I'm kicking off a long push for the SaaS company I run content marketing for, precisely to inform prospects and existing customers of how we'll use it to make their usage easier.
> I'm expected to use it daily within my tech stack to optimize what I do as well. For all of the suckiness with deepfaked videos, AI slop, and outright creative/intellectual property theft, we've kinda crossed the Rubicon and have to keep moving forward. Then there's the fact that the entire US economy is perched upon a rather shaky house of AI cards... but, uh... moving on quickly!
> Getting back to music, there are smart use cases for it enhancing Stadium. If Showcase will examine, flag, and separate stems, it's only a matter of time before it EQ-matches those stems, allowing you to recreate virtually any guitar tone you hear online. It should be able to compensate for tunings and pickups as well.
> I can see an AI natural-language chatbot letting you search the Customtone Cloud, or whatever the wealth of free presets and Proxy captures ends up being called. LLMs are already great data synthesizers, so this makes perfect sense. When that happens, Stadium with Showcase will indeed become that mythical roadie that DI mentioned in the June 11 launch.
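To make the EQ-match idea concrete: at its core, matching means comparing the average magnitude spectra of two signals and deriving a per-band gain curve that maps one onto the other. A toy sketch with a naive DFT on a single frame (a real implementation would average many frames, work in bands, and smooth the curve):

```python
import cmath
import math

def dft_mag(x):
    """Magnitude spectrum via a naive DFT (fine for a toy example)."""
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n)))
            for k in range(n // 2)]

def match_eq_gains(source, target, eps=1e-9):
    """Per-bin gains that would make `source`'s spectrum track `target`'s --
    the core arithmetic behind any EQ-match feature."""
    s, t = dft_mag(source), dft_mag(target)
    return [tv / (sv + eps) for sv, tv in zip(s, t)]
```

If the target is simply the source with double the level in one frequency bin, the gain curve comes back as roughly 2.0 at that bin and near zero elsewhere, which is what you would then bake into an EQ block.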