Hell Freezes Over 2026 Edition

Sorta on-topic....AI tells me:
  • Neural DSP Quad Cortex: This floorboard modeler includes an internal power supply with an IEC inlet, allowing direct connection to a wall outlet without needing a separate power brick.
View attachment 57959

:ROFLMAO: AI hallucinations are so much fun. :ROFLMAO:
One of the most annoying and scariest things about AI is how confidently it can give false information, and double down when challenged, refusing to budge even when presented with facts that contradict its claims. If AI is ever given any type of legal authority we are totally fucked.
 
The more I read and watch videos of the latest releases, the more the current complete offering of hardware, software and cloud integration looks like something I can totally live with.

Damn this thing looks like it really crushes now. I’m getting pretty stoked to start capturing my amps.
I had one when they first came out, sold it after a few months. Revisited it again in 2023, sold. Picked one up again out of curiosity before the V2 update. Can’t see it going anywhere now. It’s finally what it should have been, only took 5 years.
 
I had one when they first came out, sold it after a few months. Revisited it again in 2023, sold. Picked one up again out of curiosity before the V2 update. Can’t see it going anywhere now. It’s finally what it should have been, only took 5 years.
I understand QC 1.0 was a disappointment, but I was already in love with mine by 2023. What flipped it for you in the latest update - the V2 captures?
 
Sorta on-topic....AI tells me:
  • Neural DSP Quad Cortex: This floorboard modeler includes an internal power supply with an IEC inlet, allowing direct connection to a wall outlet without needing a separate power brick.
View attachment 57959

:ROFLMAO: AI hallucinations are so much fun. :ROFLMAO:
Even the term 'hallucination' is wildly flattering and inaccurate industry talk. The model just produced a bullshit answer, because it can only ever (in effect) guess, in all cases, every time. It's a probabilistic text extruder.
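To make the "probabilistic text extruder" point concrete, here's a toy sketch (my own illustration, not how any real model is built) of next-token sampling over a hand-made bigram table. The probabilities only encode how often words follow each other, not whether the resulting sentence is true, so the same dice roll machinery happily emits a true claim or a false one:

```python
import random

# Toy bigram "model": for each word, a distribution over plausible next words.
# The weights are made up for illustration; they encode co-occurrence, not truth.
next_word_probs = {
    "the": {"Quad": 0.5, "power": 0.5},
    "Quad": {"Cortex": 1.0},
    "Cortex": {"has": 1.0},
    "has": {"an": 0.6, "no": 0.4},   # "has an ..." or "has no ..." -- pure chance
    "an": {"internal": 1.0},
    "internal": {"power": 1.0},
    "power": {"supply": 1.0},
}

def generate(start, n_words, seed=None):
    """Sample a sentence one word at a time from the bigram table."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(n_words):
        dist = next_word_probs.get(words[-1])
        if not dist:
            break  # no continuation known for this word
        choices, weights = zip(*dist.items())
        words.append(rng.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the", 8, seed=1))
```

Depending on the random draws it outputs "the Quad Cortex has no ..." or "the Quad Cortex has an internal power supply" with equal confidence either way, which is the whole problem in miniature.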
 
One of the most annoying and scariest things about AI is how confidently it can give false information, and double down when challenged, refusing to budge even when presented with facts that contradict its claims. If AI is ever given any type of legal authority we are totally fucked.
Only someone who has no idea what they're talking about regarding its capabilities would propose that it have any legal authority...but that's almost everyone, so yeah, that could happen and would be bad.
 
Long time friend of mine and great player just decided on the QC as his digital device. He's going to capture his amp with it, among other things.

I don't personally care for Gibsons, huge amps, or captures. I'm close friends with a guy (monster player) who's been playing a Les Paul Custom for over 30 years, a Road King for over 20 years, and is now buying a QC. Gear choice really need not be tribal.
 
Only someone who has no idea what they're talking about regarding its capabilities would propose that it have any legal authority...but that's almost everyone, so yeah, that could happen and would be bad.
I saw some police bodycam footage recently that was infuriating. Basically a casino had AI facial recognition that told their security that a customer in the casino was someone who was banned for life for whatever reason. The cop came, took the guy into the security office for questioning, and the guy was immediately able to provide proof that he was not the banned person, but just someone who looked similar, with a completely different name. The cop didn’t believe him even though he provided multiple forms of ID, and insisted on taking the guy to the police station for questioning. He kept saying ‘I don’t believe him, this technology they have is pretty cool and the computer says it’s the same guy’. At the end of the video the narrator (no doubt AI lol) said the guy was still being charged with trespassing and was engaged in an ongoing legal battle to prove his identity and get the charges dropped. So yeah you will have idiots that think technology that is “pretty cool” is infallible. Shit is about to get really bad. We’re already almost to the point that AI videos are indistinguishable from real ones, imagine how easy it will be to frame somebody.
 
I saw some police bodycam footage recently that was infuriating. Basically a casino had AI facial recognition that told their security that a customer in the casino was someone who was banned for life for whatever reason. The cop came, took the guy into the security office for questioning, and the guy was immediately able to provide proof that he was not the banned person, but just someone who looked similar, with a completely different name. The cop didn’t believe him even though he provided multiple forms of ID, and insisted on taking the guy to the police station for questioning. He kept saying ‘I don’t believe him, this technology they have is pretty cool and the computer says it’s the same guy’. At the end of the video the narrator (no doubt AI lol) said the guy was still being charged with trespassing and was engaged in an ongoing legal battle to prove his identity and get the charges dropped. So yeah you will have idiots that think technology that is “pretty cool” is infallible. Shit is about to get really bad.
Yep. And there are even worse examples that I cannot mention because it would be a violation. All in all, there are a whole lot of under-informed people placing a whole lot of baseless faith in deeply flawed, not-even-entirely-fixable technology, because so many of us are conditioned to basically believe whatever is told to us by a person, so long as they're rich enough.

We’re already almost to the point that AI videos are indistinguishable from real ones, imagine how easy it will be to frame somebody.
We're really not. People who know what they're doing can pick them out almost instantaneously, and they're not actually getting much better with any speed. The real risk is the stupid people believing in this stuff, as you describe above.
 
School AI surveillance is out of control. Right now. One high school kid got weapons pointed at him, and then handcuffed and searched because AI thought a bag of Doritos was something that rhymes with bun. Another middle school got locked down because their AI mistook a clarinet case for that sort of item.

We're already in very dangerous territory, and it will only get worse as more of these systems come on line.
 
Yep. And there are even worse examples that I cannot mention because it would be a violation. All in all, there are a whole lot of under-informed people placing a whole lot of baseless faith in deeply flawed, not-even-entirely-fixable technology because so many of us are conditioned to basicalyl believe whatever is told to us by a person, so long as they're rich enough.


We're really not. People who know what they're doing can pick them out almost instantaneously, and they're not actually getting much better with any speed. The real risk is the stupid people believing in this stuff, as you describe above.
Key word almost. Yeah it’s still obvious when a video is AI. But just a couple of years ago the best you could get was Will Smith eating spaghetti. We’re probably only a few years away from AI videos that could fool most people. We’re probably pretty close now, if you have the right combination of AI and human input directing/instructing it to smooth over the obvious signs.
 
Key word almost. Yeah it’s still obvious when a video is AI. But just a couple of years ago the best you could get was Will Smith eating spaghetti. We’re probably only a few years away from AI videos that could fool most people. We’re probably pretty close now, if you have the right combination of AI and human input directing/instructing it to smooth over the obvious signs.
I'm highly doubtful. For technical reasons, the models are not getting leaps and bounds better any more, and very likely can't.
 
I'm highly doubtful. For technical reasons, the models are not getting leaps and bounds better any more, and very likely can't.
Well AI is pretty optimistic about it :rofl
Current top models like OpenAI’s Sora 2, Google’s Veo 3 (and its iterations), Kling AI, and others routinely produce short clips (up to 15–60 seconds in many cases) that are indistinguishable from real footage for the average viewer, especially at typical social media resolutions and viewing conditions. Experts and researchers describe 2025–2026 as the period where deepfakes and synthetic video crossed the “indistinguishable threshold” for non-experts in everyday scenarios: low-res video calls, TikTok/Instagram clips, news-style snippets, UGC-style content, and even some cinematic shots.

Key realities right now:
• Photorealistic humans with natural expressions, lip-sync, subtle micro-movements, and physics-aware motion (buoyancy, rigidity, gravity, fluid dynamics) are standard in the best outputs.
• Common failure modes that used to scream “fake” (hand morphing, eye glitches, unnatural blinking, inconsistent lighting across frames) have been largely solved in leading models.
• In controlled or quick-cut scenes, even experts struggle to spot artifacts without forensic tools or frame-by-frame scrutiny.
• Real-time synthesis is emerging, meaning live deepfake video calls or interactive avatars that react naturally are rolling out or imminent in 2026.

Where it’s not fully indistinguishable yet:
• Long-form content (full movies or 10+ minute coherent narratives) still shows inconsistencies over time: character identity drift, lighting continuity breaks, or physics accumulating small errors.
• Extreme close-ups on faces in dramatic/emotional scenes can still trigger the uncanny valley for attentive viewers, though this gap is shrinking fast.
• Highly chaotic or novel physics scenarios (extreme sports with unpredictable crowd interactions, very long single-take shots) remain challenging but are improving monthly.

Timeline consensus from recent sources:
• Short-to-medium clips (seconds to ~1 minute): Already indistinguishable in many contexts as of late 2025/early 2026.
• Hour-long coherent video indistinguishable for most purposes: Predictions cluster around 2026 itself or early 2027, driven by “world models” that simulate physics and continuity more holistically.
• Full cinematic feature films with synthetic actors: Likely 2027–2030 for broad public indistinguishability, though Hollywood-level production could fake it earlier in controlled ways.

The line has effectively blurred. Most people scrolling feeds in 2026 are already consuming synthetic video without realizing it, and detection now relies more on metadata, provenance checks, or specialized forensics than naked-eye inspection. The capability exists today at high-end levels; it’s mostly about length, cost, accessibility, and edge-case polish before it’s trivially universal.
 
The IQ equivalent of leading edge AI tools went from about 40 to 170 in a year. They will continue to improve, maybe at a slower pace, but they will keep getting better and better.
 
I've been using computers since 1987 (IBM PC-JX at home and Macintosh computers at school), and like to think I'm pretty tech savvy. Look at this.



The voices are all wrong, but look at the people they spoofed. I only know it's fake because both Freddie and Ozzy are dead. But that's some next level sh*te.

Also, please be aware, we are not the guys on the bleeding edge of tech with the ChatGPT and Gemini stuff they feed us. The real good stuff is in the hands of the government and the billionaires.

And though I know some people may take offence, the pedos, the rapists, the gangbangers, the terrorists, the murderers, etc, will also be the first to be handed the "real good stuff".

At least that's what happened on my phone, my computer and other digital devices that connected to the internet back when spoofing voices was all the Shiteratti could manage. Now I'm nothing and nobody, destined to die alone haha

What I mean is, this stuff isn't being dished out for egalitarian purposes. They want you to think you can compete, but denial of technology is something that has been done throughout history and we are the unwashed masses.
 
I understand QC 1.0 was a disappointment, but I was already in love with mine by 2023. What flipped it for you in the latest update - the V2 captures?
Just a lot of smaller things adding up. New spring reverb that actually sounds good, same with newer effects, folders and organizing trees that I can manage myself, V2 captures, finally some plugins making it onto the thing.
 
This thing is SO DAMN GOOD!

Being able to load your own IR and set the High and Low pass while creating the Capture is awesome. Because it's the same IR and settings I'm going to use in my Preset.
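For anyone curious what "IR plus high/low pass baked into the monitoring chain" amounts to under the hood, here's a rough DSP sketch. Everything here is my own toy illustration: the IR is synthetic noise with a decay (a real one would be a cab IR WAV you load), and the 80 Hz / 8 kHz cutoffs are just example values, not what the QC defaults to:

```python
import numpy as np
from scipy.signal import fftconvolve, butter, sosfilt

fs = 48_000  # sample rate in Hz

# Synthetic stand-in for a cab IR: decaying noise burst, ~43 ms long.
rng = np.random.default_rng(0)
ir = rng.standard_normal(2048) * np.exp(-np.arange(2048) / 300.0)

def cab_block(signal, ir, fs, low_cut=80.0, high_cut=8000.0):
    """Convolve with a cab IR, then apply high pass and low pass filters."""
    # Cab simulation: convolution with the impulse response, trimmed to length.
    wet = fftconvolve(signal, ir, mode="full")[: len(signal)]
    # High pass tames low-end flub, low pass tames fizz -- same idea as
    # setting the High/Low pass while creating a Capture.
    sos_hp = butter(2, low_cut, btype="highpass", fs=fs, output="sos")
    sos_lp = butter(2, high_cut, btype="lowpass", fs=fs, output="sos")
    return sosfilt(sos_lp, sosfilt(sos_hp, wet))

# One second of A2 (110 Hz) sine as a stand-in for a guitar signal.
guitar = np.sin(2 * np.pi * 110 * np.arange(fs) / fs)
out = cab_block(guitar, ir, fs)
```

The point of doing this during capture, as the post says, is that you're auditioning through exactly the same IR and filter settings you'll use in the preset.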

2aSctq9.png



Auto switch compare feature, greatly appreciated.

e7bUK5b.png



Undo and Redo, be still my heart. This has already saved my ass a bunch of misery, seems to work on every setting.

uXzacjZ.png



The Editor is so easy to use. Wow, I'm seriously impressed and totally happy.

Honeymoon phase in full effect right now. It's time for some Niagara Falls, heart-shaped coin-op bed, mirrored ceiling love time with this baby! (too much?) :rofl

SJx0bUU.png
 