I think you’re underestimating the feasibility of AGI/ASI. The theoretical mechanisms are already proven and applied (in current narrow AI tech); it’s just a matter of extrapolation and scaling.
Not true. A transformer-architecture LLM (which is what they are scaling) exhibits zero capacity for AGI at any scale. In fact, thus far scaling has yielded much, much less improvement in outputs (especially in error rate - how often the extruded text actually represents accurate things) than they expected.
And if either of those seems too daunting for human developers, no matter - current AI tech will be used to develop (not so) future AGI tech.
That's not a thing (an LLM that can out-code a top-flight - or even a garden-variety - programmer). A large language model has no insight into anything. All it can do is probabilistically generate tokens, which it converts back to words, in response to the tokens derived from the words fed into it in a particular order. And it can only do that based on code that has already been written and fed into it to set the weights in its model. Every major vendor's AGI vision is hand-waving and fantasy-novel writing. They might as well tell you that magic wands are just a few years down the road, because they haven't any idea how to make a magic wand (if in fact such a thing is even possible), either.
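To make that mechanism concrete, here's a minimal sketch of probability-weighted next-token generation. It uses a toy bigram counter in place of a real transformer - the corpus and counts are made-up stand-ins for billions of learned weights - but generation is the same loop: sample the next token from a distribution fixed by previously written text.

```python
import random

# Toy stand-in for an LLM: "weights" are just counts of which token
# followed which in prior text. Corpus is hypothetical, for illustration.
corpus = "the cat sat on the mat the dog sat on the rug".split()

# "Training": tally successor counts (a stand-in for learned weights).
weights = {}
for prev, nxt in zip(corpus, corpus[1:]):
    weights.setdefault(prev, {})
    weights[prev][nxt] = weights[prev].get(nxt, 0) + 1

def next_token(prev):
    """Sample the next token in proportion to its learned weight."""
    candidates = weights.get(prev)
    if not candidates:
        return None  # nothing learned for this input: the model is stuck
    tokens, counts = zip(*candidates.items())
    return random.choices(tokens, weights=counts)[0]

# Generation: tokens in, probability-weighted tokens out. No insight,
# no model of the world - just sampling from the training distribution.
token = "the"
output = [token]
for _ in range(8):
    token = next_token(token)
    if token is None:
        break
    output.append(token)
print(" ".join(output))  # e.g. "the cat sat on the rug the dog sat"
```

A real model replaces the bigram table with a deep network and a far longer context, but nothing in that substitution adds reasoning; it only makes the sampled continuations more fluent.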
I've done security audits of code generated by professionals using the most sophisticated LLMs. I've used every major coding assistant currently offered. I've talked with leading 'AI' security researchers about the vector math behind the scenes. It's not what you're describing - not by several long leaps.
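As for "the vector math behind the scenes": the core transformer operation is scaled dot-product attention, which mixes token vectors in proportion to their pairwise similarity. A minimal NumPy sketch (shapes and values here are illustrative assumptions, not anyone's production code):

```python
import numpy as np

# Scaled dot-product attention, the core vector math of a transformer
# layer. Four tokens, eight-dimensional embeddings, random values.
rng = np.random.default_rng(0)
seq_len, d = 4, 8
Q = rng.standard_normal((seq_len, d))   # queries
K = rng.standard_normal((seq_len, d))   # keys
V = rng.standard_normal((seq_len, d))   # values

scores = Q @ K.T / np.sqrt(d)           # similarity of every token pair
probs = np.exp(scores - scores.max(axis=-1, keepdims=True))
probs /= probs.sum(axis=-1, keepdims=True)   # softmax over each row
out = probs @ V                         # weighted mix of value vectors

print(out.shape)  # (4, 8): one re-mixed vector per input token
```

It's linear algebra over learned matrices, end to end. There's no component in the stack you could point to and call understanding.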
They don't even know what they're trying to solve for, because they don't know what AGI even *is*. They're just taking a stochastic parrot (as Dr. Emily Bender - who understands the architecture at a code level - puts it) and hyping it up.
That, and enormous financial interests, make all of this nearly inevitable.
Oh, there are enormous financial interests, alright. It's a bubble. Watch.
Might someone some day actually come up with some sort of software that is capable of becoming a general intelligence, an independent mind capable of reasoning, ideation, and all the rest? Nobody knows. But not one of the current 'AI' players has a single solitary clue how they would make that yet. An LLM will not scale into an AGI.
There would be enormous financial interest in alchemy, too. But it's still not a thing. And unlike AGI, alchemy is a concrete goal with a known definition and at least a theoretical process that can be detailed with existing knowledge.
Meanwhile, so much for our climate goals…
There we're agreed.
There are plenty of reasons to be worried about what that industry is doing right now. The notion that, while stepping on their own dicks left and right and burning tens of billions making and serving (via massively destructive data centers that are jacking up power bills) software with no currently viable path to profitability, they'll by pure chance (because, again, they haven't even a theory of a technique with any remote chance of working) create an AGI that threatens humanity, is not one of them.