The Official Original Artificial Intelligence We're All F***ing Doomed Thread

It opens up interesting questions about humanity and communication though. LLMs are starting to reach a level where they are quite convincing, and hard to distinguish from someone who writes succinctly online. To the point that some people will think they are seeing AI answers if someone posts with bullet points or bolded text, even though the AI is just trained on all the literature written by humans.

Here in Finland there's a meme about a Russian propaganda bot/troll farmer on Twitter posting "NATO can't save Finland" translated to "NATO ei voi tallentaa Suomea", which is a literal translation...except it means "NATO can't save Finland to a <file/disk/hard drive>" in Finnish. :ROFLMAO:

With LLMs, this sort of influencing will become a lot more convincing.
It's a great tool for making garbage, some of which happens to fool people, no question. It's poised to be the liar's best friend, whether that liar is a scammer or a state. The bubble will pop horrifically, but the lying function will continue on past it.
 
Yes, it's a massive bubble.

Also, most of what is hyped as 'AI' (specifically generative code extruding text and images) is not and can never be the "takes over the world" AGI/superhuman intelligence. Nobody has any idea how to generate such a piece of software, if indeed one is possible at all. Many other things that are called 'AI' are just code and/or automation, no different than code that came before it.

We're in the midst of yet another hype cycle. There are a few uses for some aspects of generative code. Bubble con men are prone to point to certain interesting research applications of machine learning (ML) code, trying to make their scammy offerings seem better by association with them. Any such association is fabricated.

A Large Language Model is just a probabilistic text generator. People ascribe intelligence to it due to a bug in human cognition that assumes anything that generates text possesses intellect. Way back in the 1960s, people who should have known better believed that a piece of code called ELIZA, which answered text with text, had developed consciousness.
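
To give a sense of how little it took to fool them: below is a toy ELIZA-style responder in Python. To be clear, this is my own throwaway sketch of the idea, not Weizenbaum's actual 1966 program - but keyword matching not much fancier than this was enough to convince people the machine understood them.

import re

# Toy ELIZA-style responder (illustrative sketch only; the real ELIZA used a
# much larger script of keyword rules). It has no understanding of anything:
# it just matches a pattern and reflects a canned phrase back.
RULES = [
    (re.compile(r"\bI need (.+)", re.IGNORECASE), "Why do you need {0}?"),
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"\b(?:mother|father)\b", re.IGNORECASE), "Tell me more about your family."),
]

def respond(text):
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(*match.groups())
    return "Please go on."  # default reply when nothing matches

print(respond("I am worried about AI"))  # -> How long have you been worried about AI?
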
I think you’re underestimating the feasibility of AGI/ASI. The theoretical mechanisms are already proven and applied (in current narrow AI tech); it’s just a matter of extrapolation and scaling. And if either of those seem too daunting for human developers, no matter - current AI tech will be used to develop (not so) future AGI tech.

That, and enormous financial interests, make all of this nearly inevitable.

Meanwhile, so much for our climate goals…
 
I think you’re underestimating the feasibility of AGI/ASI. The theoretical mechanisms are already proven and applied (in current narrow AI tech); it’s just a matter of extrapolation and scaling.
Not true. A transformer-based architecture LLM (which is what they are scaling) exhibits zero capacity for AGI at any scale. In fact, thus far scaling has yielded much, much less improvement in outputs (especially in terms of error rate relative to extruding text that actually represents accurate things) than they expected.

And if either of those seem too daunting for human developers, no matter - current AI tech will be used to develop (not so) future AGI tech.
That's not a thing (an LLM that can out-code a top flight - or even a garden variety - programmer). A large language model has no insight into anything. All it can do is probabilistically generate tokens, which it converts to words, in response to the tokens matched from the words fed to it in a particular order. It can only do that based on code that has already been written and fed into it to weight the tokens in its model. Every major vendor's AGI visions are hand-waving and fantasy novel writing. They might as well tell you that magic wands are just a few years down the road, because they haven't any idea how to make a magic wand (if in fact such a thing is even possible), either.
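
To put the "probabilistic token generator" bit in concrete terms, the generation loop boils down to roughly this (a hand-wavy Python sketch; model.next_token_probs is a made-up stand-in for whatever network spits out the next-token probabilities, not any vendor's actual API):

import random

def generate(model, prompt_tokens, max_new_tokens=50):
    # Autoregressive decoding: the only question the model ever answers is
    # "given the tokens so far, how likely is each possible next token?"
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        probs = model.next_token_probs(tokens)   # hypothetical: dict of token -> probability
        choices = list(probs.keys())
        weights = list(probs.values())
        next_token = random.choices(choices, weights=weights)[0]  # sample one token
        tokens.append(next_token)  # feed the pick straight back in and repeat
    return tokens

The loop just feeds its own output back in as input; nothing in it knows or cares whether the resulting text is true.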

I've done security audits of code generated by professionals using the most sophisticated LLMs. I've used every major coding assistant currently offered. I've talked with leading 'AI' security researchers about the vector math behind the scenes. It's not what you're describing by several long leaps.

They don't even know what they're trying to solve for, because they don't know what AGI even *is*. They're just taking a stochastic parrot (as Dr. Emily Bender - who understands the architecture at a code level - puts it) and hyping it up.

That, and enormous financial interests, make all of this nearly inevitable.
Oh, there are enormous financial interests, alright. It's a bubble. Watch.

Might someone some day actually come up with some sort of software that is capable of becoming a general intelligence, an independent mind capable of reasoning, ideation, and all the rest? Nobody knows. But not one of the current 'AI' players has a single solitary clue how they would make that yet. An LLM will not scale into an AGI.

There would be enormous financial interest in alchemy, too. But it's still not a thing. And unlike AGI, that's a concrete goal with a known definition and at least a theoretical process that can be detailed with existing knowledge.

Meanwhile, so much for our climate goals…
There we're agreed.

There are plenty of reasons to be worried about what that industry is doing right now. The notion that, while stepping on their own dicks left and right and burning tens of billions making and serving (via massively destructive data centers that are jacking up power bills) software with no currently viable path to profitability, they'll somehow, by pure chance (because, again, they haven't even a theory of a technique with any remote chance of working), create an AGI that will threaten humanity - that is not one of them.
 
Where are the big AI fans from the other thread? Curious what their thoughts are, after viewing some of the stuff posted here.
I might be considered one of those. Hadn't said anything yet because I'm having the biggest hangover of the year ✌️

Anyway, I'm not a big AI fan, just a person who thinks AI is something impossible to stop from being part of our lives in the next few years, just like the internet, smartphones, or social media before it.

While being aware of the cons of the technology, I'm like "so what?". Being aware of every new technology's cons has proven pointless every time, so I'm more inclined to accept what I don't have control over, and to try to think about the good things it will bring.

It's not like current society is perfect and beautiful and AI will end it. We're pretty much already screwed up on so many levels, so I don't think it's realistic to try to preserve it with a fight that is already lost.

On the other hand, in the AI off-topic in the Helix Stadium thread I only talked about what I think would be two good things made with AI: being able to create Helix presets from a prompt (saving you dozens of clicks and leaving just the fine-tuning; that is, a time-saving feature), and having an interactive, customizable virtual jam band to play with at home. And those ideas were treated like the beginning of the end of mankind 🤣

To me the biggest risk of AI is it being used by people who don't have the domain knowledge about the subject they are asking about.

I'm a programmer. So far I can't use AI in my work because I'm working on a project that deals with sensitive data, so I'm a bit behind the times in this regard.

But I have read so many reports of people using, and trusting, the work generated by AI. Like programmers who cannot explain what the code churned out by AI does, but will confidently try to push it to a shared repository. Which would mean they are not only bad at programming, but also an unnecessary cog in the machine who could be replaced by a middle manager, or even the CEO.
If you can't analyze and understand the code provided by AI, you shouldn't be doing your job. Period.

But AI as a booster is an amazing tool. If I already know what I need to do, but I don't want to type every single character of every single line of code - because it's faster to ask AI -> check the provided code -> test it -> move on to the next thing - what's the problem?

Like AI code autocompletion. That's f€&#@ng bliss. Why not embrace a tool that lets you deliver your work faster? A different thing is not knowing what you have to do and just pasting the code. I don't think that's the majority of programmers. But saying "a true developer doesn't use AI" is just ignorant and kinda embarrassing, especially from a person to whom technology is a natural thing.
 
But saying "a true developer doesn't use AI" is just ignorant and kinda embarrassing, especially from a person to whom technology is a natural thing.
I never said such a thing. I said that if you are basically acting as a prompt typer, without any understanding of the end result, then you are easily replaceable.
 
Anyway, I'm not a big AI fan, just a person who thinks AI is something impossible to stop from being part of our lives in the next few years, just like the internet, smartphones, or social media before it.
I think this thought process, that it's no different than smartphones, the Internet, or even social media, is a bit ignorant. Or shall I say short-sighted, since "ignorant" tends to have a negative connotation. The difference is that while the Internet has had some effect on job displacement, it's nowhere near the same in terms of the speed at which jobs were displaced. If what is talked about in some of the videos previously posted comes to fruition, we could be seeing job loss on a scale far greater than at any other time in history.

I tend to think of smartphones, the Internet, and social media as connecting people, and AI (especially AGI) as replacing people.

EDIT

Is it going to continue to advance, whether I like it or not? Sure, I don't deny that. Not much I can do, obviously. I honestly just hope concerns (mine included) about AGI in particular are wrong and/or overblown, and we can continue riding the wave.
 
I think this thought process, that it's no different than smartphones, the Internet, or even social media, is a bit ignorant. Or shall I say short-sighted, since "ignorant" tends to have a negative connotation. The difference is that while the Internet has had some effect on job displacement, it's nowhere near the same in terms of the speed at which jobs were displaced. If what is talked about in some of the videos previously posted comes to fruition, we could be seeing job loss on a scale far greater than at any other time in history.

I tend to think of smartphones, the Internet, and social media as connecting people, and AI (especially AGI) as replacing people.

EDIT

Is it going to continue to advance, whether I like it or not? Sure, I don't deny that. Not much I can do, obviously. I honestly just hope concerns (mine included) about AGI in particular are wrong and/or overblown, and we can continue riding the wave.
IMO, most (though not all) of the job displacement of late was going to happen anyway. They're using 'AI' as the excuse, because things have been softening in general. The big tech layoffs are absolutely not because those people were replaced by LLMs. To the contrary, those big tech firms have been pouring tens of billions of dollars into LLM implementations that have not yielded much in the way of revenue. They're laying people off in order to double down on the losing bet without it looking as bad on the quarterly statement. People losing their jobs through no fault of their own is absolute catnip to the stock market.
 
IMO, most (though not all) of the job displacement of late was going to happen anyway. They're using 'AI' as the excuse, because things have been softening in general. The big tech layoffs are absolutely not because those people were replaced by LLMs. To the contrary, those big tech firms have been pouring tens of billions of dollars into LLM implementations that have not yielded much in the way of revenue. They're laying people off in order to double down on the losing bet without it looking as bad on the quarterly statement. People losing their jobs through no fault of their own is absolute catnip to the stock market.
Yeah, I can see that. I mean, I in no way think LLM's are what is to "fear", even as they get more advanced.
 
IMO, most (though not all) of the job displacement of late was going to happen anyway. They're using 'AI' as the excuse, because things have been softening in general. The big tech layoffs are absolutely not because those people were replaced by LLMs. To the contrary, those big tech firms have been pouring tens of billions of dollars into LLM implementations that have not yielded much in the way of revenue. They're laying people off in order to double down on the losing bet without it looking as bad on the quarterly statement. People losing their jobs through no fault of their own is absolute catnip to the stock market.
[mood GIF]

Also compounding things: the "free money" that sustained us in Big Tech for the last decade has dried up, plus the crazy pandemic-era over-hiring. It was a good market for us employees for about 6 months during that time.
 
Not true. A transformer-based architecture LLM (which is what they are scaling) exhibits zero capacity for AGI at any scale. In fact, thus far scaling has yielded much, much less improvement in outputs (especially in terms of error rate relative to extruding text that actually represents accurate things) than they expected.
I can surmise from this that you know more about the current state of the art than I do, and I'm happy to defer to you. Like any sensible pessimist, I'm happiest when I'm wrong. :D That said, I still think we're talking past one another a bit - with you focusing on the present and near term; and me looking at concerns some (admittedly unknown) number of years down the road.

That's not a thing (an LLM that can out-code a top flight - or even a garden variety - programmer). A large language model has no insight into anything.
Of course not. Everyone is aware that these models function in a purely statistical manner, and have no real "insights". It's Borel's infinite monkeys typing Shakespeare, essentially. That's neither here nor there, except that in many ways it makes matters worse: cost, waste, and the difficulty of refining out unwanted behaviors the way more conventional software development allows.

There we're agreed.

There are plenty of reasons to be worried about what that industry is doing right now. The notion that, while stepping on their own dicks left and right and burning tens of billions making and serving (via massively destructive data centers that are jacking up power bills) software with no currently viable path to profitability, they'll somehow, by pure chance (because, again, they haven't even a theory of a technique with any remote chance of working), create an AGI that will threaten humanity - that is not one of them.
I think we agree more than we disagree. The bottom line is that we don't need to succeed in creating AGI in order to be damned by the pursuit. A handful of billionaires hell-bent on being first to the finish line is enough to accelerate unemployment, wealth disparity, and climate change. Current narrow AIs (as secretly stupid as they may be) are still plenty dangerous when coupled with agents with connectivity to critical infrastructure. (The Internet of Things' jacked-up evil sibling.)

My point is, we don't have to make bold assumptions about the viability of AGI (acknowledging that I tend to do just that LOL) to be very concerned about this technology.
 
I can surmise from this that you know more about the current state of the art than I do, and I'm happy to defer to you. Like any sensible pessimist, I'm happiest when I'm wrong. :D That said, I still think we're talking past one another a bit - with you focusing on the present and near term; and me looking at concerns some (admittedly unknown) number of years down the road.
I was focusing on the invocation of AGI and the notion that it is under any scenario what they're scaling towards. It isn't, because they don't even know what they would need to build in order to get there. It's like implying that they're scaling to build Hogwarts, a wardrobe that is a portal to Narnia, a warp drive, artificial gravity generators, or 'beam me up' transporters. "In a few years."

Of course not. Everyone is aware that these models function in a purely statistical manner, and have no real "insights". It's Borel's infinite monkeys typing Shakespeare, essentially. That's neither here nor there, except that in many ways it makes matters worse: cost, waste, and the difficulty of refining out unwanted behaviors the way more conventional software development allows.
It also points to the reality that AGI is a fantasy topic right now. It's the Monorail con from the Simpsons, but with the entire rich world economy and staggering amounts of resources being burned up by the con. LLMs cannot scale to AGI any more than Magic 8 Balls can.

I think we agree more than we disagree. The bottom line is that we don't need to succeed in creating AGI in order to be damned by the pursuit. A handful of billionaires hell-bent on being first to the finish line is enough to accelerate unemployment, wealth disparity, and climate change. Current narrow AIs (as secretly stupid as they may be) are still plenty dangerous when coupled with agents with connectivity to critical infrastructure. (The Internet of Things' jacked-up evil sibling.)
Yes, those people are the problem. LLMs are just the snake oil that they're selling.

My point is, we don't have to make bold assumptions about the viability of AGI (acknowledging that I tend to do just that LOL) to be very concerned about this technology.
I'm more concerned about the companies currently pushing this stuff, and about the people at the top of them, than I am about any of the technologies themselves. They are rationalizing massive overspend on a product that is both not especially viable commercially and massively wasteful. It will end very badly.
 
I might be wrong, but just as we see companies firing people because they can reach the same goals with fewer people thanks to AI boosting them, I think many new companies will be able to start with fewer people than they would currently.

And if many jobs disappear anyway, it's not the first time in history this has happened. Many people would have to retrain, but I don't think this is the start of a global poverty era.
 
And if many jobs disappear anyway, it's not the first time in history this has happened. Many people would have to retrain, but I don't think this is the start of a global poverty era.
I always felt that technological advancement that kills jobs isn't necessarily a bad thing, as long as the populace keeps pace and continues educating themselves. I.e., when tractors came along, if you were a farmhand you'd be fine, provided you learned how to work on tractors and such.

But as I see it, the number of those jobs never aligns with population growth. Not to mention, the education of the general masses sure seems to be going in the opposite direction. Shit, cashiers can't even do basic math!
 
Armies of robots seem like they'd be too expensive. Armies of cheap drones? More likely.

Expense is an afterthought for the world's billionaires. Never forget how many years Amazon was unable to turn a profit, yet Jeff Bezos was never removed and people kept putting money into his company till all the mom and pop shops waging war with Amazon were forced to quit.

I would not doubt our industrialists' interest in protecting themselves beyond entrusting their security to drones. Heck, just think of the Elon Musk of 2100 with robots flipping his burgers, shining his shoes and clearing mobs of people who thought they would take him out.

Nation states also theoretically have unlimited wealth. India? Pakistan? Russia? China?

Ditch the dollar and it happens.

Digressing a bit, I think the success of any artificial intelligence will hinge on compute resources. So expansion and improvement of this capacity will be said AI's overriding ambition.

Have you heard of dark factories in China? Now, think of a horde of microscopic robots, nanometers in size, which build more of themselves, perhaps using an additive manufacturing process.

As this is going on, once the number of these robots hits a critically viable threshold, they start to produce slightly bigger machines to produce even larger machines. And this goes on night and day, until the point we have machines that are man-size or even larger (or smaller) and equipped with weaponry. And then the leviathans, the Star Destroyers, the Death Stars.

There are also robots producing bullets and advanced weaponry, acquiring resources required for production, "eating up people and their farmland in the interest of expansion", and of course, producing more and more of themselves.

Might be too futuristic though; I see the people of this planet turning on each other in the near future and blowing ourselves back to the stone age bwahaha
 