The Official Original Artificial Intelligence We're All F***ing Doomed Thread

mbenigni

Rock Star
TGF Recording Artist
Messages
7,121
This is a place to share your thoughts - positive, negative, indifferent - about the near, medium, and long-term implications of Artificial Intelligence.

Mainly so those poor folks in the Helix Stadium thread can get back to bickering about the release date.

OK, just to get us started: I haven't watched this specific video yet, but this guy knows approximately 1,000x more about AI than all the people telling you everything will be fine:

 
Did anyone see the Russian AI robot faceplant yesterday? Then that guy tries to pull the curtain back and totally fails. The two guys behind the robot try to pick it up, but it just looks like 2 a.m. outside a college bar hahahahahahahahahahah

It's probably the funniest vid I've seen in a while.


THIS HAS BEEN A SUCCESSFUL UNVEILING!!!!!!!
 
Positives:
1. Fun to make AI-generated memes.
2. Quick way to see if there's a way to do something in OSS and/or Windows, if I can't find it on Reddit or Stack Exchange
3. Re-wording manufacturer descriptions quickly, to use for ad copy
4. LLMs built into Google (Gemini) or DuckDuckGo (DuckAi) do very much improve search results, which is quite helpful. Meta's is, at this point, pointless to me.

Negatives:
1. Power requirements can be really bad, and thus bad for the environment (though there are solutions being implemented)
2. The previously mentioned effects on art, movies, music, television, etc. raised by @Sascha Franck, which I tend to agree with but deem unstoppable at this point.
3. The big one that actually makes me nervous is the race for AGI, Artificial General Intelligence, and by extension "superintelligence": systems both more general in expertise and smarter than humans. This has prompted many people to come out completely against AI, or at least to call for slowing development.





Interesting debate:

 
 
Your Negative #3 is where my head is at lately. That and the mass unemployment that will likely precede it by a couple of years. The rest (positive and negative) are already in effect, and they're just details by comparison.

One negative I'd add is how terribly effective AI has been and will be for targeted disinformation campaigns.
 
All very good points.

I'll add the GIGO and "carbon copy" degradation traps, where unreliable Internet "data" is used to train AIs that are then used to train new AI models, compounding hallucinations and misinformation.
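
For the curious, that "carbon copy" loop can be sketched as a toy simulation. This is nothing like real training: the bit-flip corruption model and the 5% per-generation noise rate are made-up assumptions, just to show how errors compound when each generation learns from the previous one's output:

```python
import random

# Toy sketch of the "carbon copy" trap: each model generation is trained on
# the previous generation's output, and a small fraction of "facts" gets
# corrupted each time (a stand-in for hallucinations entering the data).
def train_generation(facts, noise=0.05):
    """Reproduce the training data, flipping each bit with probability `noise`."""
    return [f if random.random() > noise else 1 - f for f in facts]

random.seed(0)
facts = [1] * 1000            # generation 0: every "fact" is correct
accuracy = []
for generation in range(10):
    facts = train_generation(facts)
    accuracy.append(sum(facts) / len(facts))

# Accuracy drifts downward toward coin-flip territory as errors compound.
print([round(a, 3) for a in accuracy])
```

The point of the sketch: even a small, constant error rate eats away at the data generation after generation, which is roughly the mechanism behind the "model collapse" worry.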

Then there is the unbalanced power dynamic, where AI models are purposely aligned to "ideologies" and potentially distorted data sets intended to benefit their creators/sponsors to the detriment of everyone else. Power and greed lead to corrupt intent.

F it, we're all doomed.

 
Watched a video of Hinton on the subject a while ago already. This guy very definitely knows what he's talking about, so when he's worried, it's definitely time to be worried.
 
Negatives:
1. Power requirements can be really bad, and thus bad for the environment (though there are solutions being implemented)
2. The previously mentioned effects on art, movies, music, television, etc. raised by @Sascha Franck, which I tend to agree with but deem unstoppable at this point.
3. The big one that actually makes me nervous is the race for AGI, Artificial General Intelligence, and by extension "superintelligence": systems both more general in expertise and smarter than humans. This has prompted many people to come out completely against AI, or at least to call for slowing development.

4. AI being consulted to have a say in geopolitical and economic decisions. In fact, that's possibly what I'm scared about the most.
With points 4a-z being things such as AIs controlling weapons (already happening), commanding military operations (perhaps already partially happening) and whatnot.

Let's face it, AIs are extremely good at tons of strategic things. There's a reason humans have zero chance of ever winning a game of Chess or Go against an AI again. Especially with Go, that's quite scary (unlike Chess, there's not exactly a library that can be studied).
And well, things such as the stock market, but also international trade and, well, wars 'n' stuff, are based on strategic decisions.
Suno completely destroying music production will feel like a nice summer breeze compared to the things we might (or almost certainly will) see in these areas.
 
I can give myself a good bout of anxiety thinking about it for too long. I really didn’t think I’d be alive when technology hit the point of scaring me more than it impresses me.

On one hand, it’s been a game changer for me at work. I’ve used it for ideas all over the place: from fixing stuff, to selling 4,000 gallons of diesel, to having the information about the installation and use of an MRI machine to quell a very large tenant’s concerns about noise and interference. In that direction it’s fucking great.

But I don’t believe the positives outweigh the negatives in the long run. Just this one thought is fucking crazy, mind-blowing to me: in a short amount of time we will not know what is real and what is not unless we see it with our own fucking eyes, physically.
And even then there’s reason to question it.

My dad used to tell me, “Believe none of what you hear and half of what you see”, but I believe that’s about to get updated.
 
I really didn’t think I’d be alive when technology hit the point of scaring me more than it impresses me.

Same here. It was like "fuck it, Skynet is only happening in movies or in the far distant future". And here we are, almost there.

But I don’t believe the positives outweigh the negatives in the long run.

This is the very issue I'm having with all things AI.

Yes, there are truckloads of areas where AI-driven research and whatnot could be incredibly helpful (and in fact, it already is).
But all that isn't in the interest of the powers that be. I really hope this won't trigger a political discussion (there's pretty much no need for one), but I guess we can all agree that most of the "owners" of whatever AIs don't see public interest/welfare as a primary concern. First and foremost, they want their AIs to generate obscene amounts of money going into their pockets. And when it's not directly about money, it's about power.

When you think about it, with all those ridiculous amounts of money in the hands of pretty few folks, you could solve hunger, poverty, disease and whatever other misfortunes in, well, days - if only you allowed AI to take care of the logistics. But instead, AIs are used for the polar opposite. You can take a bet that there's massive AI impact happening in, say, stock market predictions already, or in any other kind of big financial transaction stuff. As said, all that strategic stuff is where AI really shines. And you can just as well bet that the best algorithms aren't available to Joe Average.

Fwiw, the same goes for the military complex. Just as with any communications technology, these guys will have their hands on far more advanced AI incarnations than what is available to the public. And it's certainly not being used for good - regardless of which side you're on.

Personally, I'm really, really scared about the future of my kids. Had I known 10-15 years ago that this was about to happen, I might have thought differently about having kids (and I'm saying that as someone who loves his kids more than anything else).
 

Absolutely! To think I used to just worry about the world I’m leaving my kids, due to the environment. Now there’s still that, and this AI shit.
 
Yeah - and come to think of it: I'd bet AIs could solve pretty much all environmental issues more or less in a heartbeat, too. But instead it almost seems like a race between the two to erase human life as quickly as possible.
 
Well sure; a purely logical AI, not influenced by human agendas (I know, easier said than done) would say we need to curb a lot of things, and/or people.
 
All very good points.

I'll add the GIGO and "carbon copy" degradation traps, where unreliable Internet "data" is used to train AIs that are then used to train new AI models, compounding hallucinations and misinformation.

Then there is the unbalanced power dynamic, where AI models are purposely aligned to "ideologies" and potentially distorted data sets intended to benefit their creators/sponsors to the detriment of everyone else. Power and greed lead to corrupt intent.

F it, we're all doomed.
This is where it is already going in some cases: AI rehashing incorrect information. If all LLMs have that data ingested or injected, then they're all spewing stupidity, and things don't go well from there.

AI warfare... some cool stuff that could be used for the betterment of society, and some of it will cross over, but in the hands of the government fuckwits running it (aka politicians) it's rarely a good thing.

Psychological warfare will go into overdrive - as if it hasn't already. It was perfected and around long before the majority of us were on the planet. Technology just allows a much faster and broader reach, and the ability to pivot to the next new thing if the desired results aren't achieved.
 