The Official Original Artificial Intelligence We're All F***ing Doomed Thread

No worries, I predict full-scale nuclear war in 2026 or 2027; AI will be useless after that. ^^
Thanks for the pep talk!

 
I do think LLMs are useful, and I'm not entirely against AI, to an extent. But given that these companies are anything but altruistic, I also think there should be more oversight.
You're not wrong that LLMs have (extremely limited) uses. And you're also not wrong that the companies building and running them are not altruistic (it's more accurate to say that they mean us harm, from top to bottom), and that there should indeed be more oversight of all aspects of their operations. The nutjobs who run those companies, however, appear to have largely achieved capture of most of the entities that otherwise might oversee them. The filthy lucre of the bubble is still holding sway. For now.
 
You're not wrong that LLMs have (extremely limited) uses. And you're also not wrong that the companies building and running them are not altruistic (it's more accurate to say that they mean us harm, from top to bottom), and that there should indeed be more oversight of all aspects of their operations. The nutjobs who run those companies, however, appear to have largely achieved capture of most of the entities that otherwise might oversee them. The filthy lucre of the bubble is still holding sway. For now.
Oh yeah, 110%. In fact, they're actively lobbying against regulation (not to get too political). That's why if we're relying on oversight, it'll have to come from somewhere else. Where, I have no idea though, I'm an IT guy not an expert in that topic. It's disconcerting.
 
Oh yeah, 110%. In fact, they're actively lobbying against regulation (not to get too political). That's why if we're relying on oversight, it'll have to come from somewhere else. Where, I have no idea though, I'm an IT guy not an expert in that topic. It's disconcerting.
I work in security and compliance in an extremely high-dollar environment, and I used to work in infrastructure at a big tech company, where my team helped build 'AI' clusters. We spent so much on Google Cloud that they bought my team and me a fancy dinner every time we crossed paths at conferences. So I've definitely seen a lot of how this stuff gets built out, and I know which people are experts - and it's not the ones currently on the tech bro payrolls. There might be some places that do oversight, but they probably won't really get a chance to come in and clean house until the bubble pops and the money and hype dry up.
 
I work in security and compliance in an extremely high-dollar environment, and I used to work in infrastructure at a big tech company, where my team helped build 'AI' clusters. We spent so much on Google Cloud that they bought my team and me a fancy dinner every time we crossed paths at conferences. So I've definitely seen a lot of how this stuff gets built out, and I know which people are experts - and it's not the ones currently on the tech bro payrolls. There might be some places that do oversight, but they probably won't really get a chance to come in and clean house until the bubble pops and the money and hype dry up.
I appreciate your insight. I'm just a SysAdmin for a local govt. So I'm not going to see ridiculously high end systems any time soon anyway. :D
 
Not that I’m a fan of theirs either, but Anthropic (sp?) seems to be the only company with a CEO at least slightly concerned about the ramifications of their work. Sam Altman comes across as a straight up sociopath in every interview I’ve seen.
 
I’m not as worried about people getting stupid as I am about the economic side.

Overall, human stupidity is of course driving all this forward. No matter how much I want to keep myself out of it, I am also contributing in a microscopic way, as everyone else does, willingly or not. And I guess, philosophically, that’s the beauty and the beast of all this. The machine was turned on long ago and it’s unstoppable.

For everyday stuff, I guess it could settle down after a lot of stupid shit gets knocked out and AI as a tool finds its place for us. One could argue that AI could also see the problem (human stupidity could be the downfall of AI itself) and the need to evolve into something useful for us and for itself. A common ground, a coexistence of sorts.

Economics is more alarming today than AI tech itself. Again, human stupidity.
I repositioned once I realized my old “global” fund had turned into an American tech fund. Got to keep an eye on such things… after all, it’s my pension in there…

It’s not that I’m worried about a crash like the one people fear today, a repeat of the late-90s bubble blowing up. It’s not the same thing. Instead there seem to be a lot of smaller “crashes” more frequently today, which get compensated up and down… because of stupidity… from politics and the masses.
In a weird sense, global funds are both a pro and a con in all this because of their nature and operations, as more and more people over the last decades have realized that saving in funds is a smart way to invest. It was… Today, maybe not. It depends on whether one is cautious about where the funds invest…
So yeah, I’m more worried about the more frequent “smaller” crashes that somehow seem to dodge a great crash.

Sorry this turned into an economics thing from me, but someone has to say it…

For a very long time I had problems with the principles of where I place my funds. Asia/China/emerging markets were long out of the question… not so much now. Questionable companies and ethics… unethical business… make the comparisons today and judge for yourself. I went from 90% global (which was mostly USA-tech-focused anyway) to 50% home bias, 30% Europe/emerging markets, and 20% American. I’ve also realized that repositioning more often than once a year is becoming important. Now I tend to lean more into home industry, health, and consumer goods… than the crazy tech.

Maybe it’s better to build my own portfolio of individual stocks… But that gives me more headaches than anything because of my OCD/autistic tendencies…

(Just thoughts… I’m not by any means an expert in economics. THIS POST SHALL NOT BE CONSIDERED FINANCIAL ADVICE)
 
My concern with AI has very little to do with AI itself and everything to do with our species' tendency to immediately weaponize any technological breakthrough into a tool of mass enshittification.
 