The Official Original Artificial Intelligence We're All F***ing Doomed Thread

I’ve seen a lot of people post about how amazing the new AI tech is for creating music, but then you hear the fruits of their labour, like that country song that went number one, and I’m sorry, it all just sounds like absolute tripe to me.

It’s completely basic and hackneyed, with obvious glitches in the audio, most notably in the vocals, which completely give the game away that it’s AI. I just don’t get it.

Can anybody here post any examples of AI generated music that even comes close to quality music that’s produced by humans?
Guess I’ll put AI music back into the “load of old shite” category for now then!
 
Guess I’ll put AI music back into the “load of old shite” category for now then!

Anything I’ve purposely listened to sounded weird, generic, and lacking in feeling. But then again, perhaps I’ve not heard the right songs. I’m not one to seek out stuff like this though; I stick with artists I know. And unless any of them have gone over to AI, I guess I’ve not heard much.
 
To me the biggest risk of AI is its use by people who don't have domain knowledge about the subject they are asking about.

I'm a programmer. So far I can't use AI in my work because I'm working on a project that deals with sensitive data, so I'm a bit behind the times in this regard.

But I have read so many reports of people using, and trusting, the work generated by AI. Like programmers who cannot explain what the code churned out by AI does, but will confidently try to push it to a shared repository. Which would mean they are not only bad at programming, but an unnecessary cog in the machine who could be replaced by a middle manager, or even the CEO.

This then extends to people asking about subjects in a search engine and trusting the AI-generated result over what e.g. Wikipedia or some other curated source says. Sometimes the AI result is right, sometimes it's way off. The worst case scenario is that poisonous berries thing, where it actually causes harm because people trusted a fundamentally untrustworthy system.

Then we get to the risks of people becoming AI-dependent. People already seem to be terrible at searching for information themselves, instead asking things on social media services that they could answer with a simple search engine query. Take this further, and if the AI services are ever not accessible... they'll likely panic because they no longer know how to make decisions without asking AI.
 
Finally we have the financial side of it. None of the AI companies are profitable at the moment. What happens when they enter the phase where they have to be profitable? That ChatGPT "29 € / user / month" billing for businesses is going to be raised through the roof. By that time many businesses will be entrenched with a particular AI product and won't have a good option to move to something else.

At the moment AI companies are backed by basically a circlejerk of "Nvidia invests in company A and A buys hardware from Nvidia" and billionaires pouring in cash hoping they'll hit it big. It has 1990s IT-bubble vibes to it, where eventually a lot of companies will collapse and the few that were doing something relevant will survive. As an Nvidia stock owner I'm laughing all the way to the bank until the day the bubble bursts.
 
Are you talking about AI music, modern country or modern pop? Hard to tell!

Fair point. 🤣 But I mean genres I listen to, mainly progressive rock. While someone not heavily into the genre might say “yup, sounds like that”, for me (and friends) it doesn’t sound right.
 
Despite the fact that my job is probably one of the first to go under the axe - since it involves writing and editing content - I really think that AI has the power to augment human intelligence in the way we used to see in those TV shows and movies with bionic humans.

A classic example is the Covid-19 pandemic, where drugs were developed in record time and the first modern-day plague of the 21st century was brought under control. I don't want to get into an argument about the pros and cons of vaccines, but they were developed using AI, even if the benefits are still being debated.

Another example is the way the process of content creation has completely changed. We're still in the early days, but it is conceivable that movies that used to cost hundreds of millions to make will cost just a few million going forward, and will look better than what CGI and other techniques can render today.

Sadly, I don't think that's the direction we're headed.

We all know who controls AI: it's massive corporations in the US, in China, in Japan, in South Korea, in Europe, in India. And these conglomerates are the ones who have the financial muscle to ensure that legislation passed by policymakers is to their benefit.

Where are you seeing the biggest example of that? It's all the content that is being used to train the AI models. It didn't just materialise. Everything written by everyone, every bit of music, every book, every play, every scientific research paper is being fed into these AIs.

And no one gets a red cent other than the conglomerates. And, by extension, politicians.

I think the problem that politicians do not see is the large-scale social unrest that job losses and wealth disparity are going to trigger.

To me, it seems likely that they are well aware of that scenario and are probably rushing to build armies of robots to quell any unrest.

At any rate, that would make a great movie.

Here's a picture I created using AI:

You Shall Not Pass.jpg



Looks fabulous, I thought, until I saw that both kids had one hand chopped off.

No fear, I went into the picture and gave the kids back their hands.


You Shall Not Pass 2.jpg



The merits of the picture notwithstanding - I call it "You shall not pass" - I would love to see a situation where artificial intelligence is designed to supercharge humanity, rather than replace it.
 
To me the biggest risk of AI is its use by people who don't have domain knowledge about the subject they are asking about.
But I have read so many reports of people using, and trusting, the work generated by AI. Like programmers who cannot explain what the code churned out by AI does, but will confidently try to push it to a shared repository. Which would mean they are not only bad at programming, but an unnecessary cog in the machine who could be replaced by a middle manager, or even the CEO.

Here there's a huge push for "everyone to be a builder" - so my teammate asked: "then the CEO will eventually just push a button?"


I'm a programmer. So far I can't use AI in my work because I'm working on a project that deals with sensitive data, so I'm a bit behind the times in this regard.
Completely opposite here - we have access to Claude (including Sonnet 4.5), Amazon Q, Cursor and Kiro - there's a befuddling amount of automation already happening, including end to end. It has to be supervised, but today we're at the point where complete unit tests are added automatically, starting from a JIRA ticket all the way to the code review stage - successfully, I may add.
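For the curious, here's a minimal sketch of the general shape of one such step - to be clear, this is not our actual pipeline, and the model ID, prompt wording, and function name are just illustrative assumptions - but it shows the core idea of turning a ticket plus source code into proposed unit tests:

```python
# Hypothetical sketch: ask an LLM to draft unit tests from a ticket
# description and a source file. Not a real pipeline; the model ID and
# prompt are placeholder assumptions.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def draft_unit_tests(ticket_text: str, source_code: str) -> str:
    """Return LLM-proposed pytest tests; a human still reviews them."""
    response = client.messages.create(
        model="claude-sonnet-4-5",  # placeholder; use whatever model you have access to
        max_tokens=2000,
        messages=[{
            "role": "user",
            "content": (
                "Write pytest unit tests covering the acceptance criteria "
                f"in this ticket.\n\nTicket:\n{ticket_text}\n\n"
                f"Code under test:\n{source_code}"
            ),
        }],
    )
    return response.content[0].text

# In a real setup the output would land in a branch for code review,
# not go straight into the main repository.
```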


This then extends to people asking about subjects in a search engine and trusting the AI-generated result over what e.g. Wikipedia or some other curated source says. Sometimes the AI result is right, sometimes it's way off. The worst case scenario is that poisonous berries thing, where it actually causes harm because people trusted a fundamentally untrustworthy system.
That's already happening - much like what happened with search, people just blindly trusting some random article on the web.

Then we get to the risks of people becoming AI-dependent. People already seem to be terrible at searching for information themselves, instead asking things on social media services that they could answer with a simple search engine query. Take this further, and if the AI services are ever not accessible... they'll likely panic because they no longer know how to make decisions without asking AI.
 
I think the problem that politicians do not see is the large-scale social unrest that job losses and wealth disparity are going to trigger.
This is my primary concern, and something I don't think @molul understood I was getting at (perhaps I wasn't very clear) in the Helix Stadium thread. I'm not an old man "afraid of new technology". But the difference with AI is the speed and scale at which it's going to affect jobs. Anyone not at least a little concerned about this is incredibly naive.
 
Despite the fact that my job is probably one of the first to go under the axe - since it involves writing and editing content - I really think that AI has the power to augment human intelligence in the way we used to see in those TV shows and movies with bionic humans.

A classic example is the Covid-19 pandemic, where drugs were developed in record time and the first modern-day plague of the 21st century was brought under control. I don't want to get into an argument about the pros and cons of vaccines, but they were developed using AI, even if the benefits are still being debated.
Yeah that's the kind of stuff where it is a sensible new tool to use. Data processing and trial-and-error at a level that humans are not able to do easily.

Another example is the way the process of content creation has completely changed. We're still in the early days, but it is conceivable that movies that used to cost hundreds of millions to make will cost just a few million going forward, and will look better than what CGI and other techniques can render today.
More likely, movies will simply start looking worse and wrong in many ways. The problem with a lot of AI generation is that it's not good at creating anything new, only rehashing existing stuff, and even that tends to come with a lot of glitches. There are no savings to be had if someone still has to go over it to correct all that.

The new Call of Duty game apparently uses AI for a lot of assets and it's still something that was expensive to make. They bank on players not caring a whole lot for actual quality.

Advertising is going straight for AI slop too, because they can now save the wages of ad creators and the CEO can put out their stupid ideas with a few prompts.

Sadly, I don't think that's the direction we're headed.

We all know who controls AI: it's massive corporations in the US, in China, in Japan, in South Korea, in Europe, in India. And these conglomerates are the ones who have the financial muscle to ensure that legislation passed by policymakers is to their benefit.

Where are you seeing the biggest example of that? It's all the content that is being used to train the AI models. It didn't just materialise. Everything written by everyone, every bit of music, every book, every play, every scientific research paper is being fed into these AIs.

And no one gets a red cent other than the conglomerates. And, by extension, politicians.

I think the problem that politicians do not see is the large-scale social unrest that job losses and wealth disparity are going to trigger.

To me, it seems likely that they are well aware of that scenario and are probably rushing to build armies of robots to quell any unrest.
Armies of robots seem too expensive. Armies of cheap drones? More likely.
 
Meanwhile, over at ChatGPT.

I'm considering whether I want a Solar E1.7 Priestess Explorer-type guitar, since it's on sale on their website. I wanted to see how it looks size-wise compared to my ESP/LTD KH-V V-style. I had already done this comparison in like 10 minutes using Affinity Photo, but decided to see what AI could do. Well...

Screenshot 2025-11-15 at 19.56.08.png
Screenshot 2025-11-15 at 19.59.29.png


Soo...basically do almost all the work? The work I already did myself in far less time? Plus the AI was able to recognize that I was talking about very specific guitar models here...
 
Finally we have the financial side of it. None of the AI companies are profitable at the moment. What happens when they enter the phase where they have to be profitable? That ChatGPT "29 € / user / month" billing for businesses is going to be raised through the roof. By that time many businesses will be entrenched with a particular AI product and won't have a good option to move to something else.

At the moment AI companies are backed by basically a circlejerk of "Nvidia invests in company A and A buys hardware from Nvidia" and billionaires pouring in cash hoping they'll hit it big. It has 1990s IT-bubble vibes to it, where eventually a lot of companies will collapse and the few that were doing something relevant will survive. As an Nvidia stock owner I'm laughing all the way to the bank until the day the bubble bursts.
Yes, it's a massive bubble.

Also, most of what is hyped as 'AI' (specifically generative code extruding text and images) is not and can never be the "takes over the world" AGI/superhuman intelligence. Nobody has any idea how to generate such a piece of software, if indeed one is possible at all. Many other things that are called 'AI' are just code and/or automation, no different than code that came before it.

We're in the midst of yet another hype cycle. There are a few uses for some aspects of generative code. Bubble con men are prone to point to certain interesting research applications of machine learning (ML) code, trying to make their scammy offerings seem better by association with them. Any such association is fabricated.

A Large Language Model is just a probabilistic text generator. People ascribe intelligence to it due to a bug in human cognition that assumes anything that generates text possesses intellect. Way back in the 1960s, people who should have known better believed a piece of code called ELIZA that answered text with text had developed consciousness.
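To make "probabilistic text generator" concrete, here's a toy sketch of mine - a hand-written bigram table standing in for a trained network, but the generation loop has the same basic shape as what an LLM does:

```python
# Toy "probabilistic text generator": pick each next word by sampling
# from a probability distribution conditioned on the previous word.
# Real LLMs do this over subword tokens, with a neural network
# estimating the distribution, but the loop is the same shape.
import random

# Hand-written bigram probabilities standing in for a trained model.
NEXT_WORD = {
    "<start>": [("the", 0.6), ("a", 0.4)],
    "the":     [("cat", 0.5), ("dog", 0.5)],
    "a":       [("cat", 0.5), ("dog", 0.5)],
    "cat":     [("sat", 0.7), ("<end>", 0.3)],
    "dog":     [("sat", 0.7), ("<end>", 0.3)],
    "sat":     [("<end>", 1.0)],
}

def generate() -> str:
    word, out = "<start>", []
    while True:
        words, probs = zip(*NEXT_WORD[word])
        word = random.choices(words, weights=probs, k=1)[0]
        if word == "<end>":
            return " ".join(out)
        out.append(word)

print(generate())  # e.g. "the cat sat" - fluent-looking, zero understanding
```

Swap the lookup table for billions of learned weights over subword tokens and you have an LLM; nothing in the loop ever involves anything resembling understanding.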
 
A Large Language Model is just a probabilistic text generator. People ascribe intelligence to it due to a bug in human cognition that assumes anything that generates text possesses intellect. Way back in the 1960s, people who should have known better believed a piece of code called ELIZA that answered text with text had developed consciousness.
It opens up interesting questions about humanity and communication though. LLMs are starting to reach a level where they are quite convincing, and hard to discern from someone who writes succinctly online. To the point that some people will think they are seeing AI answers if someone posts with bullet points or bolded text, even though the AI was just trained on all the literature written by humans.

Here in Finland there's a meme about a Russian propaganda bot/troll farmer on Twitter posting "NATO can't save Finland" translated to "NATO ei voi tallentaa Suomea", which is a literal translation...except it means "NATO can't save Finland to a <file/disk/hard drive>" in Finnish. :ROFLMAO:

With LLMs, this sort of influencing will become a lot more convincing.
 