TSJMajesty
Rock Star
Messages: 7,728
Is AI far enough along to where it could tell me when my job will be replaced?
I’ve seen a lot of people post about how amazing the new AI tech is for creating music, but then you hear the fruits of their labour - like that country song that went number one - and I’m sorry, but it all just sounds like absolute tripe to me.
It’s completely basic and hackneyed, with obvious glitches in the audio - most notably the vocals, which completely give the game away that it’s AI - and I just don’t get it.
Can anybody here post any examples of AI generated music that even comes close to quality music that’s produced by humans?
Guess I’ll put AI music back into the “load of old shite” category for now then!
Anything I’ve purposely listened to sounded weird, generic, and lacking much feeling.
Are you talking about AI music, modern country or modern pop? Hard to tell!
To me the biggest risk of AI is it being used by people who don't have the domain knowledge about the subject they are asking about.
But I have read so many reports of people using, and trusting, the work generated by AI. Like programmers who cannot explain what the code churned out by AI does, but will confidently try to push it to a shared repository. Which would mean they are not only bad at programming, but also an unnecessary cog in the machine who could be replaced by a middle manager, or even the CEO.
I'm a programmer. So far I can't use AI in my work because I'm working on a project that deals with sensitive data, so I'm a bit behind the times in this regard.

Completely opposite here - we have access to Claude (including Sonnet 4.5), Amazon Q, Cursor, Kiro - there's a befuddling amount of automation already happening, including end to end. It has to be supervised, but today we are at the point that complete unit tests are being added automatically, end to end, starting from a JIRA ticket all the way to the code review stage - successfully, I may add.
That already is happening - much like what happened with search, just blindly trusting some random article on the web.

This then extends to people asking about subjects in a search engine and trusting the AI-generated result over what e.g. Wikipedia or some other curated source says. Sometimes the AI result is right, sometimes it's way off. The worst case scenario is that poisonous berries thing, where it actually causes harm because people trusted a fundamentally untrustworthy system.
Then we get to the risks of people becoming AI-dependent. People already seem to be terrible at searching for information themselves, instead asking things on social media that they could answer with a simple search engine query. Take this further, and if the AI services ever become inaccessible, they'll likely panic because they no longer know how to make decisions without asking an AI.
This is what my primary concern is, and something I don't think @molul understood I was getting at (perhaps I wasn't very clear) in the Helix Stadium thread. I'm not an old man "afraid of new technology". But the difference with AI is the speed and scale at which it's going to affect jobs. Anyone not at least a little concerned about this is incredibly naive.
Despite the fact that my job is probably one of the first to go under the axe - since it involves writing and editing content - I really think that AI has the power to augment human intelligence in the way we used to see in those TV shows and movies with bionic humans.

A classic example is the Covid-19 pandemic, where drugs were developed in record time and the first plague of the 21st century was brought under control. I don't want to get into an argument about the pros and cons of vaccines, but they were developed using AI, even if the benefits are still being debated.

Another example is the way the process of content creation has completely changed. We're still in the early days, but it is conceivable that movies that used to cost hundreds of millions to make will cost just a few million going forward and will look better than what CGI and other techniques can render today.

Yeah, that's the kind of stuff where it is a sensible new tool to use: data processing and trial-and-error at a level that humans are not able to do easily.

Sadly, I don't think that's the direction we're headed. More likely is that movies will start simply looking worse and wrong in many ways. The problem with a lot of AI generation is that it's not good at creating anything new, only rehashing existing material, and even that tends to come with a lot of glitches. There are no savings to be had if someone still has to go over it to correct all that.
We all know who controls AI: massive corporations in the US, in China, in Japan, in South Korea, in Europe, in India. And these conglomerates are the ones with the financial muscle to ensure that legislation passed by policymakers is to their benefit.
Where are you seeing the biggest example of that? It's all the content that is being used to train the AI models. It didn't just materialise. Everything written by everyone, every bit of music, every book, every play, every scientific research paper is being fed into these AIs.
And no one gets a red cent other than the conglomerates. And, by extension, politicians.
I think the problem that politicians do not see is the large-scale social unrest that job losses and wealth disparity is going to trigger.
To me, it seems likely that they are well aware of that scenario and are probably rushing to build armies of robots to quell any unrest.
Armies of robots seems like it's too expensive. Armies of cheap drones? More likely.
ChatGPT is one of the worst AI choices for images.

Clearly. I tried giving it links to the guitars...and it then made an image much like the ones above, but one of them was a V.
Yes, it's a massive bubble.
Finally, we have the financial side of it. None of the AI companies are profitable at the moment. What happens when they enter the phase where they have to be profitable? That ChatGPT "29 € / user / month" billing for businesses is going to be raised through the roof. By that time, many businesses will be entrenched in a particular AI product and won't have a good option to move to something else.
At the moment, AI companies are backed by basically a circlejerk of "Nvidia invests in company A and A buys hardware from Nvidia", plus billionaires pouring in cash hoping they'll hit it big. It has 1990s IT-bubble vibes to it: eventually a lot of companies will collapse and the few that were doing something relevant will survive. As an Nvidia stock owner, I'm laughing all the way to the bank until the day the bubble bursts.
A Large Language Model is just a probabilistic text generator. People ascribe intelligence to it due to a bug in human cognition that assumes anything that generates text must be thinking. Way back in the 1960s, people who should have known better believed that a piece of code called ELIZA, which answered text with text, had developed consciousness.

It opens up interesting questions about humanity and communication, though. LLMs are starting to be at a level where they are quite convincing and hard to discern from someone who writes succinctly online - to the point that some people will think they are seeing AI answers if someone posts with bullet points or bolded text, even though the AI is just trained on all the literature written by humans.
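To make the "probabilistic text generator" point concrete, here's a toy sketch of the idea: a bigram Markov chain that picks each next word by sampling from the words that followed the current one in its training text. This is nothing like a real LLM in scale or mechanism (LLMs use neural networks over huge token vocabularies, not raw counts), and all the names here are made up for illustration - but the core loop, "sample the next token from a learned distribution, repeat", is the same.

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """For each word, record every word that followed it in the training text."""
    words = text.split()
    follows = defaultdict(list)
    for cur, nxt in zip(words, words[1:]):
        follows[cur].append(nxt)
    return follows

def generate(follows, start, length=8, seed=0):
    """Sample a chain of words, each drawn from the successors of the previous one."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        successors = follows.get(out[-1])
        if not successors:  # dead end: no word ever followed this one
            break
        out.append(rng.choice(successors))
    return " ".join(out)

corpus = "the cat sat on the mat and the dog sat on the rug"
model = train_bigrams(corpus)
print(generate(model, "the"))
```

The output is locally plausible (every adjacent word pair occurred in the training text) but has no understanding behind it - which is the poster's point, just at a vastly smaller scale.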