Not sure if this is the best community to post in; please let me know if there’s a more appropriate one. AFAIK Aii@programming.dev is meant for news and articles only.
Most arguments people make against AI are, in my opinion, actually arguments against capitalism. Honestly, I agree with all of them, too. Ecological impact? A result of the extractive logic of capitalism. Stagnant wages, unemployment, and economic despair for regular working people? Gains from AI being extracted by the wealthy elite. The fear shouldn’t be of the technology itself, but of the system that puts profit over people at all costs.
Data theft? Data should be a public good where authors are guaranteed a dignified life (decoupled from the sale of their labor).
Enshittification, AI overviews being shoved down all our throats? Tactics used to maximize profits by tricking us into believing AI products are useful.
I think you are right and yet so wrong.
My problems with AI aren’t unique, and I am no special snowflake who sees through the matrix while everyone else is distracted. I am just a dude. I really doubt the points I will bring up are anything but boring, generic arguments against AI.
My problem with data theft is not based on the concern that artists have rights to their work. I want them rewarded for their labor, but in this case that is not my primary issue. It is the hypocrisy of companies whose entire business is built on IP law stealing IP-protected work. I hate that the system is not ripping them to pieces the way Nintendo rips an online Super Smash tournament to pieces. It is so obviously “rules for thee, not for me”. You can claim that capitalism is causing that, but I really don’t think capitalism requires this shit. Sure, the rich and powerful are rich and powerful in capitalism because of capitalism, but special pleading for the elite has existed in every system we have tried.
I hate AI because people invest in the dumbest applications for it. LLMs are trash. Voice cloners??? Wtf. Image generation? Why?? But for medical applications, where we have comparably amazing, clean data, let’s invest a little bit. Meanwhile, x billions into LLMs, please.
I hate AI because the most brain-dead applications get the most usage, and people will tell you how bad it is but use it anyway. Then, since they obviously don’t have the computing power to run a decent local model, they just pipe personal or confidential information into an online service that tells you the data will be used for training, so it can leak back out to other people.
I hate AI because it is literally everything bad about society (e.g. nonconsensual nudes) and tech (e.g. data collectors) and their interaction.
AI is just a tool like anything else. What’s the saying again? “AI doesn’t kill people, capitalism kills people”?
I do AI research for climate and other things and it’s absolutely widely used for so many amazing things that objectively improve the world. It’s the gross profit-above-all incentives that have ruined “AI” (in quotes because the general public sees AI as chatbots and funny pictures, when it’s so much more).
The quotes are because “AI” doesn’t exist. There are many programs and algorithms being used in a variety of ways. But none of them are “intelligent”.
There is literally no intelligence in a climate model. It’s just data + statistics + compute. Please stop participating in the pseudo-scientific grift.
And this is where you show your ignorance. You’re using the colloquial definition of intelligence and applying it incorrectly.
By definition, a worm has intelligence. The academic, or biological, definition of intelligence is the ability to make decisions based on a set of available information. It doesn’t mean that something is “smart”, which is how you’re using it.
“Artificial Intelligence” is a specific definition we typically apply to an algorithm that’s been modelled after the real world structure and behaviour of neurons and how they process signals. We take large amounts of data to train it and it “learns” and “remembers” those specific things. Then when we ask it to process new data it can make an “intelligent” decision on what comes next. That’s how you use the word correctly.
Your ignorance didn’t make you right.
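The “train on examples, then decide on new inputs” loop described above can be sketched with a single artificial neuron (a perceptron). This is a minimal illustration, not anyone’s production model; the weights, learning rate, and the AND truth table are just chosen for demonstration.

```python
# A single artificial "neuron": a weighted sum of inputs passed through a
# threshold. After training on examples, it makes a decision for any input.

def predict(weights, bias, inputs):
    # Weighted sum of inputs plus bias, thresholded at zero.
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if total > 0 else 0

def train(samples, labels, epochs=20, lr=0.1):
    weights = [0.0, 0.0]
    bias = 0.0
    for _ in range(epochs):
        for inputs, label in zip(samples, labels):
            error = label - predict(weights, bias, inputs)
            # Perceptron rule: nudge weights in the direction that
            # reduces the error on this example.
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error
    return weights, bias

# "Learn" the logical AND function from its truth table.
samples = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [0, 0, 0, 1]
weights, bias = train(samples, labels)
print([predict(weights, bias, s) for s in samples])  # -> [0, 0, 0, 1]
```

Real networks stack millions of these units and use gradient-based learning rules, but the basic shape (adjust weights from data, then apply to new inputs) is the same.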
Except the Neural Net model doesn’t actually reproduce everything real, living neurons do. A mathematician in the 70s said, “hey what if this is how brains work?” He didn’t actually study brains, he just put forward a model. It’s a useful model. But it’s also an extreme misrepresentation to say it approximates actual neurons.
lol ok buddy you definitely know more than me
FWIW I think you’re conflating AGI with AI, maybe learn up a little
The term AGI had to be coined because the things they called AI weren’t actually AI. Artificial Intelligence originates from science fiction. It has no strict definition in computer science!
Maybe you learn up a little. Go read Isaac Asimov
We have the term AGI because we sometimes want to communicate something more specific, and AI is too broad of a term.
lol Again, you definitely know more than me
I always get such a kick reading comments from extremely overly confident people who know nothing about a topic that I’m an expert in, it’s really just peak social media entertainment
Please tell me you don’t actually think “AGI” is possible.
Are you talking about AI, or LLMs branded as AI?
Actual AI is accurate and efficient because it is designed for specific tasks. Unlike LLMs, which are just fancy autocomplete.
LLMs are part of AI, so I think you’re maybe confused. You can say anything is just fancy anything, that doesn’t really hold any weight. You are familiar with autocomplete, so you try to contextualize LLMs in your narrow understanding of this tech. That’s fine, but you should actually read up because the whole field is really neat.
Literally, LLMs are extensions of the techniques developed for autocomplete in phones. There’s a direct lineage. Same fundamental mathematics under the hood, but given a humongous scope.
That’s not true.
How is this untrue? Generative pre-training is literally training the model to predict what might come next in a given text.
That’s not what an LLM is. That’s part of how it works, but it’s not the whole process.
They never claimed that it was the whole thing. Only that it was part of it.
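For what it’s worth, the “predict what might come next” objective being argued about here can be shown with a toy next-word model. This one uses plain bigram counts rather than a neural network, and the corpus is made up, but the task it performs (greedy autocomplete) is exactly the prediction objective used in generative pre-training, just at a vastly smaller scale.

```python
# Toy next-word predictor: count which word follows which in a training
# text, then "autocomplete" by always picking the most frequent follower.
from collections import defaultdict, Counter

def train_bigrams(text):
    followers = defaultdict(Counter)
    words = text.split()
    for current, nxt in zip(words, words[1:]):
        followers[current][nxt] += 1
    return followers

def complete(followers, word, length=3):
    out = [word]
    for _ in range(length):
        if word not in followers:
            break
        # Greedy decoding: take the single most common next word.
        word = followers[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ran"
model = train_bigrams(corpus)
print(complete(model, "the"))  # -> "the cat sat on"
```

An LLM replaces the count table with a trained neural network and tokens instead of words, which is what lets it generalize to sequences it never saw, but the objective it is trained on is this same next-item prediction.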
Even LLMs are useful for coding, if you keep them in their autocomplete lane instead of expecting them to think for you.
Just don’t pay a capitalist for it; a tiny, power-efficient model that runs on your own PC is more than enough.
Yes technology can be useful but that doesn’t make it “intelligent.”
Seriously why are people still promoting auto-complete as “AI” at this point in time? It’s laughable.
You might keep hearing people say this, but that doesn’t make it true (and it isn’t true).
FTFY.
I’ve seen it said somewhere that, with the advent of AI, society has to embrace UBI or perish, and while that’s an exaggeration it does basically get the point across.
I don’t think that AI is as disruptive as the steam engine, or the automatic loom, or the tractor. Yes, some people will lose their jobs (plenty of people already have), but the amount of work that could be done to benefit society is near infinite. And if it weren’t, then we could all just work 5% fewer hours to make room for 5% more people to be employed. Unemployment only exists in our current system to threaten the employed with.
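The work-sharing arithmetic in that comment checks out as a back-of-envelope calculation. The numbers below are purely illustrative, assuming the total amount of work demanded stays fixed:

```python
# If total work demanded stays fixed, trimming everyone's hours spreads
# the same work across more people. Illustrative numbers only.
workers = 1000
hours_per_week = 40
total_hours = workers * hours_per_week   # 40,000 hours of work to cover

shorter_week = hours_per_week * 0.95     # everyone works 5% less: 38 hours
workers_needed = total_hours / shorter_week
print(round(workers_needed))  # -> 1053, roughly 5% more jobs for the same work
```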
You might be right about the relative impact of AI alone, but there are like a dozen different problems threatening the job market all at once. Added up, I do think we are heading towards a future where we have to start rethinking how our society handles employment.
A world where robots do most of the hard work for us ought to be a utopia, but as you say, capitalism uses unemployment as a threat. If you can’t get a job, you starve and die. That has to change in a world where we’ll have far more people than jobs.
And I don’t think it’s as simple as just having us all work fewer hours: every technological advancement that was once said to lead to shorter working hours has instead only led to those at the top pocketing the surplus labor.
Yes, I 100% agree with you. The ‘working less’ solution was just meant as a simple thought exercise to show that even a relatively small change could eliminate this huge problem. So the fact that the system works this way is not an accident.