Not sure if this is the best community to post in; please let me know if there’s a more appropriate one. AFAIK Aii@programming.dev is meant for news and articles only.
I don’t understand the desire to argue against the terms being used here when it fits both the common and academic usages of “AI”
There is no autonomy. It’s just algorithmic data blending, and we don’t actually know how it works. It would be far better described as virtual intelligence than artificial intelligence.
That kind of depends how you define autonomy. Whichever way, I’m not sure I get how “virtual” is a better descriptor for implying a lack of it than “artificial” is.
Also by “we don’t actually know how it works” do you mean that we can’t explain why a particular “decision” was made by an AI, or do you mean that we don’t know how AI works in general? If it’s the first that’s generally true, if it’s the second I disagree (we know a lot, but still have a lot to learn).
Autonomy means something that can think and act according to its own will. “AI” has no will of its own; it can do strictly what it was programmed to do. There is no actual intelligence to it, it’s just an .exe. “Artificial intelligence” implies an intelligence created by means other than natural occurrence (an environmental or biological process), which is to say the same thing but made in a lab. “Virtual intelligence” implies a representation of what we take to be signs of intelligence inside a controlled space; it does not imply autonomy or a formed intelligence, which is exactly what these things are.
When AI generates an answer or an image through a neural network, we don’t know how it works out what it’s doing. We can analyze the input and the output, but the exact behavior of the tangle of algorithms and probability calculations is inherently designed to be random, and so far it has no reliable predictability, either because the program simply isn’t what we want it to be or because we just don’t understand it yet. There’s a Kyle Hill video on generative AI that covers this better than I can, though avoid the bits of rationalism he tends to drop into his videos these days.

It ties into the whole relationship between intelligence and programming: a computer can only do exactly what we tell it to do, how we tell it to do it. So when we tell it to smash data together through a mystery box built from intentionally unpredictable formulas, held together by mathematical analysis, algorithms, and data scraped into categories, it does just that. We know what pieces it can use and how it might use them, but not how it will actually use them, nor what it might hallucinate when information meshes together without the program having any way to actually know what it’s looking at, or when it interprets one form of data as another in a way that changes the context of the output.

For a computer to have intelligence, we would need a full and quantifiable grasp of intelligence and cognition. We have modeled neural networks on what we observe in brain activity, but that only goes as far as what we can see the brain do. We know how much of the brain works on a mechanical level, yet we have no tangible grasp of how consciousness and intelligence work, nor what they are, outside of subjective concept and experience.
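To make the “intentionally unpredictable formulas” point concrete, here’s a minimal sketch (hypothetical, standard library only, not any real model’s code) of the last step of a generative model: the scores over possible next tokens are fixed, inspectable numbers, but the output is deliberately sampled at random from them, so runs differ even though nothing “decides” anything.

```python
import math
import random

def softmax(logits, temperature=1.0):
    """Turn raw scores into probabilities; higher temperature flattens them."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def sample_token(tokens, logits, temperature=1.0, rng=random):
    """Draw one token according to the softmax distribution."""
    probs = softmax(logits, temperature)
    return rng.choices(tokens, weights=probs, k=1)[0]

# Toy vocabulary and scores (in a real model, logits come from billions of
# learned weights that nobody can read off as a human-meaningful formula).
tokens = ["cat", "dog", "car"]
logits = [2.0, 1.5, 0.1]

rng = random.Random()  # unseeded: different tokens on different runs
print([sample_token(tokens, logits, rng=rng) for _ in range(5)])
```

The machinery is fully deterministic given a seed, which is the sense in which the computer “only does what we tell it”; the unpredictability in practice comes from the sampling step plus the fact that the weights producing the logits aren’t human-interpretable.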
Before we could program intelligence and consciousness, we would first have to know exactly what is being coded and programmed, down to the most minute, quantifiable detail. It’s a bit foolish to believe we can program something we can’t even grasp, and more foolish still to think blindly trying is a good idea.
Also, look into rationalism and the Zizians; those are the people trying to sell this shit to you. AI, as we’re attempting it right now, is literally cult shit based on a short story by Harlan Ellison. Granted, it’s a good read.