Not sure if this is the best community to post in; please let me know if there’s a more appropriate one. AFAIK Aii@programming.dev is meant for news and articles only.

    • CeeBee_Eh@lemmy.world · +8/-1 · 16 hours ago

      Does it run on something that’s modelled on a neural net? Then it’s AI by definition.
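      For concreteness, "modelled on a neural net" boils down to weighted sums pushed through nonlinearities. A toy sketch of a single artificial neuron (the numbers are made up for illustration, not taken from any real model):

        import math

        def neuron(inputs, weights, bias):
            # One artificial neuron: weighted sum of the inputs, squashed by a sigmoid
            z = sum(x * w for x, w in zip(inputs, weights)) + bias
            return 1 / (1 + math.exp(-z))

        # Toy values purely for illustration; real networks stack millions of these
        print(neuron([0.5, 0.2], [0.8, -0.4], bias=0.1))  # prints a value between 0 and 1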

      I think you’re confusing AI with “AGI”.

              • NotASharkInAManSuit@lemmy.world · +1 · 4 hours ago

                I’m tired of this thread. It’s not an intelligence; I’ve already laid out my reasoning why. I’d just prefer we call it VI and stop acting like it can think.

            • CovfefeKills@lemmy.world · +3/-1 · 7 hours ago

              You are an idiot. I am not playing your game; I’ll just call you out on being an idiot. If you came across as genuine I would give you a history lesson, but you are just an asshole looking to pick a fight. If you could articulate how exactly knowing of John McCarthy and countless others and their contributions would change anything about what you are doing, I would be happy to google that for you.

              • NotASharkInAManSuit@lemmy.world · +1/-2 · edited · 6 hours ago

                Where did anyone from the Dartmouth folks identify “AI” as “anything that runs on a “neural network””?

                Edit: Also, I asked two very simple questions. Your response already tells me everything I need to know.

                Edit II: What fucking “game” was I playing by simply asking you to verify your claims?

                • CovfefeKills@lemmy.world · +1 · 6 hours ago

                  Where did anyone from the Dartmouth folks identify “AI” as “anything that runs on a “neural network””?

                  Edit: Also, I asked two very simple questions. Your response already tells me everything I need to know.

                  Edit II: What fucking “game” was I playing by asking you to verify your claims?

                  Lol dude, like I said, knowing who they were wouldn’t change what you are doing.

    • Feathercrown@lemmy.world · +11/-1 · 17 hours ago

      I don’t understand the desire to argue against the terms being used here when they fit both the common and the academic usage of “AI”.

      • NotASharkInAManSuit@lemmy.world · +3 · 14 hours ago

        There is no autonomy. It’s just algorithmic data blending, and we don’t actually know how it works. It would be far better described as virtual intelligence than artificial intelligence.

        • Feathercrown@lemmy.world · +1 · edited · 5 hours ago

          That kind of depends on how you define autonomy. Either way, I’m not sure I get how “virtual” is a better descriptor for implying a lack of it than “artificial” is.

          Also, by “we don’t actually know how it works”, do you mean that we can’t explain why a particular “decision” was made by an AI, or that we don’t know how AI works in general? If it’s the former, that’s generally true; if it’s the latter, I disagree (we know a lot, but still have a lot to learn).
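          To illustrate the difference: in the toy network below every weight is written out and every arithmetic step can be traced, so the mechanics are fully known. What’s hard is reading a human-level “why” for any particular output out of those numbers. The weights are hand-picked for this sketch (a tiny 2-2-1 network that approximates XOR), not taken from any real model.

            import math

            def sigmoid(z):
                return 1 / (1 + math.exp(-z))

            # Every parameter is visible here; nothing about the mechanics is hidden.
            W_hidden = [[6.0, 6.0],    # weights into hidden unit 0 (acts like OR)
                        [-6.0, -6.0]]  # weights into hidden unit 1 (acts like NAND)
            b_hidden = [-3.0, 9.0]
            W_out = [6.0, 6.0]
            b_out = -9.0

            def forward(x1, x2):
                h = [sigmoid(w[0] * x1 + w[1] * x2 + b) for w, b in zip(W_hidden, b_hidden)]
                return sigmoid(sum(wo * hi for wo, hi in zip(W_out, h)) + b_out)

            for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
                print(a, b, round(forward(a, b), 3))  # outputs close to the XOR truth table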

          • NotASharkInAManSuit@lemmy.world · +1 · edited · 4 hours ago

             Autonomy means something that can think and act according to its own will. “AI” has no will of its own; it can do only what it was programmed to do. There is no actual intelligence to it, it’s just a .exe. “Artificial intelligence” implies an intelligence created by means other than natural occurrence (an environmental or biological process), which is to say the same thing but made in a lab. “Virtual intelligence” implies a representation of what we take to be signs of intelligence inside a controlled space; it does not imply autonomy or a formed intelligence, and that is exactly what these things are.

             When AI generates an answer or an image through a neural network, we don’t know how it works out what it is doing. We can analyze the input and the output, but the exact path the gaggle of algorithms and probability calculations takes is deliberately randomized and so far has no reliable predictability, either because the program simply isn’t what we want it to be or because we just don’t understand it yet. There is a Kyle Hill video on generative AI that covers this better than I can, though skip the bits of rationalism he tends to drop into his videos these days.

             It ties into the whole relationship between intelligence and programming. A computer can only do exactly what we tell it to do, how we tell it to do it. So when we tell it to smash data together inside a mystery box built from intentionally unpredictable formulas, statistical analysis, algorithms, and data scraped into categories, it does just that. We know what pieces it can use and how it might use them, but not how it will actually use them, or what it will hallucinate when information meshes together without the program having any way to know what it is looking at, interpreting one kind of data as another and changing the context of the output.

             For a computer to have intelligence we would need a full, quantifiable grasp of intelligence and cognition. We have modeled neural networks after what we see in brain activity, but that only goes as far as what we can see the brain do. We know how a lot of the brain works on a mechanical level, yet we have no tangible grasp on how consciousness and intelligence work, or what they even are outside of subjective concept and experience. Before we could program intelligence and consciousness we would first have to know exactly what is being coded, down to the most minute, quantifiable detail. It’s a bit foolish to believe we can program something we can’t even grasp, and more foolish still to think it’s a good idea to blindly try.
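             On the “intentionally unpredictable” part: in generative models the variation usually comes from deliberately sampling the next token from a probability distribution, often controlled by a “temperature” setting. A rough sketch of that idea, with made-up tokens and scores rather than anything from a real model:

               import math
               import random

               def sample_next_token(scores, temperature=1.0):
                   # Turn raw scores into probabilities (softmax), then sample one token
                   scaled = [s / temperature for s in scores.values()]
                   m = max(scaled)
                   exps = [math.exp(s - m) for s in scaled]  # subtract max for numerical stability
                   probs = [e / sum(exps) for e in exps]
                   return random.choices(list(scores.keys()), weights=probs, k=1)[0]

               # Made-up scores for a handful of candidate next words
               scores = {"cat": 2.1, "dog": 1.9, "toaster": 0.3}

               # Same input, different outputs: the variation is sampled on purpose
               print([sample_next_token(scores, temperature=1.0) for _ in range(5)])
               # Lower temperature makes the choice nearly deterministic
               print([sample_next_token(scores, temperature=0.1) for _ in range(5)])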

             Also, look into rationalism and the Zizians; those are the people trying to sell this shit to you. AI, as we are attempting it right now, is literally cult shit based on a short story by Harlan Ellison. Granted, it’s a good read.