• irmoz@lemmy.world · 2 days ago

    I’ll preface by saying I agree that AI doesn’t really “know” anything and is just a randomised Chinese Room. However…

    Acting like the entire history of the philosophy of knowledge is just some attempt to make “knowing” seem more nuanced is extremely arrogant. The question of what knowledge is isn’t just relevant to the discussion of AI; it’s fundamental to understanding how our own minds work. When you form arguments about how AI doesn’t know things, you’re basing them purely on the human experience of knowing things. But that calls into question how you can be sure you even know anything at all. We can’t just take it for granted that our perceptions are a perfect example of knowledge; we have to interrogate that and see what it is we can do that AIs can’t, or worse, discover that our assumptions about knowledge, and perhaps even about our own abilities, are flawed.

    • snooggums@lemmy.world · 2 days ago

      Acting like the entire history of the philosophy of knowledge is just some attempt to make “knowing” seem more nuanced is extremely arrogant.

      That is not what I said. In fact, it is the opposite of what I said.

      I said that treating the discussion of LLMs as a philosophical one gives “knowing” more nuance than it deserves in that context.

      • irmoz@lemmy.world · 2 days ago

        I never said discussing LLMs was itself philosophical. I said that as soon as you ask the question “but does it really know?” then you are immediately entering the territory of the theory of knowledge, whether you’re talking about humans, about dogs, about bees, or, yes, about AI.