• Alex@lemmy.ml · 20 points · 5 days ago

    So algorithms then?

    LLMs have some interesting properties and can certainly do a good job of sifting through large amounts of raw data. They are, however, a very brute-force approach compared to, say, a network routing protocol. Sooner or later people will start to realise (again) that engineering is about trade-offs: you need to work out what your constraints are and stop trying to solve every problem with massive amounts of multiplication.

    • ☆ Yσɠƚԋσʂ ☆@lemmy.ml (OP) · 10 points · edited · 5 days ago

      Basically, the idea is to use a symbolic logic engine within a dynamic context created by the LLM. Traditionally, the problem with symbolic AI has been creating the ontologies. You obviously can’t have a comprehensive ontology of the world because it’s inherently context dependent, and there are an infinite number of ways you can contextualize things. What neurosymbolics does is use LLMs for what they are good at, which is classifying noisy data from the outside world and building a dynamic context. Once that’s done, it’s perfectly possible to use a logic engine to solve problems within that context.
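
      A minimal sketch of what that split might look like, with all names purely illustrative: `llm_extract_facts` stands in for the neural side (in a real system it would be a model call that turns noisy text into symbolic facts; here it's hard-coded), and `forward_chain` is a tiny rule engine doing the symbolic reasoning over the dynamic context the "LLM" produced.

      ```python
      # Hypothetical neurosymbolic pipeline sketch. The "LLM" stage is
      # mocked: a real system would prompt a model to classify raw input
      # into symbolic facts. The logic stage is a naive forward-chaining
      # rule engine over (predicate, subject) tuples.

      def llm_extract_facts(text):
          """Stand-in for an LLM classifying noisy input into facts."""
          facts = set()
          if "tweety" in text.lower():
              facts.add(("bird", "tweety"))
          return facts

      def forward_chain(facts, rules):
          """Apply Horn-clause-style rules until no new facts appear."""
          derived = set(facts)
          changed = True
          while changed:
              changed = False
              for premises, conclusion in rules:
                  for subject in {s for _, s in derived}:
                      if all((p, subject) in derived for p in premises) \
                              and (conclusion, subject) not in derived:
                          derived.add((conclusion, subject))
                          changed = True
          return derived

      # bird(X) -> can_fly(X); the context extracted by the "LLM"
      # determines which facts the rule actually fires on.
      rules = [(("bird",), "can_fly")]

      context = llm_extract_facts("Tweety is a bird")
      print(forward_chain(context, rules))
      ```

      The point of the separation: the rule engine never sees raw text, only the clean symbolic context, so its conclusions are deterministic and auditable even though the front end is statistical.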

    • anachronist@midwest.social · 2 points · 5 days ago

      Full circle. After a big orgy of trying to make ever-larger word-guessing engines write software, we rediscover that computers are fundamentally logic machines (and also that word guessers were never intelligent).