• zeca@lemmy.eco.br · 2 days ago

    They could be programmed to do some double/triple checking, and return “I don’t know” when the checks come back negative. I guess that would compromise the oracle-like image that their parent companies seem to quietly push onto them.

    • sip@programming.dev · 2 days ago (edited)

      They don’t check. You gotta think in statistical terms.

      Based on the previously input words (tokens actually, but I’ll use words for the sake of simplicity), which is the system prompt + user prompt, the LLM generates a list of the most plausible next words and picks one from the top few. How far down the list of less likely words it is willing to go depends on the temperature setting. Then it does the same for the next word, and the next, etc., each time looking back at everything written so far.
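      In toy Python terms, that picking step looks roughly like this (the scores are made up; a real model assigns one to every token in a huge vocabulary):

      ```python
      import math
      import random

      # Made-up next-word scores ("logits") for some context; higher = more likely.
      logits = {"Paris": 5.2, "Lyon": 3.0, "France": 2.5, "banana": -1.0}

      def pick_next_word(logits, temperature=0.8):
          # Lower temperature -> almost always the top word;
          # higher temperature -> more willing to go further down the list.
          scaled = {word: score / temperature for word, score in logits.items()}
          total = sum(math.exp(s) for s in scaled.values())
          probs = {word: math.exp(s) / total for word, s in scaled.items()}
          words, weights = zip(*probs.items())
          return random.choices(words, weights=weights)[0]

      print(pick_next_word(logits))  # usually "Paris", sometimes something lower down
      ```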

      I haven’t checked what the “reasoning” step in reasoning models actually does, but I assume it just expands the user prompt to fill in details the LLM thinks the user was too lazy to spell out, then works on the final answer.

      So basically it’s like repeatedly tapping the next-word prediction on your phone keyboard.

      • zeca@lemmy.eco.br · 1 day ago

        The chatbots are not just LLMs, though. They run scripts in which some steps are queries to an LLM.
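        Roughly like this, where query_llm() is just a made-up stand-in for whatever model API the vendor actually uses:

        ```python
        def query_llm(prompt: str) -> str:
            """Stand-in for a call to the underlying model API (hypothetical)."""
            raise NotImplementedError

        def chatbot_turn(history: list[str], user_message: str) -> str:
            # One step of the script builds a prompt from fixed instructions + the conversation.
            prompt = "You are a helpful assistant.\n" + "\n".join(history) + "\nUser: " + user_message
            # Another step is the actual LLM query.
            draft = query_llm(prompt)
            # Other steps are ordinary code: filters, tool calls, formatting, etc.
            if "forbidden topic" in draft.lower():
                return "I can't help with that."
            return draft
        ```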

          • zeca@lemmy.eco.br · 1 day ago (edited)

            That the script could incorporate some checking mechanisms and return an “I don’t know” when the LLM’s answer fails some tests.

            They already do some of that, but for other purposes: censoring, for example, or, per recent news, Grok looking up Musk’s opinions before answering certain questions, or calling an actual calculator to get arithmetic right, and so on…

            They could make the LLM produce an answer A, then look the question up on Google and ask the LLM to compare answer A with the top Google results, looking for inconsistencies, and return “I don’t know” if it’s too inconsistent. It’s not a rigorous test, but it’s something, and I’m sure the actual devs of those chatbots could come up with something much better than my half-baked idea.
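            In sketch form, with query_llm() and web_search() as made-up stand-ins for the model API and a search backend:

            ```python
            def query_llm(prompt: str) -> str:
                """Stand-in for the chatbot's underlying model API (hypothetical)."""
                raise NotImplementedError

            def web_search(query: str) -> list[str]:
                """Stand-in for a search lookup returning result snippets (hypothetical)."""
                raise NotImplementedError

            def answer_with_check(question: str) -> str:
                # Step 1: get the model's own answer A.
                answer_a = query_llm(question)
                # Step 2: grab a few search results for the same question.
                snippets = "\n".join(web_search(question))
                # Step 3: ask the model whether A is consistent with the snippets.
                verdict = query_llm(
                    f"Question: {question}\nAnswer: {answer_a}\n"
                    f"Search results:\n{snippets}\n"
                    "Reply with one word: CONSISTENT or INCONSISTENT."
                )
                # Step 4: refuse instead of guessing when the check fails.
                return answer_a if "INCONSISTENT" not in verdict.upper() else "I don't know."
            ```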