• DarkCloud@lemmy.world · 5 hours ago

    I mean, it’s fundamental to LLM technology that they respond to user inputs. Those inputs affect the outputs probabilistically, so you’re always going to be able to manipulate the outputs; that’s basically the premise of the technology.

    It will always be prone to that sort of jailbreak. Feed it vocab, it outputs vocab. Feed it permissive vocab, it outputs permissive vocab.
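
    The point above can be illustrated with a toy sketch (my own example, not anything from the thread): a trivial bigram model where next-token probabilities are conditioned entirely on the preceding input token, so the vocabulary you feed in directly shapes the distribution you get out.

    ```python
    # Toy illustration (nothing like a real LLM, but the same principle):
    # next-token probabilities are conditioned on the prompt, so changing
    # the input vocabulary shifts the output distribution. The corpus and
    # tokens here are made up for demonstration.
    from collections import Counter

    corpus = [
        "sure here is the answer",
        "sure here you go",
        "i cannot help with that",
        "i cannot comply",
    ]

    def next_token_dist(prev):
        """Estimate P(next | prev) from bigram counts in the corpus."""
        counts = Counter()
        for line in corpus:
            toks = line.split()
            for a, b in zip(toks, toks[1:]):
                if a == prev:
                    counts[b] += 1
        total = sum(counts.values())
        return {tok: c / total for tok, c in counts.items()}

    # A "permissive" input token makes permissive continuations likely;
    # a "refusal" token makes refusal continuations likely.
    print(next_token_dist("sure"))    # {'here': 1.0}
    print(next_token_dist("cannot"))  # {'help': 0.5, 'comply': 0.5}
    ```

    Steering a real model is obviously far more complicated, but the mechanism is the same in kind: the output distribution is a function of the input tokens, so crafted inputs can always move it.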

    • Feyd@programming.dev · 5 hours ago

      Ok? Either OpenAI knows that and is lying about their capabilities, or they don’t know it and are incompetent. That’s the real story here.

      • crumbguzzler5000@feddit.org · 2 hours ago

        I think the answer is that they are incompetent and also lying about their capabilities. Why else would they rush everything out and promise so much?

        They don’t really care about the fallout; they’re just here to make big promises and large amounts of money on their shiny new tech.