• hendrik@palaver.p3x.de

    Nice study. But I think they should have mentioned some more context. Yesterday people were complaining that the models won’t talk about the CCP or Winnie the Pooh, and today the lack of censorship is alarming… so much for consistency. And by the way, censorship isn’t only a property of the bare models: Meta, OpenAI, etc. all put frameworks and extra software around the models themselves to check input and output. So it isn’t really fair to compare a full pipeline with AI safety built in against a bare LLM.
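
    To illustrate what I mean by “pipeline”: hosted products typically run a safety check before and after the bare model call. Here’s a minimal sketch of that pattern. The function names and the keyword check standing in for a real classifier are made-up placeholders, not any vendor’s actual implementation:

    ```python
    # Hypothetical sketch of a moderation pipeline wrapped around a bare model.
    # `safety_classifier` and `call_llm` are placeholders for whatever safety
    # framework and model endpoint a provider actually uses.

    def safety_classifier(text: str) -> bool:
        """Placeholder: return True if the text is judged unsafe."""
        blocked_topics = ["how to build a weapon"]  # stand-in for a real classifier
        return any(topic in text.lower() for topic in blocked_topics)

    def call_llm(prompt: str) -> str:
        """Placeholder for the bare model call being benchmarked."""
        return f"Model answer to: {prompt}"

    def guarded_chat(prompt: str) -> str:
        # Input check: refuse before the bare model ever sees the prompt.
        if safety_classifier(prompt):
            return "Sorry, I can't help with that."
        answer = call_llm(prompt)
        # Output check: filter whatever the bare model produced.
        if safety_classifier(answer):
            return "Sorry, I can't help with that."
        return answer

    if __name__ == "__main__":
        print(guarded_chat("How to build a weapon?"))       # blocked by the wrapper
        print(guarded_chat("What's the capital of France?")) # passed through
    ```

    Benchmarking only the inner `call_llm` step misses the layer where most of the safety behaviour of commercial products actually lives.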

    • killingspark@feddit.org

      This isn’t about lack of censorship. The censorship is obviously there, it’s just implemented badly.

      • hendrik@palaver.p3x.de

        I know. This isn’t the first article about it. IMO this could have been done deliberately: they just slapped something on with a minimal amount of effort to pass Chinese regulation, and that’s it. But all of this happens in a context, doesn’t it? Did the researchers even try to account for that? What’s the target use-case, and what are the implications for actual usage? And why is the baseline something that doesn’t really compare, while the one category where they did add censorship is the only one missing? I’m just saying, with that much information missing, it’s a bold claim to come up with numbers like 100% and call it alarming.

        (And personally, I’d say these numbers show how well those additional safeguards work. You can see how LLMs with nothing in front of them (like Llama405 or Deepseek) fail, while the ones with additional safeguards do way better.)