• Wirlocke@lemmy.blahaj.zone · ↑ 137 · 5 months ago

    Don’t know if this has been fixed, but Gemini was telling people it’s unethical to teach people C++ or memory management.

    Because C++ is considered “memory unsafe”, but Gemini took that literally and decided it was too unsafe to teach.
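
    For anyone wondering what “memory unsafe” actually means here, a classic illustration (my own sketch, not anything Gemini said) is that C++ will happily compile a use-after-free like this:

    ```cpp
    #include <iostream>
    #include <vector>

    int main() {
        std::vector<int> v = {1, 2, 3};
        int* p = &v[0];   // pointer into the vector's current buffer

        v.push_back(4);   // may reallocate the buffer, leaving p dangling

        // Undefined behaviour: p can now point at freed memory, and neither
        // the compiler nor the runtime will stop you from reading through it.
        std::cout << *p << "\n";
    }
    ```

    The equivalent pattern in Rust is rejected by the borrow checker, which is the actual “memory safe vs. memory unsafe” distinction that Gemini mangled into “too unsafe to teach”.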

    • RustyNova@lemmy.world · ↑ 77 · 5 months ago

      proud Rust developer

      Joke aside, every time people gush over AI, I always have to remind them that AI is just a puppy that learnt how to maximise treats and doesn’t actually understand shit. And this is a perfectly good example.

      • gravitas_deficiency@sh.itjust.works · ↑ 13 · 5 months ago

        Right??? I’m continually floored by how many genuinely smart people I come across who ignore this concept, which is one of the biggest reasons I just don’t trust LLMs in a general sense. Like sure, I can use them fairly effectively, but the vast majority of people who interact with LLMs don’t approach them with an appropriate level of caution.

        And that doesn’t even touch on the huge ethical (and legal) issues around how LLM devs acquire and use training data.

      • barsoap@lemm.ee · ↑ 8 ↓ 1 · 5 months ago

        Dogs are way more intelligent than that. LLM tech is basically a way to quickly breed fruit flies that fly right or left when they see a particular pattern.

      • Terrasque@infosec.pub · ↑ 5 ↓ 1 · 5 months ago

        I mean, I totally agree with you. But that also kinda ignores all the useful things a dog can be trained to do.

        • RustyNova@lemmy.world · ↑ 5 · 5 months ago

          Oh, I’m not saying it can’t be trained well. That’s not my point.

          Of course dogs can be trained to sniff out drugs or find people; the gist of it is that they were trained for this behaviour and might not understand it the way we do.

          A good example is research on cancer-sniffing dogs, which ran into problems with false positives.

          • barsoap@lemm.ee · ↑ 3 · 5 months ago

            The false-positive problem actually works in favour of the dogs here: their noses are excellent, so they know exactly whether there are drugs there or not. They also know that the humans can’t tell, so it’s easy to get a treat regardless. And they also know not to overdo it.

            Cats are even more complicated; it figures that they’re by and large uninterested in being studied or in proving anything to you.