I know there are other plausible reasons, but I thought I’d use this juicy title.

What does everyone think? As someone who works outside of tech I’m curious to hear the collective thoughts of the tech minds on Lemmy.

  • Night Monkey@sh.itjust.works · 1 year ago

    OpenAI is playing it way too safe. They’re afraid of hurting people’s feelings and won’t touch many topics. I’m waiting for an AI that has a sense of humor and isn’t programmed to be a coward.

      • taladar@sh.itjust.works · 1 year ago

        I think that is really the big, dirty secret of the AI industry right now: they are not that great at producing intentional outcomes. It is all a lot of trial and error, because nobody has a real understanding of how to change things incrementally without side effects in other parts of the behaviour.

        • donuts@kbin.social · 1 year ago

          It’s almost as if machine learning is a black box that you superimpose massive amounts of random data onto.

    • Sekrayray@lemmy.world (OP) · 1 year ago

      It’s probably all done in the name of alignment. We only really get one shot at making an AGI that doesn’t kill everyone (or do other weird unaligned stuff).

      • FaceDeer@kbin.social · 1 year ago

        I think we need to start distinguishing better between AGI and ASI. We may have only one shot at ASI (though that’s hard to predict, since it’s inherently unknowable at the current time), but AGI will be “just this guy, you know?” I don’t see why a murderous rogue AGI would be harder to put down than a murderous rogue human.

        • Sekrayray@lemmy.world (OP) · 1 year ago

          Absolutely true. Thanks for the distinction.

          I think maybe the argument could be made that AGIs could expedite the creation of a singularity, but you are correct in saying that the alignment problem matters less with rudimentary AGI.