I know there are other plausible reasons, but I thought I’d use this juicy title.
What does everyone think? As someone who works outside of tech, I’m curious to hear the collective thoughts of the tech minds on Lemmy.
OpenAI is playing it way too safe. They’re afraid of hurting people’s feelings and won’t touch many topics. Waiting for an AI that has a sense of humor and isn’t programmed to be a coward.
They’ve tried that; the robots act like your average racist edgelord teen.
I think that’s really the big, dirty secret of the AI industry right now: they aren’t that great at producing intentional outcomes. It’s all a lot of trial and error, because nobody has a real understanding of how to change things incrementally without side effects in other parts of the behaviour.
It’s almost as if machine learning is a black box that you superimpose massive amounts of random data onto.
This sounds like the take of an average Ben Shapiro viewer
Ben Shapiro is a moron.
Just think what that says about you for a sec.
Because I want an AI that isn’t gonna censor itself? 😂
No, because you come off like a Ben Shapiro fan.
Microsoft’s Tay AI sounds right up your alley.
Well, you can always use GPT-4chan…
What monster made that? Glad they thought better of it.
Probably all done in the name of alignment. We only really get one shot at making an AGI that doesn’t kill everyone (or do other weird, unaligned stuff).
I think we need to start distinguishing better between AGI and ASI. We may only have one shot at ASI (though that’s hard to predict, since it’s inherently unknowable at the current time), but AGI will be “just this guy, you know?” I don’t see why a murderous rogue AGI would be harder to put down than a murderous rogue human.
Absolutely true. Thanks for the distinction.
I think maybe the argument could be made that AGIs could expedite the arrival of the singularity, but you’re correct in saying that the alignment problem matters less for rudimentary AGI.