Asking for a friend?
No. I was just interested because I’ve seen a lot of headlines about how Grok creates all these problematic images of women, minors, etc., but not about other, similar generative AI software. My understanding so far has been that there’s no real way to fully safeguard generative AI, so I was wondering whether this is a Grok-only problem and, if so, how the others are avoiding it.