On a sidenote: I fucking LOVE !degenerate@lemmynsfw.com 🤩
Do other generative AI services also do this? Or do they have functioning safeguards?
Usually there are safeguards against nsfw content on public services.
Asking for a friend?
No. I was just interested because I’ve seen a lot of headlines about how Grok creates all these problematic images of women, minors etc., but not about other similar generative AI software. But my understanding so far has been that there’s no real way to safeguard generative AI at all, so I was wondering whether this was a Grok-only problem, and if so, how others were avoiding it.
🤨
AI makes photoshopping trivially easy, so of course this is a bigger problem now than before. It's not going to go away, so society will adapt. Humans can live perfectly fine in the jungle without covering up their unmentionables, so in a few decades this will be a non-issue, I would guess.
These people and journalists don't seem to remember celebrity photoshops. Or maybe they do and want to keep feeding the outrage machine.
Pardon me if I duck out of my Two Minutes Hate for today.
I don’t understand… They don’t seem to remember it because they’re not mentioning it in their stories? Why would mentioning it not count as “feeding the outrage machine”?
Help me understand.
Nobody is bitching about Photoshop, a thing that has existed for decades and that almost anybody can use to put a person's face on a naked body or whatever situation they want. Suddenly, journalists are inventing a new moral panic around generative AI, saying it can do whatever they want with pictures, despite the fact that this technology already existed; it's just a little bit easier now. It's not a new problem, so reporting on it is just shifting the blame to a new boogeyman.
See, the magic formula is to slap the word "AI" on a headline and boom, instant attention! It doesn't matter what it's about, whether it's a new problem, or whether it's only slightly related to the root cause… As long as you're talking shit about every angle of AI in the most extreme ways possible, mission accomplished. It is outrage reporting because there are no solutions offered and no historical context. The sole purpose is outrage, because outrage generates clicks. It's too hard for journalists to think outside the outrage box.
You might be interested in 404 Media's coverage. They've been reporting on it for a while – it's been part of Sam Cole's beat since the Vice days.
I thought the photo in this thumbnail looked familiar, and then realized it's because I just saw a post from her a minute ago - one of many linking to a very old video of a large crowd in Venezuela, falsely claiming that it shows people celebrating the US kidnapping Maduro today.
4chan has been doing this for like a decade, why only now is it getting attention?
I assume it’s for yet another push to restrict access to even more of the internet.
Probably because back in the day you had to have the barest modicum of Photoshop skills to pull it off, but now any idiot can upload a pic and click the nekkid button.
Photoshop was 20 years ago. There were automated tools by the late 2010s. Results were mixed: I threw in a picture of Trump and then threw up.
4chan is a relatively sequestered part of the internet, and doing this required a level of skill.
There's a reason people freaked out when deepfakes - first photos, then videos - started appearing: they lowered the barrier to entry for creeps. No longer did you need hundreds of hours learning a tool with poorly documented features to produce fake nudes; even the dumbest dipshit with minimal tech skills and some money could download the right models, rent or buy the right hardware, and get lifelike deepfake photos or videos within minutes or hours…
That quickly passed fortunately, but then came the combined agentic models that could use input images, LLMs, and other generative AI to replace the previously lengthy processes with a much quicker approach through cloud hardware.
AKA Grok. Which, let's be honest, is an even more idiot-proof solution, as all it requires is a Twitter account and marginal prompt-engineering skills (which are quite well documented already), so even the worst trolls can easily get going (and by worst I mean the wannabe trolls who aren't even good enough to be basic trolls).
Yet again, the barrier to entry was lowered significantly.
As if being naked were inhuman…
The way many societies treat nudity is a problem, but that is a wholly separate issue from people being 'undressed' against their will.
But it's not her; she was never "undressed against her will". The cops do that with strip searches, or if you go to prison, and yet there's no popular uproar against that. So presumably folks are OK with others being made naked by force. But once again, this is not that.
The much-downvoted OP you replied to is right: the issue is societal puritanism. This is like they put her head on Daisy Duck. Would anyone but WB give a fuck in that instance?
Yes. Strip searches suck. This is about publicly sexualising people against their will. That also sucks.
And some people seem hell-bent on putting much more scrutiny on the person being sexualised than on the person sexualising them.