Seems believable. I’m curious, how do Lemmy instances protect themselves from AI slop and bots?
Manual labor, the Communist Party of China pays us to keep Lemmy free of bots and revisionists.
Alt text: You guys are getting paid?
Apart from not being that interesting for now, the first line of defence for most is manually-approved sign ups, as far as I can tell.
When the Fediverse grows, I think that weeding out accounts that post slop will be the “easy” part; the hardest part will be to identify the silent bot accounts that do nothing but upvote.
I vaguely remember kbin allowing you to see who upvoted a particular post, so it might not be too difficult.
Tough to differentiate bots that only vote from human lurkers who only vote.
Yeah, you’d need some graph analysis. Bots will all simultaneously upvote certain things, and over time a pattern should emerge.
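The co-voting idea above can be sketched roughly like this (all names, thresholds, and the vote-log format are invented for illustration): count how often two accounts vote on the same post within a few seconds of each other, and flag pairs that do it repeatedly.

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical vote log: (account, post_id, unix_timestamp)
votes = [
    ("bot1", "p1", 100), ("bot2", "p1", 101), ("bot3", "p1", 102),
    ("bot1", "p2", 500), ("bot2", "p2", 500), ("bot3", "p2", 501),
    ("alice", "p1", 300), ("alice", "p2", 9000),
]

WINDOW = 5  # seconds: votes this close together count as "simultaneous"

def suspicious_pairs(votes, min_shared=2):
    """Pairs of accounts that repeatedly vote on the same posts within WINDOW."""
    by_post = defaultdict(list)
    for user, post, ts in votes:
        by_post[post].append((user, ts))
    co_votes = defaultdict(int)
    for entries in by_post.values():
        for (u1, t1), (u2, t2) in combinations(entries, 2):
            if u1 != u2 and abs(t1 - t2) <= WINDOW:
                co_votes[tuple(sorted((u1, u2)))] += 1
    return {pair for pair, n in co_votes.items() if n >= min_shared}

# The three bots pair up on both posts; alice never votes within the window.
print(suspicious_pairs(votes))
```

A real version would need to handle popular posts (where honest users also vote close together), so you’d probably normalize by post activity before flagging anyone.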
Manual moderation, and there are some moderation bots that can detect spam.
In my imagination, some sort of referral/voucher system might work. A invites B, B invites C. C turns out to suck. Ban C, discredit B heavily and discredit A lightly. Enough discredit and you get banned or can’t invite more people.
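A toy sketch of that invite-tree idea (every name and threshold here is made up): banning an account discredits its inviter heavily and the inviter’s inviter lightly, and enough discredit blocks inviting or triggers a ban.

```python
BAN_THRESHOLD = 10  # invented threshold: this much discredit gets you banned

class Accounts:
    def __init__(self):
        self.inviter = {}    # account -> who invited them
        self.discredit = {}  # account -> accumulated discredit points
        self.banned = set()

    def invite(self, inviter, invitee):
        self.inviter[invitee] = inviter
        self.discredit.setdefault(invitee, 0)
        self.discredit.setdefault(inviter, 0)

    def can_invite(self, account):
        # Half the ban threshold already revokes invite rights
        return (account not in self.banned
                and self.discredit.get(account, 0) < BAN_THRESHOLD // 2)

    def ban(self, account, heavy=6, light=2):
        self.banned.add(account)
        parent = self.inviter.get(account)
        if parent is not None:
            self._discredit(parent, heavy)           # B pays heavily for C
            grandparent = self.inviter.get(parent)
            if grandparent is not None:
                self._discredit(grandparent, light)  # A pays lightly

    def _discredit(self, account, points):
        self.discredit[account] = self.discredit.get(account, 0) + points
        if self.discredit[account] >= BAN_THRESHOLD:
            self.ban(account)

acc = Accounts()
acc.invite("A", "B")
acc.invite("B", "C")
acc.ban("C")  # C turns out to suck: B is discredited heavily, A lightly
```

After the ban, B has too much discredit to invite anyone else, while A keeps invite rights unless more of B’s invitees go bad.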
They don’t, but they’re uninteresting for now.