Recently there’s been quite a bit of outrage because the developer of Piefed publicly called out the Fediverse Anarchist Flotilla (FAF) for supposedly using LLMs to automate instance moderation. And even though many of our admins and the larger lemmy community went to great lengths to debunk that post, it has become the disinfo that keeps on giving (see https://lemmy.dbzer0.com/post/68749575, https://kolektiva.social/@ophiocephalic/116518887925988112, https://lemmy.dbzer0.com/post/68222242 and more).
After clarifying our position yet another time, someone suggested we should make an official post and an instance policy to “give me something I can boost as a positive example and a sign that things will be better going forward.” Given that this storm in a teacup doesn’t seem to be abating, as people are all too happy to bring it up again and again to malign the FAF, we’re making this post to clarify the situation once and for all.
History
We’re not going to rehash the whole drama and the many hit pieces against the FAF in the past two weeks, but I need to lay out the exact situation as it happened, without the speculation and assumptions people are all too happy to jump to.
- One of our mods develops a tool to download a user’s public posting history through the lemmy API, to be used for evaluating them during moderation, and shares it with some people in the admin team as a work in progress. This tool does not feed anything to LLMs; it simply downloads the comments locally into a text file for easier review than going via the lemmy GUI.
- Someone is reported to our instance admins for blatant zionism and genocide apologia.
- An admin uses the tool to download the accused person’s comment history for evaluation.
- A quick evaluation (without any LLM) confirms that this is a person who needs to be instance-banned. The moderation decision is locked in at this point.
- At the same time, that admin was curious whether LLMs could be used to summarize people’s positions, so that mods could quickly follow up with mod actions without having to evaluate everyone’s posts manually, and to reduce the workload of admins writing long justifications.
- As an experiment, the admin passes the user’s comment history through a locally-run open-weights LLM (Qwen) to see the summarized output (a rough sketch of this whole workflow follows the timeline below). It happens to match their own decision.
- The admin decides to leave the LLM summary in a pastebin along with that user’s posting history for reference. As an inside joke, they decide to claim the post was summarized by OpenAI, as they expected only our community would care about this and our stance on corporate LLMs is well-known at this point.
- The admin bans that person, providing a link to that pastebin as justification.
- The admin decides not to continue using LLMs for summaries anyway, for many valid reasons. As evidence, see the lack of any other pastebins with LLM summaries.
~2 weeks pass…
- The piefed developer is banned by a different mod in our instance for “zionism”. (I put this in quotes as this is one mod’s opinion, and not necessarily our instance’s position.)
- The piefed developer apparently starts going through our instance modlogs for banned zionists and parsing all their justifications.
- The piefed developer discovers the modlog justification from two weeks earlier with the LLM summary.
- The piefed developer quickly asks about it in the common lemmy admin channel, at which point the instance admin in question clarifies that the LLM was not used in the decision-making.
- The piefed developer does not officially reach out to anyone else from our admin team, despite the fact that we’ve reached out before and asked them to contact us in advance for inter-instance matters to avoid escalations.
- The piefed developer makes the public call-out I linked above as a piece of investigative journalism. The piefed developer does not include the comments from our team which conflict with their narrative, and does not ask us for an official statement.
- To this day, the piefed developer has not amended their public call-out in light of the comments that multiple of our admins and lemmy users have left under their post, conflicting with its narrative.
If you feel I’ve misrepresented any steps of this history, please let us know and I’ll be happy to adjust.
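For the technically curious, here is roughly what the two technical steps above amount to. This is a minimal sketch, not the actual tool: the endpoint is the standard lemmy v3 API, but the instance URL, model tag and prompt are all illustrative placeholders, and the summary step is the part that was abandoned.

```python
import requests

INSTANCE = "https://lemmy.example.com"  # placeholder instance URL

def download_history(username: str, max_pages: int = 10) -> str:
    """Download a user's *public* comments via the standard lemmy v3 API."""
    comments = []
    for page in range(1, max_pages + 1):
        resp = requests.get(
            f"{INSTANCE}/api/v3/user",
            params={"username": username, "sort": "New", "limit": 50, "page": page},
            timeout=30,
        )
        resp.raise_for_status()
        batch = resp.json().get("comments", [])
        if not batch:
            break
        comments.extend(c["comment"]["content"] for c in batch)
    return "\n\n---\n\n".join(comments)

def summarize_locally(history: str) -> str:
    """The abandoned experiment: summarize with a locally-hosted open-weights
    model, assuming an Ollama-style local endpoint. The model tag is a placeholder."""
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": "qwen2.5:7b",  # hypothetical local Qwen tag
            "prompt": "Summarize this user's expressed positions:\n\n" + history,
            "stream": False,
        },
        timeout=300,
    )
    resp.raise_for_status()
    return resp.json()["response"]
```

Note that everything a tool like this touches is already public, and nothing in either step leaves the machine it runs on.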
Given all that, we acknowledge that even though we didn’t use LLMs in moderation, we allowed it to appear as if we did, and that’s on us. We will of course not make the same mistake again (i.e. appearing to use LLMs for moderation).
The FAF’s stance on LLM moderation
We are aware that our instance is seen as “LLM-friendly” due to our nuanced take on LLMs, but that does not mean that we, as an instance, ever considered using LLMs for moderating our instance. So we want to make it absolutely crystal clear where we stand on the matter.
As an official policy:
- We have never used LLMs to guide our moderation decisions. This includes using LLM summaries which we would then validate, as well as LLM summaries which we use to confirm our existing decisions. LLMs are just not in our moderation loop whatsoever.
- We have never passed instance data to corporate LLMs.
- We have not used any automated moderation tooling which utilizes LLMs. The closest we have is the FOSS anti-CSAM filter I’ve developed and shared for years now, which relies strictly on locally-hosted machine-vision models (the general technique is sketched below, after this list).
- We have never officially considered using LLMs for moderation, nor do we plan to.
- As a team we’re steadfastly against LLMs for moderation due to their inherent biases.
- If any of the above changes, we will publicly inform the FAF community.
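To illustrate what “locally-hosted machine-vision models” means in practice: the sketch below is not the filter’s actual code, just the general technique that kind of tooling builds on, i.e. zero-shot image scoring with a local CLIP model. The model choice and labels here are placeholders; the weights are downloaded once and inference then runs entirely offline.

```python
import torch
import open_clip
from PIL import Image

# Load an open CLIP model locally (weights are cached after the first run).
model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-32", pretrained="laion2b_s34b_b79k"
)
tokenizer = open_clip.get_tokenizer("ViT-B-32")

LABELS = ["a safe, ordinary image", "unsafe content"]  # placeholder labels

def score_image(path: str) -> dict:
    """Score an image against the text labels, fully on-device."""
    image = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    text = tokenizer(LABELS)
    with torch.no_grad():
        img_f = model.encode_image(image)
        txt_f = model.encode_text(text)
        img_f = img_f / img_f.norm(dim=-1, keepdim=True)
        txt_f = txt_f / txt_f.norm(dim=-1, keepdim=True)
        probs = (100.0 * img_f @ txt_f.T).softmax(dim=-1).squeeze(0)
    return dict(zip(LABELS, probs.tolist()))
```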
We hope this can finally put this matter to rest.
This is the third time this month alone that I’ve seen this Piefed clown try to stir shit in various communities. What the fuck is their problem?
They got incredibly salty that a random /0 mod called them a zionist in modlog and they’ve been on a vendetta ever since
As I expected: inconsequential very online bullshit. What a fucking tool.
Excellent clarification
First time hearing about this drama. Appreciate db0 clarifying that LLMs aren’t a part of mod or admin work.
The Piefed developer can get fucked.
Gotta love the irony of someone who baked proactive censorship of sites and automatic censorship of posters directly into his software complaining about somebody else’s modding decisions…
The biggest issue besides the llm is
a tool to download a user’s public posting history through the lemmy API
I think this needs to be very clearly explained to users in general. In the name of public education.
Your posting history is public, and can and will be used against you. At this point in time, anything posted on the internet is essentially permanent public history. It’s in a database somewhere that can be leaked or scraped.
This is not a lemmy or dbzer0 problem.
Lmao. “At this point”? It has been like this since day one. Everyone seems to have forgotten this 90s gem: You can’t delete stuff from the internet.
Yes, but there has been a cultural and technical shift on the internet. Your average AOL user was not out running witch hunts against people they ideologically opposed, you couldn’t mass-blast your shitter post to your 5m followers and change public opinion overnight, you were not linked by your real name to your employer, and your SSN, address, etc. were not leaked five ways to Sunday.
The landscape has changed “at this point”.
It is incredible to me that in 2026 people do not understand that a webpage you (anyone) can view without logging in to anything… is public.
I just don’t get it.
How long have people been trying to say ‘The Internet Is Forever’?
… Yeah, apparently there does need to be some actual education effort of some kind as to the fundamental basics of how lemmy / the internet works.
Something else that needs to be part of general user education is that user votes are also public information on Lemmy, Piefed, and the wider fediverse due to federation functionality.
Even your messages are technically public.
There is nothing stopping a bad actor from scraping everything said on the Fediverse. I’m sure it won’t be long until the CIA is harvesting everything you do here to prosecute people for being antifa or something.
IYKYK the real solution to this problem, and it’s not another attempt at a federated message board 😏
The best way to get full access to fediverse data (including e.g. voting) is to run your own instance. So what instances, do we think, are operated by bad actors, irrespective of their identity or nature❓🤔
Or to get more conspiratorial, a bad actor would need one or more big instances to collect that info without sticking out. So which big instances, do we think, would be willing to give, or sell, a “bad actor” full access❓😉
(This of course assumes that none of the big instances were created by bad actors.) In any case, if you’re posting overton wrongthink™ and not doing it (pseudo-)anonymously, then you’re doing it wrong. I mean, you’re in a network/platform that goes as far as encouraging you to use multiple accounts, so what’s stopping you from doing it right?!
I mean, you’re in a network/platform that goes as far as encouraging you to use multiple accounts, so what’s stopping you from doing it right?!
easy access to weapons-grade software like llms that can deanonymize users across multiple posts across the entire web
LLMs and stylometry are not magic.
i write using different styles constantly.
And it’s not like people are writing original long essays here all the time, for the weapons-grade tools to assay anything with high confidence.
The CIA is for external affairs. Pretty sure they would use the FBI for that.
The CIA is for external affairs.
When has that ever stopped them?
You realize most of the instances aren’t in the US, right?
I left lemmy because I didn’t like the devs, among other reasons; now the piefed devs are being weirdos too.
There’s always mbin. The devs of that don’t seem crazy.
Does it have instance blocking that includes users?
I’m not too sure. It is a bit more feature-bare compared to lemmy/piefed.
Ironically, the piefed devs are everything people said the Lemmy devs were. People were saying that Lemmy’s developers were going to push opinionated and dangerous changes to the software, and piefed has done all that and more. Their lead developer is more opinionated than dessalines and nutomic combined.
Yeah, I really dislike the head Lemmy dev Dessalines’ political beliefs and the way they seem to run some of the communities they mod on .ml. They don’t always seem to understand certain suggestions for usability improvements, and there have been times when development has been superbly slow on important features.
I truly think a big part of their motivation in making the software was to be able to create a place where their own controversial opinions couldn’t be effectively censored while they maintained complete control over their own instance. I wish I could assume better motivations. But it’s admittedly complete speculation.
All of that said, I’ve not seen them put any of their political beliefs into the software itself, and that is only earning more of my respect as time goes on.
I had a twinge of concern when Rimu implemented a really basic filter for trying to detect screenshots from 4chan, which would block them from being uploaded with an intentionally misleading error message. Should have followed my gut feeling.
There’s no reason for that to be a direct platform-level feature instead of a base of functionality for custom image handling: compression with stuff like imagemagick, hash checks to prevent duplicate uploads, and “image feature” detection and handling code, with that specific bugbear as the example “extension”/“detector”/etc.
Not as a hardcoded feature. Even just from a design standpoint that’s not a great choice.
I truly think a big part of their motivation in making the software was to be able to create a place where their own controversial opinions couldn’t be effectively censored while they maintained complete control over their own instance. I wish I could assume better motivations. But it’s admittedly complete speculation.
You don’t need to speculate when it’s public knowledge. Reddit censored MLs so MLs made their own Reddit. They created Lemmy principally so that lemmygrad.ml could exist[1], and secondarily so that anyone could do the same and the various instances could federate with each other—or not. That’s why they intentionally don’t bake politics into the software. They knew from the start that—if Lemmy were to survive at all—their own instance(s) would become less popular than liberal ones.
Nerds + Drama now and forever :(
It really gets exhausting.
You can’t spell nerd without starting an argument as to whether or not you’ve partially spelled neuroticism, or merely used that as an overly complex premise for a joke.
🤣 take note future mudslingers: anarchists keep their receipts.
Gonna hate me for this, but LLMs like gemma are great for classification tasks, and moderation by LLM might be less biased than a human’s; one never knows. But at the moment that doesn’t seem to be the case, and still…
Yes, I personally use a 3B model for classification.
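For anyone curious, here’s a minimal sketch of what that kind of setup can look like, assuming a local Ollama server; the model tag and label set are placeholders:

```python
import requests

LABELS = ["spam", "harassment", "ok"]  # placeholder taxonomy

def classify(comment: str) -> str:
    """Ask a small local model to emit exactly one label."""
    resp = requests.post(
        "http://localhost:11434/api/generate",  # local Ollama endpoint
        json={
            "model": "qwen2.5:3b",  # hypothetical 3B model tag
            "prompt": (
                f"Classify the following comment as exactly one of {LABELS}. "
                f"Reply with the label only.\n\n{comment}"
            ),
            "stream": False,
        },
        timeout=60,
    )
    resp.raise_for_status()
    answer = resp.json()["response"].strip().lower()
    return answer if answer in LABELS else "ok"  # unrecognized output -> no action
```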
I feel there is a low common denominator of trolling and hostility where automatic AI powered moderation with no human intervention is fully justified and can’t really be argued against. Like those brand new troll accounts that have been hammering ours and other instances.
Any AI use without a human in the loop is dangerous in some way; it is just a glorified next-word predictor. idsa has had me memorize this one.
I don’t agree. You can give people the benefit of the doubt when they’re someone who’s had a bad day but has a history of being a normal person.
It is a waste of human resources to manually check and consider whether to ban 20-minute-old accounts spouting slurs, insults, rape threats, and doxxing.
Those are ones who should be automatically banned with no further consideration given. Most of these dipshits are probably AI or bots themselves. You should use blind automated tools to ban them. There is no legitimate 20 minute old account coming into communities and attacking or starting fights with people. Those are trolls and there is zero ambiguity or abstractness there.
Any AI use without a human in the loop is dangerous in some way
Not in the context of moderating private forums, where the danger is ‘oh no… undo’, especially if you flag it as such.
It might be more about reputational damage. Models do have quirks (goblins, gpt5) and biases, and so do people; you might not want to compound them. And priming is a thing known to mess with human judgment.
you’re confusing shitty outputs with actual impact. shrug, i don’t particularly care if llms are used for modding or not. but there are many ways to use them effectively with no downsides. I certainly don’t trust their output, but they can certainly speed up tasks.
for example, you can use them to bring up problematic comments from the user’s history for review (sketched below), which in no way interferes with human judgement, but does save time. at worst they’ll miss stuff (which a human is likely to do anyway when digging through an entire comment history). not a huge deal since we have fallbacks like the report flag etc.
actual impact is reduced strain on moderators and easily revertible decisions for/against users. there is very little at stake here. people who do give a shit that an ‘llm flagged your comment on a public chat forum’ are overly sensitive. now if you switch the domain, like to job resumes, it’s very different.
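a rough sketch of that triage pattern (local model, hypothetical endpoint and prompt; the human still makes every decision):

```python
import requests

def looks_problematic(comment: str) -> bool:
    """Yes/no check against a small local model (hypothetical tag/endpoint)."""
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": "qwen2.5:3b",  # placeholder
            "prompt": "Does this comment contain harassment, slurs, or threats? "
                      "Answer yes or no.\n\n" + comment,
            "stream": False,
        },
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["response"].strip().lower().startswith("yes")

def triage(history: list[str]) -> list[str]:
    # Surface candidates for *human* review; the model bans nobody.
    # At worst it misses things, and user reports remain the fallback.
    return [c for c in history if looks_problematic(c)]
```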
deleted by creator
No I think using it against brand new troll accounts has essentially no reasonable possibility of false positives. We’re talking about the people who make rape threats, spew nothing but abusive insults and slurs, and spam porn, gore, or offensive memes. Usually in the first ~40 minutes of account creation. So forgive me for not wanting to give them the benefit of the doubt.
@jatone@lemmy.dbzer0.com I feel like using AI to ban the most offensive of troll accounts is not only justified but would benefit the community. You can have manual review for older accounts, but accounts under some low age and post-history threshold should get automatic bans if they troll or abuse others. They’re not worth putting huge amounts of human effort into. Plus, needing to ban them manually can allow them to survive and do damage far longer.
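As a sketch of the kind of gate being described here (all thresholds are hypothetical; anything that doesn’t pass the gate goes to a human):

```python
from datetime import datetime, timedelta, timezone

MAX_ACCOUNT_AGE = timedelta(hours=1)  # hypothetical "brand new" cutoff
MAX_POST_COUNT = 5                    # hypothetical history threshold

def eligible_for_auto_ban(created_at: datetime, post_count: int,
                          flagged_abusive: bool) -> bool:
    """Auto-ban only brand-new, low-history accounts that an automated
    check has already flagged as abusive; everything else stays manual."""
    is_new = datetime.now(timezone.utc) - created_at < MAX_ACCOUNT_AGE
    return is_new and post_count <= MAX_POST_COUNT and flagged_abusive
```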
LLM as a technology isn’t inherently bad. All of the problems with it are related to the fact that rich sociopaths are implementing, using, and misusing it (or allowing that to happen) in evil ways.
E.g. an ethically trained local model implemented in a way to assist a mod team really isn’t a problem.
The piefed developer can get fucked. This one person has caused nothing but chaos in the Lemmy community. I’ve seen similar posts to this one, having to do with different topics, where admins and mods have to put out some stupid statement to counter some dogshit decision or opinion from this person.
A few examples:
Unfortunately I think Rimu is just going to keep posting this bullshit and drowning out the truth. It’s been pretty clear to me that he’s been a wrecker for a while.
The fear is that Rimu and the liberal instances can control the narrative by blocking/hiding the leftist instances and users. The casual users on the liberal instances might not even see the criticism
snicker libs really digging deep to try and trash FAF.
Finally, this whole drama was nonsense
rimu is butt hurt his BFFs at world keep getting called out for their shitty modding and this was the best he could do. =p
That’s why it’s so important to check your own biases, be rational and skeptical. I mean the people reading Rimu’s post.
It’s incredibly easy to be convinced by some manufactured argument; if you want, you can present almost any kind of evidence as either supporting a conclusion or disproving it, just depending on how you word it or what you leave out.
Everyone has an agenda. Including this post :D
Who is the FAF?
Freely Analyzing Frankfurters
It’s the fediverse anarchist flotilla. lemmy.dbzer0.com and anarchist.nexus, with quokk.au as an ally, as far as I understand it. Small collective of allied fediverse instances.
AI-haters are making themselves irrelevant so why care. Using local LLMs for various tasks they’re well suited for is just being smart.
The problem is stupid people think the programs are alive and sentient, when in reality they don’t ‘understand’ anything; they just compute the most likely next word. I’d rather not live in a world dominated by that mentality, it lacks care.
Everything we know about how human brains work shows that we’re also just pattern matching machines in a loop.
It’s always a good idea to study a subject before calling others stupid.
Human brains do not play by the same fundamental rules as software and hardware do.
They can be reasonably simplified down to the same rules, in a fair number of contexts and scopes… but they’re very far from ‘the same thing’.
Unless you’re talking about simulating a brain down to the atom inside a computer, or literally growing/fabricating biological machine hybrids… no, “AI”, specifically LLMs do not well and wholly act like human brains.
Both of those are current avenues of research into AI… they have been for decades… but LLMs really are just fancy autocompletes, an enormous mess of matrix comparisons, basically.
Human brains, biological brains have many more layers of processes going on than just looping pattern machines, they are far more complex and have many entirely distinct functional mechanisms at play.
Consider Phineas Gage.
Shoot a railroad spike through your local PC running an LLM, tell me how the LLM ‘performs’ after that.
What is the LLM equivalent of neuroplasticity?
Epigenetics?
All the actual mechanisms that things like psychiatric drugs operate by?
I recommend Susan Blackmore’s “Consciousness: An Introduction”, and of course Douglas Hofstadter’s “Gödel, Escher, Bach” and the follow-up “I Am a Strange Loop”.
I didn’t say human brains function like LLMs. I said that everything we know about how human brains work indicates we’re also just pattern matching machines in a loop.
The point is that the fact that LLMs are “next token predictors” doesn’t in itself say anything about what the emergent effects of that can be.
I am pretty sure I read Godel Escher and Bach when I was in college, almost 20 years ago now.
That is what I am referring to when I say ‘in some contexts and scopes, it is a reasonable simplification’.
LLMs are pattern matching machines, that operate via code that involves many different kinds of loops.
Brains appear to act in that manner as well, but my point is that they do many, many other things and that is because they are fundamentally different kinds of ‘machines’ that have fundamentally different ‘operating principles’, many of which we are still figuring out, quite likely many of which we are currently not even aware of or barely understand at all.
And, actually, that LLMs are ‘next token predictors’ does say a lot about their internally emergent properties… that is to say, the lack of them, their bottlenecks.
LLMs are not going to be able to progress into being AGI, because there are many things Brains can do that LLMs cannot… for one, forgetting things, unlearning false things.
An LLM cannot modify its own training data. It cannot modify its own conceptual association scores.
Humans can do this. They can realize some fundamental notion they have is actually significantly wrong, incorrect, and what happens is the brain literally physically restructures itself when that occurs.
Unless or until the actual fundamental idea of what an LLM is is significantly altered or augmented… they’re going to keep running into the diminishing-returns problem that they currently are.
Brains are much more capable, diverse and complex than LLMs.
Yes, as I’ve described here: https://blog.troed.se/posts/the-delta-between-an-llm-and-consciousness/
I didn’t say human brains function like LLMs

Today’s LLMs are based on a Google research paper from 2017. Another published paper that would solve this was published by Google in December last year: https://aipapersacademy.com/nested-learning-hope/
Look I don’t care about your blog, I care about the comment you made here, in this thread.
You are not important enough to assign me required reading, much less presume I am ‘familiar with your previous works’.
You are another random username and profile pic that is saying some words.
You should have the contextual awareness to realize that a lot of people on what is basically a public internet discussion forum have widely varying degrees of knowledge <-> misinformation regarding LLMs, and that when you make a single-sentence statement that, yes, does not precisely say that brains only do what LLMs do… a whole bunch of other people are going to read it as such.
Thus, I provided that additional context and nuance here, in this thread, so that anyone who stumbles upon this thread can read this thread and be better informed, here.
Could you find any body of neuroscientists that endorses that claim?
Yes, the whole field.
Lol. Citation definitely needed.
Find one conference group endorsing this then. Because I think you’re making this up, I think you have basically zero familiarity with research into the underlying mechanisms of human cognition and neurobiology.
I don’t really care much for what you think - I already sourced two well known experts on the subject in another post in this thread.
aka you asked an llm and it gave you a biased answer?
And yet from neither can you provide a citation of “human cognition is just pattern matching in a loop”
I’m not saying they don’t have any purpose, but it might be a good idea to question whether you would like a cold calculating machine to interact with, instead of something made with human care.
I think perhaps this world has dehumanized everyone so much that they would prefer interaction with cold sycophants instead of a meat problem
LLMs are tools. Like … compilers. Your post comes off very strange with that in mind.
I agree, the problem isn’t the people who approach them like tools, it’s the people who approach them like people, or confidants, or gods. I’m aware that isn’t everyone, but some append consciousness to something akin to advanced autocorrect like their phone has.
We can merry-go-round the philosophy of whether humans are tools, or consciousness is an algorithm, but it is missing the point I fear, that this stuff doesn’t comprehend, it says it does, it’s a simulacrum of understanding, wrapped up in humanlike speech, not something that cares about anything
I’ve never really cared much if my C-compiler “understands” assembler - just that it produces good results ;)
I used a local LLM yesterday to reverse engineer Winbond’s NAND ECC algorithm*. That wouldn’t be possible with any other tool since the LLM spent the time “reasoning” around algorithms. I don’t really care much about the definition of “reasoning” - just that the job got done.
I feel the AI haters try very hard to claim that the LLMs can’t do anything new. That just … isn’t so. LLMs are a new kind of tool and they have plenty of viable uses.
Eh, I don’t use it much, but when looking for answers to questions that don’t seem to be answered in old forum posts, it’s been pretty helpful in sorting out tech issues.
And I think what they mean by ‘can’t do anything new’, is that it is built from what already exists, but that’s true of all tech really. I think the bigger issue isn’t that it’s not creative, but that at a certain point, what it will be building off is its own output. Humans have the same issue in reproduction, if we only inbreed, we cause issues in our schematics, as it were.
And here I’ve done it to myself, attributing living elements to a tool. I try to be mindful of these things, but just imagine how many aren’t.
Concerns can be debated, lashing out is for toddlers
lmao, AI users are so funny
AI is cool, I just hate LLMs and people saying they can do everything, which isn’t even happening here. It just sounds like a zio saying shit; they do that even worse than LLMs, except LLMs aren’t as enthusiastic about rape and sometimes say true things just by chance, which Zionists are and don’t.
breathe man
Using local LLMs for various tasks
But is the data local?
AIbros always swear that it’s just one more datapoint bro, just one more datasource, just the one touch to the model. But they won’t ever pay the dues to the people they extract from.
People want to pretend as if data being used for LLM training is somehow the exception to the rule, when the reality is that the modern internet is built on data aggregation.
If you want to protect your publicly available social media activity from being used by any for-profit third party, then you might as well just start a private message board with your local friends that isn’t advertised on the public web. Everything you do on the fediverse is logged and shared across hundreds, if not thousands, of servers, including private ones that are aggregating data for profit. That’s kind of the entire point of it.
I’m honestly not even sure what to tell someone who takes issue with open-source LLMs trained on publicly available data on the open web. Like, fuck Wikipedia too, I guess?
A local LLM is local… it is capable of running on the hardware that you have, without any internet connection.
I have been running a local LLM on my Steam Deck for about a year now, within a sandboxed environment.
If you have the LLM on your PC, run it locally, feed it a bunch of data you also have locally… its basically like using a text editor to open a text file that is on your computer, or an offline single player game loading up a local save file.
AI Bros tend to be the ones massively jamming through networked, non-local LLMs that live on servers somewhere, where you pay per the amount of input you send to them and output they send back to you.
A local LLM, well you can download them for free, and then turn off the internet, and use them, if you know how.
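For example, here’s a minimal sketch with llama-cpp-python, assuming a quantized GGUF model file already sitting on disk (the filename is a placeholder). Once the file is downloaded, nothing here ever touches the network:

```python
from llama_cpp import Llama

# Load the model straight from local disk; no server, no internet connection.
llm = Llama(
    model_path="./qwen2.5-3b-instruct-q4_k_m.gguf",  # hypothetical local file
    n_ctx=4096,
    verbose=False,
)

out = llm("Explain in one sentence why local inference is private:", max_tokens=64)
print(out["choices"][0]["text"])
```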
… I can literally run my Steam Deck LLM anywhere, and I can keep it powered indefinitely with a solar panel and battery/transformer, like the kind you typically see used as an emergency backup or camping power solution.
There are differences between the actual tech of LLMs themselves, and how people with much more money than sense are currently reshaping society with them.
You can go the AI Bro route and turn them into subscription services… or you can have the entire thing within your home and totally under your own control, your own functional ownership.