Recently there’s been quite a bit of outrage because the developer of Piefed publicly called out the Fediverse Anarchist Flotilla (FAF) for supposedly using LLMs to automate instance moderation. Even though many of our admins and the larger lemmy community went to great lengths to debunk that post, it has become the disinfo that keeps on giving (see https://lemmy.dbzer0.com/post/68749575, https://kolektiva.social/@ophiocephalic/116518887925988112, https://lemmy.dbzer0.com/post/68222242 and more).
After clarifying our position yet another time, someone suggested we should make an official post and an instance policy to “give me something I can boost as a positive example and a sign that things will be better going forward.” Given that this storm in a teacup doesn’t seem to be abating, and people are all too happy to bring it up again and again to malign the FAF, we’re making this post to clarify the situation once and for all.
History
We’re not going to rehash the whole drama and the many hit pieces against the FAF in the past two weeks, but I need to lay out the exact situation as it happened, without the speculations and assumptions that people are all too happy to jump to.
- One of our mods develops a tool to download a user’s public posting history through the lemmy API, to be used for evaluating them during moderation, and shares it with some people in the admin team as a work in progress. This tool does not feed anything to LLMs; it simply downloads the comments into a local text file for easier review than going through the lemmy GUI.
- Someone is reported to our instance admins for blatant zionism and genocide apologia.
- An admin uses the tool to download the accused person’s comment history for evaluation.
- A quick evaluation (without LLM) confirms that this is a person that needs to be instance-banned. The moderation decision has now been locked-in at this point.
- At the same time, that admin was curious whether LLMs could be used to summarize people’s positions so that mods could quickly follow up with mod actions, without having to evaluate everyone’s posts manually, and to reduce the workload of admins writing long justifications.
- As an experiment, the admin passes the user’s comment history through a locally-run open-weights LLM (Qwen) to see the summarized output. It happens to match their own decision.
- The admin decides to leave the LLM summary in a pastebin along with that user’s posting history for reference. As an inside joke, they claim the post was summarized by OpenAI, expecting that only our community would care about this, as our stance on corporate LLMs is well-known at this point.
- The admin bans that person, providing a link to that pastebin as justification.
- The admin decides not to continue using LLMs for summaries anyway, for many valid reasons. As evidence, see the lack of any other pastebins with LLM summaries.
~2 weeks pass…
- The piefed developer is banned by a different mod in our instance for “zionism”. (I put this in quotes as this is one mod’s opinion, and not necessarily our instance’s position.)
- The piefed developer apparently starts going through our instance modlogs for banned zionists and parsing all their justifications.
- The piefed developer discovers the modlog justification from two weeks before with the LLM summary.
- The piefed developer asks about it in the common lemmy admin channel, at which point the instance admin in question clarifies that the LLM was not used in the decision-making.
- The piefed developer does not officially reach out to anyone else from our admin team, despite the fact that we’ve reached out before and asked them to contact us in advance on inter-instance matters to avoid escalations.
- The piefed developer makes the public call-out I linked above as a piece of investigative journalism. The piefed developer does not include the comments from our team which conflict with their narrative, and does not ask us for an official statement.
- The piefed developer to this day has not amended their public call-out, despite the comments that multiple of our admins and lemmy users have left under their post conflicting with its narrative.
If you feel I’ve misrepresented any steps of this history, please let us know and I’ll be happy to adjust.
Given that, we acknowledge that even though we didn’t use LLMs in moderation, we allowed it to appear as if we did, and that’s on us. We will of course not make the same mistake again (that is, appearing to use LLMs for moderation).
The FAF’s stance on LLM moderation
We are aware that our instance is seen as “LLM-friendly” due to our nuanced take on LLMs but that does not mean that we, as an instance, ever considered using LLMs for moderating our instance. So we want to make it absolutely crystal clear how we stand on the matter.
As an official policy:
- We have never used LLMs to guide our moderation decisions. This includes using LLM summaries which we would then validate, as well as LLM summaries which we use to confirm our existing decisions. LLMs are just not in our moderation loop whatsoever.
- We have never passed instance data to corporate LLMs.
- We have not used any automated moderation tooling which utilizes LLMs. The closest we have is the FOSS anti-CSAM filter I’ve developed and shared for years now, which relies strictly on locally-hosted machine-vision models.
- We have never officially considered using LLMs for moderation, nor do we plan to.
- As a team we’re steadfastly against LLMs for moderation due to their inherent biases.
- If any of the above changes, we will publicly inform the FAF community.
We hope this can finally put this matter to rest.


Everything we know about how human brains work shows that we’re also just pattern matching machines in a loop.
It’s always a good idea to study a subject before calling others stupid.
Could you find any body of neuroscientists that endorses that claim?
Yes, the whole field.
Lol. Citation definitely needed.
Find one conference group endorsing this then. Because I think you’re making this up, I think you have basically zero familiarity with research into the underlying mechanisms of human cognition and neurobiology.
I don’t really care much for what you think - I already sourced two well known experts on the subject in another post in this thread.
aka you asked an llm and it gave you a biased answer?
And yet from neither can you provide a citation of “human cognition is just pattern matching in a loop”
The difference between you and me is that I’ve studied the subject. You have not. It’s not on me to teach you the contents of the literature.
Go be annoying somewhere else.
Nah if you are making a claim you should provide sources.
Which I’ve done, and not a single person has looked them up. The reason for that is that no one here is actually interested in the subject - they just cannot accept that their feels about humans being special snowflakes have no support in the science.
Did you really just drop “I’ve done my research” here?! Lol! Bet you’re an immunologist, too. And a lawyer on top.
You lost the bet - where do I get my payout?
Your cognitive bias is known as “Out-Group Homogeneity Bias”. Enjoy.
https://dictionary.apa.org/outgroup-homogeneity-bias
You do realise saying “I’ve studied the subject” has no credibility behind it whatsoever?
If you’ve truly studied the subject you’d be able to explain your rationale, not lash out against people asking you to clarify.
As far as this thread is concerned you’ve read a book once and it made you an armchair expert. Probably the worst pseudo-intellectualism I’ve seen on here for a long while.
I don’t care. See how easy it is? Either you’re interested in the subject and you would already know that what I wrote is completely uncontroversial, or you spend time making ignorant posts because a simple fact disagrees with your feels.
If you had studied you would know that when you make extremely strong claims you must back them up with more evidence than “here is a book”. The typical fashion is to provide a quotation, or chapter/page reference making it easy to demonstrate that you’re not talking out of your arse.
Of course no serious person actually thinks human brains are “just pattern matchers in a loop” because that statement is silly, it’s not even clear what that would mean. So of course you can’t cite someone saying that.
Read more. Post less.
Human brains do not play by the same fundamental rules as software and hardware do.
They can be reasonably simplified down to the same rules, in a fair number of contexts and scopes… but they’re very far from ‘the same thing’.
Unless you’re talking about simulating a brain down to the atom inside a computer, or literally growing/fabricating biological machine hybrids… no, “AI”, specifically LLMs do not well and wholly act like human brains.
Both of those are current avenues of research into AI… they have been for decades… but LLMs really are just fancy autocompletes, an enormous mess of matrix comparisons, basically.
Human brains, biological brains have many more layers of processes going on than just looping pattern machines, they are far more complex and have many entirely distinct functional mechanisms at play.
Consider Phineas Gage.
Shoot a railroad spike through your local PC running an LLM, tell me how the LLM ‘performs’ after that.
What is the LLM equivalent of neuroplasticity?
Epigenetics?
All the actual mechanisms that things like psychiatric drugs operate by?
Believe it or not this does actually exist in a sense. There is such a thing as model surgery where parts of models are removed, bits from multiple models recombined, or model layers duplicated. Sometimes this is used to make an LLM with more performance or less resource usage. Models can then be “healed” by continued training so they behave correctly after surgery.
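To make “model surgery” a bit more concrete, here is a toy numpy sketch of the layer-duplication trick (sometimes called depth up-scaling or a frankenmerge). This is not a real transformer and the `make_layer`/`forward` names are purely illustrative; the point is just that a model built from residual layers still produces sensible output after middle layers are spliced in, which is why a little continued training can “heal” it.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_layer(dim):
    # One toy "block": a small residual MLP layer (stand-in for a transformer block).
    return {"W": rng.normal(0, 0.02, (dim, dim)), "b": np.zeros(dim)}

def forward(layers, x):
    for layer in layers:
        # Residual connection: each layer only adds a perturbation to x.
        x = x + np.tanh(x @ layer["W"] + layer["b"])
    return x

dim = 8
base = [make_layer(dim) for _ in range(4)]

# "Surgery": duplicate the two middle layers to build a deeper model,
# the same move layer-duplicating merges make on real checkpoints.
merged = base[:2] + [dict(layer) for layer in base[1:3]] + base[2:]

x = rng.normal(size=dim)
print(len(base), len(merged))    # 4 vs 6 layers
print(forward(merged, x).shape)  # (8,) - the spliced model still runs end to end
```

Because each layer is residual, the duplicated layers only nudge the activations rather than scrambling them, which is the intuition for why post-surgery “healing” via continued training works at all.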
If you want a hardware and systems level example look no further than data center level redundancy and network routing systems that adapt from failures.
I am not sure epigenetics would have an equivalent, given we are not talking about biological creatures.
Have you never heard of AI drugs using adversarial examples or activation engineering?
Activation engineering has been used in studies like this to manipulate emotion concepts in LLMs: https://www.anthropic.com/research/emotion-concepts-function Here it’s used for steering LLMs: https://arxiv.org/abs/2308.10248
It’s also used to uncensor LLMs.
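For anyone curious what “activation engineering” actually does, the recipe in the linked arXiv paper (ActAdd, as I understand it) boils down to: take the difference of mean activations between two sets of contrasting prompts, then add that vector back into the hidden state at inference. A toy numpy sketch, where `hidden_activation` is an illustrative stand-in for one layer of a real transformer:

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 16

# Stand-in for a transformer's hidden activations at one chosen layer.
def hidden_activation(prompt_embedding, W):
    return np.tanh(prompt_embedding @ W)

W = rng.normal(0, 0.3, (dim, dim))

# Two small sets of prompt embeddings for opposing concepts
# (in a real model these would come from e.g. "calm" vs "angry" prompts).
concept_a = rng.normal(0.5, 1.0, (8, dim))
concept_b = rng.normal(-0.5, 1.0, (8, dim))

# Steering vector: difference of the mean activations of the two concept sets.
steer = hidden_activation(concept_a, W).mean(0) - hidden_activation(concept_b, W).mean(0)

def steered_forward(prompt_embedding, alpha):
    h = hidden_activation(prompt_embedding, W)
    return h + alpha * steer  # inject the steering vector at this layer

x = rng.normal(size=dim)
plain = steered_forward(x, 0.0)
pushed = steered_forward(x, 4.0)

# The steered activation moves toward concept A along the steering direction.
print(plain @ steer < pushed @ steer)  # True
```

No gradient updates, no retraining: you are just shifting one layer’s activations along a direction that the model already associates with a concept, which is also how the uncensoring tricks mentioned above tend to work.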
I am not going to sit here and get into an argument around if LLMs are or aren’t sentient because we don’t know enough about consciousness to make that determination. We are still a long way away from solving the philosophical hard problem of consciousness. We don’t really know what parts of animals or humans are necessary components of sentience or are just implementation specific details made by evolution. I also think that you are judging an LLM based on poor understanding of how they work and the ecosystem built up around them.
I recommend Susan Blackmore’s “Consciousness: An Introduction”, and of course Douglas Hofstadter’s “Gödel, Escher, Bach” and the follow-up “I Am a Strange Loop”.
I didn’t say human brains function like LLMs. I said that everything we know about how human brains work indicates we’re also just pattern matching machines in a loop.
The point is that the fact that LLMs are “next token predictors” doesn’t in itself say anything about what the emergent effects of that can be.
I am pretty sure I read Godel Escher and Bach when I was in college, almost 20 years ago now.
That is what I am referring to when I say ‘in some contexts and scopes, it is a reasonable simplification’.
LLMs are pattern matching machines, that operate via code that involves many different kinds of loops.
Brains appear to act in that manner as well, but my point is that they do many, many other things and that is because they are fundamentally different kinds of ‘machines’ that have fundamentally different ‘operating principles’, many of which we are still figuring out, quite likely many of which we are currently not even aware of or barely understand at all.
And, actually, that LLMs are ‘next token predictors’ does say a lot about their internally emergent properties… that is to say, the lack of them, their bottlenecks.
LLMs are not going to be able to progress into being AGI, because there are many things Brains can do that LLMs cannot… for one, forgetting things, unlearning false things.
An LLM cannot modify its own training data. It cannot modify its own conceptual association scores.
Humans can do this. They can realize some fundamental notion they have is actually significantly wrong, incorrect, and what happens is the brain literally physically restructures itself when that occurs.
Unless or until the actual fundamental idea of what an LLM is gets significantly altered or augmented… they’re going to keep running into the diminishing-returns problem that they currently are.
Brains are much more capable, diverse and complex than LLMs.
Catastrophic forgetting is literally one of the main problems with current models that ML research is currently trying to overcome. But whatever.
Yes, as I’ve described here: https://blog.troed.se/posts/the-delta-between-an-llm-and-consciousness/
Today’s LLMs are based on a Google research paper from 2017. Another published paper that would solve this was published by Google in December last year: https://aipapersacademy.com/nested-learning-hope/
Look I don’t care about your blog, I care about the comment you made here, in this thread.
You are not important enough to assign me required reading, much less presume I am ‘familiar with your previous works’.
You are another random username and profile pic that is saying some words.
You should have the contextual awareness to realize that a lot of people on basically a public internet discussion forum have a lot of varying degrees of knowledge <-> misinformation regarding LLMs, and that when you make a single sentence statement that yes, does not precisely say that Brains only do what LLMs do… a whole bunch of other people are going to read it as such.
Thus, I provided that additional context and nuance here, in this thread, so that anyone who stumbles upon this thread can read this thread and be better informed, here.
Oh I haven’t seen a single person replying so far who has shown any interest in being “better informed”.
How about everyone upvoting the additional context I am providing being taken as a signal that they support being more fully informed, here, in this thread?
With this ‘reddit-esque’ style of a discussion forum, you can expect that something like 90% of people are lurkers who mostly interact via up and down votes.
Beyond that, a number of people elsewhere in this thread are… having fairly extensive discussions, about what actually constitutes being ‘better informed’ irt LLMs.
I’m not saying they don’t have any purpose, but it might be a good idea to question whether you would like a cold calculating machine to interact with, instead of something made with human care.
I think perhaps this world has dehumanized everyone so much that they would prefer interaction with cold sycophants instead of a meat problem.
LLMs are tools. Like … compilers. Your post comes off very strange with that in mind.
I agree, the problem isn’t the people who approach them like tools, it’s the people who approach them like people, or confidants, or gods. I’m aware that isn’t everyone, but some ascribe consciousness to something akin to the advanced autocorrect their phone has.
We can merry-go-round the philosophy of whether humans are tools, or consciousness is an algorithm, but I fear that misses the point: this stuff doesn’t comprehend. It says it does; it’s a simulacrum of understanding wrapped up in humanlike speech, not something that cares about anything.
I’ve never really cared much if my C-compiler “understands” assembler - just that it produces good results ;)
I used a local LLM yesterday to reverse engineer Winbond’s NAND ECC algorithm*. That wouldn’t be possible with any other tool since the LLM spent the time “reasoning” around algorithms. I don’t really care much about the definition of “reasoning” - just that the job got done.
I feel the AI haters try very hard to claim that the LLMs can’t do anything new. That just … isn’t so. LLMs are a new kind of tool and they have plenty of viable uses.
*) https://blog.troed.se/posts/winbond_nand_ecc/
Eh, I don’t use it much, but when looking for answers to questions that don’t seem to be answered in old forum posts, it’s been pretty helpful in sorting out tech issues.
And I think what they mean by ‘can’t do anything new’, is that it is built from what already exists, but that’s true of all tech really. I think the bigger issue isn’t that it’s not creative, but that at a certain point, what it will be building off is its own output. Humans have the same issue in reproduction, if we only inbreed, we cause issues in our schematics, as it were.
And here I’ve done it to myself, attributing living elements to a tool. I try to be mindful of these things, but just imagine how many aren’t.