I noticed a bit of panic around here lately, and since I have had to continuously fight against pedos for the past year, I have developed tools to help me detect and prevent this content.
As luck would have it, we recently published one of our anti-CSAM checker tools as a Python library that anyone can use. So I thought I could use this to help Lemmy admins feel a bit safer.
The tool can either go through all your images in your object storage and delete all CSAM, or it can run continuously and scan and delete all new images as well. The suggested approach is to run it with --all once, and then run it as a daemon and leave it running.
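For anyone curious what the --all pass roughly amounts to, here is a minimal sketch against an S3-compatible bucket. The endpoint, bucket name, and the `is_csam()` stand-in are placeholders for illustration, not the library's actual API:

```python
# Sketch of a one-shot scan over an S3-compatible object store.
# Assumptions: bucket/endpoint names are examples, and is_csam() is a
# placeholder for the real GPU-backed checker.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://objectstorage.example.com",  # your pict-rs bucket endpoint
    aws_access_key_id="...",
    aws_secret_access_key="...",
)
BUCKET = "lemmy-images"  # example bucket name


def is_csam(image_bytes: bytes) -> bool:
    """Placeholder for the actual classifier."""
    raise NotImplementedError


def scan_all() -> None:
    # Walk every object in the bucket, check it, and delete anything flagged.
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=BUCKET):
        for obj in page.get("Contents", []):
            key = obj["Key"]
            body = s3.get_object(Bucket=BUCKET, Key=key)["Body"].read()
            if is_csam(body):
                s3.delete_object(Bucket=BUCKET, Key=key)
                print(f"deleted {key}")


if __name__ == "__main__":
    scan_all()
```

The daemon mode is the same idea, just looping over newly uploaded objects instead of the whole bucket.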
A better option would be to retrieve the exact images uploaded via the lemmy/pict-rs API, but we're not quite there yet.
Let me know if you have any issues or suggested improvements.
EDIT: Just to clarify, you should run this on your desktop PC with a GPU, not on your lemmy server!
Why? Use something like RAFT, elect a leader, have the leader run the AI tool, then exchange results, with each node running its own subset of image hashes.
That does mean you need a trust system, though.
As I said, I don't think you need one: manually subscribing to each trusted instance via ActivityPub should suffice. The pass/fail determination can be done when querying for known images.
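Something along these lines, as a sketch. The /known_csam_hashes endpoint is made up for illustration; no such API exists in Lemmy or ActivityPub today:

```python
# Sketch: pass/fail by checking an upload's hash against hash lists
# published by manually trusted instances. Endpoint name is hypothetical.
import hashlib
import requests

TRUSTED_INSTANCES = ["https://lemmy.example", "https://other.example"]  # manually curated


def fetch_known_hashes() -> set[str]:
    known: set[str] = set()
    for base in TRUSTED_INSTANCES:
        resp = requests.get(f"{base}/known_csam_hashes", timeout=10)
        resp.raise_for_status()
        known.update(resp.json())  # expected: a JSON list of hex digests
    return known


def passes(image_bytes: bytes, known: set[str]) -> bool:
    # Fail (i.e. reject/delete) if the image's hash is already known-bad.
    return hashlib.sha256(image_bytes).hexdigest() not in known
```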
Yeah, that works. Who is the leader and how does it change? Does Lemmy.World take over because it's the largest?
Hash the image, then assign hash ranges to servers that are part of the ring. You’d use RAFT to get consensus about who is responsible for which ranges. I’m largely just envisioning the Scylla gossip replacement as the underlying communications protocol.
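Roughly like this, ignoring RAFT and gossip (which would handle membership, consensus on range ownership, and failover); the node list here is just a static stand-in:

```python
# Sketch of range-based ownership on a hash ring: hash the image, find the
# ring point that covers it, and that node is responsible for scanning it.
import hashlib
from bisect import bisect_right

NODES = ["lemmy.world", "lemmy.ml", "beehaw.org"]  # example ring members

# Place each node at a deterministic point on a 2^256 ring.
RING = sorted((int(hashlib.sha256(n.encode()).hexdigest(), 16), n) for n in NODES)
POINTS = [p for p, _ in RING]


def responsible_node(image_bytes: bytes) -> str:
    """Return the node whose hash range covers this image."""
    h = int(hashlib.sha256(image_bytes).hexdigest(), 16)
    idx = bisect_right(POINTS, h) % len(RING)  # wrap around past the last point
    return RING[idx][1]
```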