• 177 Posts
  • 313 Comments
Joined 1 year ago
Cake day: July 18th, 2024

    1. At least on Lemmy, this is definitely what I’ve observed. If you look at any thread that’s full of Sturm und Drang, it’s usually a tiny handful of accounts creating all of it (and then roping other people into their hostility, like a little chain reaction, like Chernobyl). If you look at the impact, it looks like everyone’s an asshole, but if you look at the root of the trouble, you realize most people are fine; it’s a tiny, noisy, hostile minority that gets everyone else spun up.
    2. I agree: if you’re in NYC right at this moment in history and you can’t see anything bigger worth getting heated up about than The White Lotus, you should talk with people in your community more.

  • Grok responded to X users’ questions about public figures by generating foul and violent rape fantasies, including one targeting progressive activist and policy analyst Will Stancil. (Stancil has indicated he may sue X.)

    When you fine-tune a coding AI on code that has deliberate flaws in it, and then switch it back to having conversations in English, it starts praising Hitler and producing other hateful content. It wouldn’t surprise me if fine-tuning Grok to be a Nazi also led it to “generalize” some additional things that weren’t intended by the operators.

  • I wonder what that indicates about its data set and the general use of image gen.

    I think you know.

    On a more serious note, it’s interesting to put in pure nonsense as a prompt (just strings of syllables with no meaning) and see what it comes up with. It likes misshapen heads, which makes sense because it’s trained on a lot of human features, but for some reason it also really likes houses, fish, and hot air balloons. The images are, in my opinion, a lot more interesting than what it comes up with if you give it actual words.

  • I feel like this is an example of why it’s not ideal that the core dev team runs on an instance where basically three admins do more or less all the moderation for the entire site. This type of feature is probably one of the most-requested pain points for people who run most servers, but my guess is that the need for it is basically invisible to the .ml team, because their model works fine for them, so why would they bother?

    Of course they’ve got a right to work (or not work) on whatever they want, but if their goal is success and good moderation for most servers, this kind of scalability- and teamwork-enabling feature is super important.