• A_A@lemmy.world · 2 days ago

    The Court said the host has to:
    1- pre-check posts (i.e. do general monitoring)
    2- know who the posting user is (i.e. no anonymous speech)
    3- try to make sure the posts don’t get copied by third parties (um, like web search engines??)
    Basically, all three of those are effectively impossible.

    In my opinion: #3 seems effectively impossible, #2 is contrary to Lemmy’s philosophy, and #1 would demand a level of community supervision that would require different Lemmy software.

    • CameronDev@programming.dev · 2 days ago

      1 is impossible. You cannot screen every message sent; even screening only the posts would be a full-time, 24/7 job (roughly 3 full-time moderators working in shifts; see the back-of-envelope numbers below).
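
      A minimal sketch of that staffing arithmetic, assuming one moderator on duty at all times, 8-hour shifts, and a 40-hour standard work week (the shift and week lengths are assumptions, not from the comment):

      ```python
      # Back-of-envelope: staff needed for round-the-clock pre-screening.
      HOURS_TO_COVER = 24 * 7   # 168 hours/week of continuous coverage
      SHIFT_HOURS = 8           # assumed shift length
      STANDARD_WEEK = 40        # assumed full-time hours per moderator

      shifts_per_week = HOURS_TO_COVER / SHIFT_HOURS   # 21 shifts to fill
      fte_needed = HOURS_TO_COVER / STANDARD_WEEK      # ~4.2 full-timers

      print(f"{shifts_per_week:.0f} shifts/week, {fte_needed:.1f} FTE")
      # 3 moderators covering 168 hours means 56 hours each per week with
      # no days off, so "3x full time in shifts" is a floor, not a plan.
      ```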

      The big platforms will run everything through a black-box content-moderation system (sketched below) and lobby to keep fines minimal. Lemmy would be screwed.
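
      For concreteness, a minimal sketch of what such a pre-publication screening hook could look like. Every name here (Post, classify_text, the 0.5 threshold) is a hypothetical stand-in, not Lemmy’s or any platform’s real API:

      ```python
      from dataclasses import dataclass

      @dataclass
      class Post:
          author: str
          body: str

      def classify_text(body: str) -> float:
          """Stand-in for a remote black-box moderation model.
          Returns a risk score in [0, 1]; here just a toy keyword check."""
          flagged = {"counterfeit", "stolen"}
          return 1.0 if any(word in body.lower() for word in flagged) else 0.0

      def submit(post: Post, threshold: float = 0.5) -> str:
          # Pre-screening: the post is scored *before* it becomes visible.
          if classify_text(post.body) >= threshold:
              return "held for human review"
          return "published"

      print(submit(Post("alice", "selling counterfeit watches")))  # held for human review
      print(submit(Post("bob", "lovely weather today")))           # published
      ```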

    • misk@piefed.social · 1 day ago (edited)

      No new impact: the ruling is about online marketplaces, which already pre-scan / moderate every new listing due to existing legal obligations. A TechCrunch analysis argues the ruling is broad enough to apply outside that context, but I don’t see that in the text, and the case they make isn’t very solid. I think they recognise this as an attack on how the ad industry works, because checking whether an ad is legal before allowing it would be a huge boon to societies worldwide, but also an enormous cost for companies selling ad space.