• chicken@lemmy.dbzer0.com · 2 days ago (edited)

    Cory misrepresents the arguments (well basically hides them) in order to not have to face any material criticism and turns them into “you just don’t like these people” which frames the criticism as emotional and not rational.

    Well, isn’t it? IMO some of the most popular arguments against generative AI are founded in misinformation and/or lack strong rational defenses. There are better arguments further down the list, but that doesn’t stop anyone, because in the court of public opinion rhetoric and emotion trump rational argument. This becomes really obvious when you try to publicly confront the failings of these arguments: in most cases the discussion devolves into personal attacks immediately, as people interpret the argument as support for the enemy. If Cory Doctorow is being a little weaselly and failing to lay out an ethical position with full forthrightness, that’s probably because doing so is a recipe for making everyone angry at you, regardless of your views. Since he’s a public figure attempting to persuade people who mostly agree with him anyway and aren’t trying to think too hard, it would be a stupid choice.

    It would be great if we had a culture where it was safe to lay out and consider ethical ideas on their merits without holding anything back, but we just don’t, and it’s unfair to demand that of people who have to answer to the public, because every single one of them would get crushed if they actually did it.

    • mindbleach@sh.itjust.works · 23 hours ago

      There’s a gradient from legitimate, damning criticism of corporate behavior, to vaguely defensible moralizing against the technology itself, to identitarian chest-beating as in-group performance. The latter mindset is by far the most common, and it freely borrows from the other two.

      I’ve been legitimately surprised when makers on YouTube casually or comedically admit they vibe-coded their latest gizmo. It must invite harassment. Like, okay, you built an animatronic mousepad that makes an aimbot out of your actual hand, but there’s one drop of AI in there, so it’s slop. And if anyone elaborated an ethical justification for using a program that does a thing, that would be “protesting too much” — as if trying to pre-empt the tired haranguing were itself consciousness of guilt.

      The nature of bad faith is that there is no right answer. The podcast Final v3 rightly mocks Coca-Cola for thinking their text-to-video ad deserves a “behind the scenes” featurette, but when Julian mentions efforts to create models from original licensed input, he immediately scoffs, “like that’d be ‘ethical.’” My guy, if no solution is relevant, the complaint is pretense.