Section 230 doesn’t need to be repealed; it only needs to be amended.
It basically says that online platforms can’t be held liable for the content their users post.
However, that was put in place before black-box algorithms were put in charge of people’s feeds and started literally hacking our brains to keep us outraged, afraid, and engaged.
It needs to be amended to hold companies liable for content their algorithms recommend to people. It’s one thing to allow people to post whatever they want. That needs to be preserved. But if a site “recommends” something that’s harmful, it should be held responsible for that recommendation.
What stops them from using this to destroy the Fediverse? Every instance will be liable for every single thing that gets hosted on the server. All they need to do is have a patsy post some illegal content, and now the instance can be taken to court.
Yes, that’s why repealing is the wrong thing to do.
As I said, amend it.
The Fediverse doesn’t have any black-box algorithms that recommend content. With a flat repeal of 230 it would be in danger; with my amendment it wouldn’t.
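To make that concrete, here’s a rough sketch of the difference (my own toy illustration in Python, not any platform’s actual code; the names and fields are made up):

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    timestamp: float
    predicted_engagement: float  # opaque score from the platform's own model

def chronological_feed(posts, followed):
    """Fediverse-style home timeline: only accounts the user chose to follow,
    newest first. Nothing here is the platform's choice."""
    return sorted(
        (p for p in posts if p.author in followed),
        key=lambda p: p.timestamp,
        reverse=True,
    )

def engagement_feed(posts):
    """Recommender-style feed: anything on the platform, ranked by whatever
    a black-box model predicts will keep the user scrolling."""
    return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)

posts = [
    Post("alice", "hello", timestamp=100.0, predicted_engagement=0.1),
    Post("bob", "rage bait", timestamp=90.0, predicted_engagement=0.9),
]
print(chronological_feed(posts, followed={"alice"}))  # only alice, by time
print(engagement_feed(posts))                         # bob first, by score
```

The first function only shows the user what they explicitly subscribed to; the second is the platform actively deciding what to push, and that decision is what my amendment would make them answerable for.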
You literally didn’t say it was wrong.
And they aren’t amending it. They’re repealing it. The internet is going to be destroyed, and you’re wishcasting about something irrelevant that isn’t on the table. I think we should probably focus on how absolutely fucking horrible a repeal would be.
That’s true, I didn’t use the word “wrong”; I only implied it.
Sorry for the confusion.
What you say sounds good, and this isn’t rhetorical, but who gets to decide what constitutes “harmful” then? Isn’t that still the same problem that could be weaponized against free speech?
Those who are harmed decide. 230 is about protecting companies from lawsuits filed by users.
The whole “end of free speech” issue comes not so much from government censorship (that’s still firmly restricted by the First Amendment) but from companies themselves banning any content or accounts that might get them sued.
But if that risk is limited to what they recommend outside a user’s direct Boolean search and filters, they can still host content without concern. They just need to know and approve exactly what their algorithms are pushing onto people.
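To be concrete about what I mean by a direct Boolean search and filters, here’s a toy example (illustrative only; the data and field names are mine):

```python
# A user-driven Boolean filter: every result is traceable to the user's own
# query terms, so nothing returned here counts as a "recommendation".
posts = [
    {"id": 1, "tags": {"cats", "photography"}},
    {"id": 2, "tags": {"politics"}},
    {"id": 3, "tags": {"cats", "politics"}},
]

def boolean_filter(posts, include, exclude=frozenset()):
    """Return posts matching ALL of `include` and NONE of `exclude`."""
    return [
        p for p in posts
        if include <= p["tags"] and not (exclude & p["tags"])
    ]

# User asks for: cats AND NOT politics  ->  only post 1 comes back
print(boolean_filter(posts, include={"cats"}, exclude={"politics"}))
```

Results from a query like that are the user’s doing. Anything the site pushes outside of it is the site’s doing, and that’s the piece they should have to stand behind.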
Really? One of the early major moves (so stupidly transparent, and it only reinforces the concern and urgency) was to go after Facebook, which agreed to appoint a government representative to its board, something unprecedented outside of state-controlled entities. Threats have been made and lawsuits filed by Trump personally, or by his new attack dog the “DOJ,” against most major media organizations, including those that produce content and/or control distribution and algorithms. Many of those orgs have paid “fines,” or tributes, to the government in power to remain in favor, and have altered their content, presentation, and/or coverage. This is a naked violation of freedom of speech and of the press.
Back to the point: if enormous and otherwise powerful companies fold this easily, within months of a new administration, then there is no “independence,” and government censorship is hardly theoretical, as you’d present it; it’s already in place. That puts whoever gets to define “dangerous” in a temptingly powerful position, one that’s ripe for future abuse and unsustainable. This is existentially concerning no matter your political stripes, because it’s the end of the political experiment that was the US.
Yes, that’s all true. But it’s a separate problem that’s happening anyway, 230 or otherwise.
Yeah, we need to be careful about distinguishing policy objectives from policy language.
“Hold megacorps responsible for harmful algorithms” is a good policy objective.
How we hold them responsible is an open question. Legal recourse is just one option. And it’s an option that risks collateral damage.
But why are they able to profit from harmful products in the first place? Lack of meaningful competition.
It really all comes back to the enshittification thesis. Unless we force these firms to open themselves up to competition, they have no reason to stop abusing their customers.
“We’ll get sued” gives them a reason. “They’ll switch to a competitor’s service” also gives them a reason, and one they’re more likely to respect — if they see it as a real possibility.
Obviously the way the previous commenter worded it would infringe on the platforms’ free speech; it’s only workable if we replace “harmful” with “illegal” (e.g., libelous).
Whelp.
Ya know, the Republicans have been talking about repealing and replacing the ACA since it passed, and we’re still waiting on their version to replace it.
So… yeah… I’m sure that repealing 230 is just the first step… they’ll let us know ASAP how they’re going to replace it or, as you suggest, amend it. Any day now.
I never mentioned repeal and replace.
As I said, don’t repeal it, amend it.