What you say sounds good, and this isn’t rhetorical, but who gets to decide what constitutes “harmful” then? Isn’t that still the same problem that could be weaponized against free speech?
Those who are harmed decide. Section 230 is about protecting companies from lawsuits filed by users.
The whole “end of free speech” issue comes not so much from government censorship (that’s still firmly restricted by the First Amendment) but from companies themselves banning any content or accounts that might get them sued.
But if that risk is limited to what they recommend outside a user’s direct Boolean search and filters, they can still host content without concern. They just need to know and approve exactly what their algorithms are pushing onto people.
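To make that line concrete, here is a minimal sketch (Python, with hypothetical names and toy data, assuming the liability rule described above): the first function returns only what the user’s own Boolean query asked for, while the second is the platform deciding what to push, which is the editorial act that would become actionable.

    # Minimal sketch of the proposed liability line. All names and data
    # are hypothetical illustrations, not anyone's real system.

    def boolean_search(posts, required_terms):
        # User-directed retrieval: return exactly what the user's query
        # matches, in no platform-chosen order. Hosting and serving this
        # would stay protected under the proposal.
        return [p for p in posts
                if all(t in p["text"].lower() for t in required_terms)]

    def recommend(posts, engagement_model):
        # Platform-directed amplification: the platform's own model picks
        # the ordering. That choice is what would carry liability.
        return sorted(posts, key=engagement_model, reverse=True)

    posts = [
        {"text": "Vaccine schedules explained by a pediatrician", "outrage": 0.1},
        {"text": "Miracle cure THEY don't want you to know about", "outrage": 0.9},
    ]

    # Same corpus, two different legal postures under the proposal:
    print(boolean_search(posts, ["vaccine"]))        # the user asked for this
    print(recommend(posts, lambda p: p["outrage"]))  # the platform chose this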
Really? One of the early major moves (so stupidly transparent, and it only reinforces the concern and urgency) was to go after Facebook, which agreed to appoint a government representative to its board, something unprecedented outside state-controlled entities. Threats have been made and lawsuits filed by Trump personally, or by his new attack dog the “DOJ”, against most major media organizations, including those who produce content and/or control distribution and algorithms. Many of those orgs have paid “fines” or tributes to the government in power to remain in favor, and have altered their content, presentation, and/or coverage. This is a naked violation of freedom of speech and of the press.
Back to the point: if enormous and otherwise powerful companies fold this easily to an administration, in a matter of months, then there is no “independence.” Government censorship is hardly theoretical, as you present it, but already in place, and it puts whoever defines “dangerous” in an unsustainably tempting position of power, ripe for future abuse. This is existentially concerning no matter your political stripes, because it’s the end of the political experiment that was the US.
Yes, that’s all true. But it’s a separate problem that’s happening anyway, 230 or otherwise.
Yeah, we need to be careful to distinguish policy objectives from policy language.
“Hold megacorps responsible for harmful algorithms” is a good policy objective.
How we hold them responsible is an open question. Legal recourse is just one option. And it’s an option that risks collateral damage.
But why are they able to profit from harmful products in the first place? Lack of meaningful competition.
It really all comes back to the enshittification thesis. Unless we force these firms to open themselves up to competition, they have no reason to stop abusing their customers.
“We’ll get sued” gives them a reason. “They’ll switch to a competitor’s service” also gives them a reason, and one they’re more likely to respect — if they see it as a real possibility.
Obviously the way the previous commenter worded it would infringe on the platforms’ free speech; it’s only workable if we replace “harmful” with “illegal” (e.g. libelous).