In response to your second point re: free speech, a cross-post of a comment I made on Facebook on a related issue:
I’m not from the US, but despite knowing the common counter-arguments, I don’t understand how platform censorship is consistent with your 1st amendment.
Technically, the 1st amendment only prevents the government from censoring stuff; in practice, that has IIRC meant that, e.g., a recruitment Twitch stream run by the US military is arguably not allowed to block spam.
And if that isn’t allowed, surely a system where any powerful member of government can pressure any private platform holder to censor arbitrary stuff doesn’t make sense. All you’ve done is add a level of indirection to the government censorship. Here’s a story by Glenn Greenwald on the issue of platform censorship; he ultimately resigned from The Intercept because he was himself censored while trying to report on that same story.
If there is not a “state actor,” then the First Amendment does not apply.
I’m not a First-Amendment scholar. There is literature and case law on this subject, but I wouldn’t be able to summarize it well. That said, I’m fairly certain that government officials pressuring private platforms to remove certain content would not implicate the First Amendment. But it is a closer call than the Trump situation.
And, to be clear, I’m not in favor of all forms of platform censorship. I’m simply defending this instance of banning Trump from Twitter.
Without question, this is a hard question. Too many rationalists assume it is easy.
I think sticking to a strictly literal interpretation of the 1st amendment is problematic, because the politically and economically powerful seek, almost by virtue (or vice) of their positions, to amass ever more power. To paraphrase Gilmore’s widely known quote, the powerful interpret power-limiting rules as damage and route around them. And since full free speech is a strong check on the power of the powerful, whenever laws make censorship difficult or impossible, or public perception makes it politically unfeasible, we should expect those in power to pursue as much censorship as they materially can through whatever indirect means are available.
Therefore, it’s important to look at this from a consequentialist perspective and ask whether certain forms of speech are being effectively suppressed thanks to coordination among private agents to reduce them, and if so, to ask the classic cui bono? If the answer to that latter question is “those in power”, then for all practical purposes there was censorship, even if it’s censorship that carefully sidesteps the legal definition.
This doesn’t mean that Twitter banning Trump, or all the big tech players banning Parler, is itself wrong. It’s right, but a right that comes from mixing two wrongs, as argued by Matt Stoller, a well-known antitrust researcher who writes extensively on the topic, in his recent article A Simple Thing Biden Can Do to Reset America, from which I quote these two paragraphs (the article is well worth reading in its entirety, as is the one linked in the quote):
My view is that what Parler is doing should be illegal, because it should be responsible on product liability terms for the known outcomes of its product, aka violence. This is exactly what I wrote when I discussed the problem of platforms like Grindr and Facebook fostering harm to users. But what Parler is doing is *not* illegal, because Section 230 means it has no obligation for what its product does. So we’re dealing with a legal product and there’s no legitimate grounds to remove it from key public infrastructure. Similarly, what these platforms did in removing Parler should be illegal, because they should have a public obligation to carry all customers engaging in legal activity on equal terms. But it’s not illegal, because there is no such obligation. These are private entities operating public rights of way, but they are not regulated as such.
In other words, we have what should be an illegal product barred in a way that should also be illegal, a sort of ‘two wrongs make a right’ situation. I say ‘sort of’ because letting this situation fester without righting our public policy will lead to authoritarianism and censorship. What we should have is the legal obligation for these platforms to carry all legal content, and a legal framework that makes business models fostering violence illegal. That way, we can make public choices about public problems, and political violence organized on public rights-of-way certainly is a public problem.
Unless something like this is done to untangle the two sides of the problem, so that this outcome comes from two rights instead of two wrongs, there will always be the potential for a fully legal, fully 1st-amendment-respecting “Great Firewall of America” to grow and evolve to the point where free speech exists de jure, but not de facto. Conversely, if done right, that workaround would be closed and the risk would disappear, while the promotion of concretely damaging speech would still be effectively curbed.