Thinking about freedom of speech and the latest “purges” on social networks, my view is this: I prefer freedom of speech even for people like homeopaths and anti-vaxxers, not because I consider their opinions inherently valuable, but because a decision algorithm that would ban them would probably also have banned Ignác Semmelweis two centuries ago.
Then I thought again and realized I don’t actually need such an old example. A decision algorithm that would today ban people who say “COVID-19 is just a flu” would, one year ago, have banned people who advised wearing face masks, wouldn’t it?
You make claims about decision algorithms in general which:
a) only apply to a specific decision algorithm (such as “Rational”Wiki’s, to the extent that there is such a thing) or
b) only apply to a class of decision algorithms (‘trust authority’ + [some definition of authority]).
“Decision algorithm” was my metaphor here for humans. Think of it as a rulebook for human censors, who are themselves smart and educated, although not extremely so.
Imagine a hypothetical situation where a government officially appoints censors, or where the social networks decide to optimize for something other than low costs, and censorship becomes a relatively high-status job. I mean, you’d have the power to shape the public discourse, and that is no small thing; give it a decent salary and many university-educated people would compete for it. But you wouldn’t hire the smartest ones, because they have better uses for their skills; you wouldn’t want to hire people with contrarian ideas; and to censor the overwhelming amounts of text online, you would need to hire a lot of people, so you couldn’t afford to be too picky even if your budget were unlimited.
The average censor, if such a job existed today, would realistically be some bored bureaucrat who doesn’t give a fuck about ideas and just applies rules mechanically, trying to cover his ass. But an idealized censor would be a smart and educated person, passionately opposed to pseudoscience… kinda like how I imagine the people who write on RationalWiki, except maybe less politically mindkilled. And with a button that would allow them to delete content anywhere online. I wondered what would happen as a consequence.