All four examples involve threats—one party threatening to punish another unless the other party obeys some rule—but the last threat (threatening to increase existential risk contingent on acts of forum moderation) sticks out as different from the others in several ways:
1. Proportionality. The punishment in the other examples seems roughly proportional to the offense ($500 may seem a bit high for one album, but is in the ballpark given the low chance of being caught), but over 6,000 deaths (in expectation) plus preventing who-knows-how-many people from ever living is disproportionate to the offense of deleting forum comments.
2. Narrow targeting. Most of the punishments are narrowly targeted at the offender—the offender is the one who suffers the negative consequences of the punishment, as much as possible (although there are some broader consequences—e.g., the rest of the forum is deprived of a banned poster’s comments). But the existential risk threat is not targeted at all—it’s aimed at the whole world. Threats to third parties are usually frowned upon—think of hostage taking, or threats to harm someone’s family.
3. Legitimate authority. There are laws & conventions regarding who has authority over what, and these limit what threats are seen as acceptable. Threats can be dangerous and destructive, because of the possibility that they will actually be carried out and because of the risk of escalating threats and counter-threats as people try to influence each other’s behavior, and these conventions about domains of authority help limit the damage. It’s widely accepted that the government is allowed to regulate driving and intellectual property, and to use fines as punishment. The law grants IP-holders rights to sue for money. Forum moderators are understood to have control over what gets posted on their forum, and who posts. But a single forum user does not have the authority to dictate what gets posted on a forum.
4. Accountability. Those with legitimate authority are usually accountable to a broader public. If citizens oppose a law they can replace the legislators with ones who will change the law, and since legislators know this and want to keep their jobs they pay attention to the citizens’ views when passing laws. Members of an online forum can leave en masse to another forum if they disagree strongly with the moderation policy, and forums take this into account when they set their moderation policy. But one person who threatens to increase existential risk if his preferred forum policy isn’t put into place is not accountable to anyone—it doesn’t matter how many people disagree with his preferred forum policy, or with his proposed punishment.
I don’t entirely endorse the first three threats, but they’re at least within the bounds of the kinds of threats that are commonly accepted. The fourth is not.
And 5. Ridiculousness. “He threatened what? … And they took it seriously?”
(Posted as an example of a way this is notably different from the typical example. Note that this is also my reaction, but I might well be wrong.)
My bet would be that he believes it is proportional. From where I’m standing, this looks like assigning too much impact to LW and to the censorship of posts. Note that #2 and #4 are particularly good arguments for why something of this nature was dumb regardless of importance.
Re #1: EY claimed his censorship caused something like 0.0001% risk reduction at the time, hence the amount chosen—it is there to balance his motivation out.
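For scale, the “over 6,000 deaths (in expectation)” figure in #1 falls out of a back-of-the-envelope multiplication. A sketch in Python—the ~6.8 billion world-population figure is my assumption (roughly the value at the time), not something stated in the thread:

```python
# Back-of-the-envelope: expected deaths implied by a claimed change in x-risk.
# Assumes a world population of ~6.8 billion (roughly the figure at the time).
world_population = 6.8e9
risk_delta = 0.0001 / 100  # "0.0001%" expressed as a probability, i.e. 1e-6

expected_deaths = world_population * risk_delta
print(round(expected_deaths))  # prints 6800
```

Any other claimed risk delta can be plugged into the same one-line product to see what it implies in expected deaths.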
Re #2: Letting Christians/Republicans know that they should be interested in passing a law is not the same as hostage taking or harming someone’s family. I agree that narrow targeting is preferable.
Re #3 and #4: I have a right to tell Christians/Republicans about a law they’re likely to feel should be passed—it’s a right granted to me by the country I live in. I can tell them about that law for whatever reason I want. That’s also a right granted to me by the country I live in. By definition this is legitimate authority, because a legitimate authority granted me these rights.
EY claimed his censorship caused something like 0.0001% risk reduction at the time, hence the amount chosen—it is there to balance his motivation out.
Citation? That sounds like an insane thing for Eliezer to have said.
After reviewing my copies of the deleted post, I can say that he doesn’t say this explicitly. I was remembering another commenter who was trying to work out the x-risk implications of having viewed the basilisk.
EY does say things that directly imply he thinks the post is a basilisk because of an x-risk increase, but he does not say what he thinks that increase is.
Edit: can’t reply, no karma. It means I don’t know if it’s proportional.
Nod. That makes more sense.
One thing that Eliezer takes care to avoid doing is giving his actual numbers for the existential possibilities. And that is an extremely wise decision. Not everyone has fully internalised the idea behind Shut Up and Do The Impossible! Even if Eliezer believed that all the work he and the SIAI may do would improve our existential expectation by only the kind of tiny amount you mention, it would most likely still be the right choice to go ahead and do exactly what he is trying to do. But not everyone is that good at multiplication.
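The “multiplication” being gestured at here is just an expected-value product. A minimal sketch, with both inputs purely hypothetical placeholders rather than anyone’s actual estimates:

```python
# Toy expected-value calculation; both inputs are illustrative placeholders.
# A minuscule probability shift times an astronomical stake is still huge.
future_lives_at_stake = 1e16  # hypothetical count of potential future people
risk_reduction = 1e-9         # hypothetical tiny improvement in survival odds

expected_lives_saved = future_lives_at_stake * risk_reduction
print(f"{expected_lives_saved:,.0f}")  # prints 10,000,000
```

The point survives almost any choice of inputs: shrink the probability by several orders of magnitude and the expected payoff remains enormous so long as the stake is astronomical.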
Does that mean you’re backing away from your assertion of proportionality?
Or just that you’re using a different argument to support it?
I’m pretty sure that this is false.
I’m fairly certain this is false.