The normal way to resolve unilateralist curse effects is to see how many people agree / disagree, and go with the majority. (Even if the action is irreversible, as long as everyone knows that and has taken that into account, going with the majority seems fine.)
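Concretely, here's a minimal sketch of why aggregation helps, assuming (purely for illustration) ten voters with independent noisy estimates of an action that is in fact mildly harmful:

```python
import random

# Minimal simulation of unilateral action vs. majority vote.
# All numbers are illustrative assumptions: 10 agents, each seeing a
# noisy estimate of an action whose true value is mildly negative.
def trial(true_value, n_agents, noise, rng):
    estimates = [true_value + rng.gauss(0, noise) for _ in range(n_agents)]
    unilateral = any(e > 0 for e in estimates)               # acts if ANYONE likes it
    majority = sum(e > 0 for e in estimates) > n_agents / 2  # acts on majority approval
    return unilateral, majority

rng = random.Random(0)
trials = 10_000
uni = maj = 0
for _ in range(trials):
    u, m = trial(true_value=-1.0, n_agents=10, noise=2.0, rng=rng)
    uni += u
    maj += m
print(f"unilateral rule acts in {uni / trials:.0%} of trials")  # ~97%
print(f"majority rule acts in {maj / trials:.0%} of trials")    # ~5%
```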
Pro: It saves an expected life. Con: LW frontpage probably goes down for a day. Con: It causes some harm to trust. Pro: It reinforces the norm of actually considering consequences, and not holding any value too sacred.
Overall I lean towards the benefits outweighing the costs, so I support this offer.
ETA: I also have codes.
Pro: It reinforces the norm of actually considering consequences, and not holding any value too sacred.
Not an expert here, but my impression was that sometimes it can be useful to have “sacred values” in certain decision-theoretic contexts (like “I will one-box in Newcomb’s Problem even if consequentialist reasoning says otherwise”?). If I had to choose a sacred value to adopt, cooperating in epistemic prisoners’ dilemmas actually seems like a relatively good choice?
I will one-box in Newcomb’s Problem even if consequentialist reasoning says otherwise
I don’t think of Newcomb’s problem as being a disagreement about consequentialism; it’s about causality. I’d mostly agree with the statement “I will one-box in Newcomb’s Problem even if causal reasoning says otherwise” (though really I would want to add more nuance).
I feel relatively confident that most decision theorists at MIRI would agree with me on this.
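For concreteness, here's the expected-value arithmetic with the standard Newcomb payoffs; the predictor accuracy of 0.99 is an assumed figure, not anything from the thread:

```python
# Expected-value arithmetic for Newcomb's Problem with the standard
# payoffs ($1M in the opaque box iff one-boxing was predicted, $1k
# always in the transparent box). The 0.99 accuracy is an assumption.
accuracy = 0.99

ev_one_box = accuracy * 1_000_000 + (1 - accuracy) * 0
ev_two_box = accuracy * 1_000 + (1 - accuracy) * (1_000_000 + 1_000)

print(f"EV(one-box) = ${ev_one_box:,.0f}")  # $990,000
print(f"EV(two-box) = ${ev_two_box:,.0f}")  # $11,000
# Plain consequence-counting favors one-boxing; the pull toward
# two-boxing comes from causal reasoning (your choice can't change
# what's already in the boxes), which is the distinction drawn above.
```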
If I had to choose a sacred value to adopt, cooperating in epistemic prisoners’ dilemmas actually seems like a relatively good choice?
In a real prisoner’s dilemma, you get defected against if you do that. You also need to take into account how the other player reasons. (I don’t know what you mean by epistemic prisoner’s dilemmas, perhaps that distinction is important.)
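A minimal sketch with the textbook payoff numbers (illustrative, not anyone's actual stakes):

```python
# Standard illustrative prisoner's dilemma payoffs:
# (my_move, their_move) -> my_payoff
PAYOFF = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

print(PAYOFF[("C", "D")])  # 0: unconditional cooperation vs. a defector
print(PAYOFF[("D", "D")])  # 1: defecting at least avoids the sucker's payoff
```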
I also want to note that “take the majority vote of the relevant stakeholders” seems to be very much in line with “cooperating in epistemic prisoner’s dilemmas”, so if the offer did go through, I would expect this to strengthen that particular norm. See also this comment.
my impression was that sometimes it can be useful to have “sacred values” in certain decision-theoretic contexts
I would not put it this way. It depends on what future situations you expect to be in. You might want to keep honesty as a sacred value, and tell an ax-murderer where your friend is, if you think that one day you will have to convince aliens that we do not intend them harm in order to avert a huge war. Most of us don’t expect that, so we don’t keep honesty as a sacred value. Ultimately it does all boil down to consequences.
If we could figure out some reasonable way to poll people, I’d agree, but I don’t see a good way to do that, especially not on this timescale?
Presumably you could take the majority vote of comments left in a 2 hour span?
^ Yeah, that.
The policy of “if two people object then the plan doesn’t go through” sets up a unilateralist-curse scenario for the people against the plan—after the first person says no, every future person is now able to unilaterally stop the plan, regardless of how many people are in favor of it. (See also Scott’s comment.) Ideally we’d avoid that; majority vote of comments does so (and seems like the principled solution).
(Though at this point it’s probably moot given the existing number of nays.)
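To put rough numbers on that dynamic, a minimal calculation, assuming (purely for illustration) that each participant independently objects with some small probability:

```python
# Assume, purely for illustration, that each of n participants
# independently objects with probability p.
def p_blocked(n, p):
    """P(at least 2 of n object), i.e. the plan dies under this policy."""
    p_none = (1 - p) ** n
    p_one = n * p * (1 - p) ** (n - 1)
    return 1 - p_none - p_one

for n in (10, 50, 100):
    print(n, round(p_blocked(n, p=0.05), 3))
# 10 -> 0.086, 50 -> 0.721, 100 -> 0.963: the chance of a block climbs
# toward 1 as more people weigh in, however large the majority in favor.
```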
Let’s, for the hell of it, assume real money got involved. Like, it was $50M or something.
Now — who would you want to be able to vote on whether destruction happens if their values aren’t met with that amount of money at stake?
If it’s the whole internet, most people will treat it as entertainment or competition as opposed to considering what we actually care about.
But if we’re going to limit it only to people that are thoughtful, that invalidates the point of majority vote, doesn’t it?
Think about it; I’m not going to write out all the implications, but I think your faith in crowdsourced voting mechanisms, for decisions with a known short-term payoff weighed against unknown long-term costs that destroy unknown long-term gains, is perhaps misplaced...?
Most people are, factually speaking, not educated on all relevant topics, not fully numerate in statistics and payoff calculations, inclined to go with their feelings instead of analysis, and prone to short-term thinking.
I agree that in general this is a problem, but I think in this particular case we have the obvious choice of the set of all people with launch codes.
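For what it’s worth, the formal handle on this dispute is Condorcet’s jury theorem: majority vote amplifies whatever per-voter accuracy the electorate has, in either direction. A quick sketch with made-up competence levels:

```python
from math import comb

# Condorcet-style calculation: probability a strict majority of n
# independent voters is correct, given per-voter accuracy p.
# The 0.45 / 0.60 competence levels are made-up illustrations.
def p_majority_correct(n, p):
    return sum(comb(n, k) * p**k * (1 - p) ** (n - k)
               for k in range(n // 2 + 1, n + 1))

for p in (0.45, 0.60):
    print(p, [round(p_majority_correct(n, p), 3) for n in (1, 11, 101)])
# p = 0.45: accuracy decays with crowd size (~0.45, ~0.37, ~0.16)
# p = 0.60: accuracy grows with crowd size (~0.60, ~0.75, ~0.98)
```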
(Btw, your counterargument also applies to the unilateralist curse itself.)
I’m surprised that LW being down for a day isn’t on your list of cons. [ETA: or rather the LW home page]
It could also be on the list of pros, depending on how one uses LW.
I feel obligated to note that it will in fact only destroy the frontpage of LW, not the rest of the site.
Ah. I thought it was the entire site. (Though it did say “Frontpage” in the post.)
Good point, added. It doesn’t change the conclusion.
I’ll note that giving someone the launch codes merely increases the chance of the homepage going down.