I will one-box in Newcomb’s Problem even if consequentialist reasoning says otherwise
I don’t think of Newcomb’s problem as a disagreement about consequentialism; it’s a disagreement about causality. I’d mostly agree with the statement “I will one-box in Newcomb’s Problem even if causal reasoning says otherwise” (though really I would want to add more nuance).
I feel relatively confident that most decision theorists at MIRI would agree with me on this.
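For concreteness, here is the standard expected-value sketch, assuming the usual payoffs (a million in the opaque box, a thousand in the transparent box) and a predictor with accuracy p; the numbers are the textbook ones, not something from this exchange:

$$
\mathbb{E}[\text{one-box}] = p \cdot 1{,}000{,}000, \qquad
\mathbb{E}[\text{two-box}] = (1-p) \cdot 1{,}001{,}000 + p \cdot 1{,}000.
$$

With p = 0.99 that is roughly 990,000 versus 11,000, so one-boxers end up richer as long as the predictor is even modestly accurate. The causal disagreement is precisely over whether you may condition on your own choice like this: once the boxes are filled, two-boxing gets you a thousand more whatever is inside them.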
If I had to choose a sacred value to adopt, cooperating in epistemic prisoners’ dilemmas actually seems like a relatively good choice?
In a real prisoner’s dilemma, you get defected against if you cooperate unconditionally. You also need to take into account how the other player reasons. (I don’t know what you mean by epistemic prisoner’s dilemmas; perhaps that distinction is important.)
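To make that concrete, here is the textbook payoff matrix (row player’s payoff first; the particular numbers are illustrative, not from this discussion):

$$
\begin{array}{c|cc}
 & \text{Cooperate} & \text{Defect} \\ \hline
\text{Cooperate} & (3,\,3) & (0,\,5) \\
\text{Defect} & (5,\,0) & (1,\,1)
\end{array}
$$

An agent who cooperates unconditionally, as a sacred value, collects the sucker payoff of 0 against any defector. Cooperating only does better when the other player’s choice is correlated with yours, for instance because they reason the way you do, which is why how the other player reasons matters.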
I also want to note that “take the majority vote of the relevant stakeholders” seems to be very much in line with “cooperating in epistemic prisoner’s dilemmas”, so if the offer did go through, I would expect this to strengthen that particular norm. See also this comment.
my impression was that sometimes it can be useful to have “sacred values” in certain decision-theoretic contexts
I would not put it this way. It depends on what future situations you expect to be in. You might want to keep honesty as a sacred value, and so truthfully tell an ax-murderer where your friend is, if you think that one day you will have to convince aliens that we do not intend them harm in order to avert a huge war, and that only an agent with an unbroken record of honesty could do so. Most of us don’t expect to be in that situation, so we don’t keep honesty as a sacred value. Ultimately it does all boil down to consequences.
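One rough way to put the tradeoff (my gloss, with made-up symbols, not a precise model): keep the sacred value only if

$$
P(\text{high-stakes credibility case}) \cdot V_{\text{credibility}} \;>\; \mathbb{E}[\text{cost of inconvenient truth-telling along the way}],
$$

i.e. it is just an expected-value comparison over the situations you actually anticipate.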