What I have never understood is why precommitment to a specific solution is necessary, either as a way of ‘agreeing to cooperate’ with possible simulations (supposing I posit simulations being involved), or more generally as a way of ensuring that I behave as an instantiation of the decision procedure that maximizes expected value.
There are three relevant propositions:
A: Predictor predicts I one-box iff I one-box
B: Predictor predicts I two-box iff I two-box
C: Predictor puts more money in box B than box A iff Predictor predicts I one-box
If I am confident that (A and B and C) then my highest-EV strategy is to one-box. If I am the sort of agent who reliably picks the highest-EV strategy (which around here we call a “rational” agent), then I one-box.
And if A and C are true, then since I one-box, Predictor predicts that I one-box, and so puts more money in box B.
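Concretely, here is the arithmetic I have in mind, as a minimal sketch. It assumes the standard Newcomb payoffs ($1,000 in box A; $1,000,000 in box B if one-boxing is predicted), which aren't specified anywhere in this thread, and it collapses my confidence in A and B into a single probability p that the prediction matches my actual choice:

```python
# Sketch of the expected-value comparison, under assumed payoffs:
# box A always holds $1,000, and box B holds $1,000,000 iff the
# Predictor predicted one-boxing. `p` is my confidence that the
# prediction matches my actual choice (propositions A and B).

BOX_A = 1_000
BOX_B_IF_PREDICTED_ONE_BOX = 1_000_000

def ev_one_box(p: float) -> float:
    # I take only box B; with probability p the Predictor saw that coming.
    return p * BOX_B_IF_PREDICTED_ONE_BOX

def ev_two_box(p: float) -> float:
    # I take both boxes; only with probability (1 - p) did the Predictor
    # wrongly expect one-boxing and fill box B anyway.
    return BOX_A + (1 - p) * BOX_B_IF_PREDICTED_ONE_BOX

if __name__ == "__main__":
    for p in (0.5, 0.51, 0.9, 0.99, 1.0):
        print(f"p={p:.2f}  one-box EV={ev_one_box(p):>12,.0f}  "
              f"two-box EV={ev_two_box(p):>12,.0f}")
    # One-boxing wins as soon as p exceeds ~0.5005, and nothing about
    # running this calculation at decision time requires precommitment.
```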
None of that requires any precommitment to figure out. What does precommitment have to do with any of this?
I don’t believe that anyone in this chain said that it was ‘necessary’, and for a strictly rational agent, I don’t believe it is.
However, I am a person, and not strictly rational. My mental architecture relies on caching and precomputed decisions, and decisions made under stress may not be the same as those made in contemplative peace and quiet. Precomputation and precommitment are ways of improving the odds that I will make a particular decision under stress.
I agree that humans aren’t strictly rational, and that decisions under stress are less likely to be rational, and that precommitted/rehearsed answers are more likely to arise under stress.