So you imagine your current self in such a situation. So do I: and I reach the same conclusion as you:
“Right. No, I don’t want to give you $100.”
I then go on to show why that’s the case. Actually, the article might be better if I wrote out Bellman’s equation and showed how the terms involving “heads” having appeared drop out once you enter the “tails appeared” states.
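Concretely, the sketch I have in mind looks something like this (with $G$ standing in for whatever Omega would have paid out on heads; the exact figure doesn’t matter). The usual Bellman backup is

\[
V(s) \;=\; \max_{a \in A(s)} \Big[\, R(s,a) \;+\; \sum_{s'} P(s' \mid s, a)\, V(s') \,\Big].
\]

Evaluated before the flip, from a state $s_0$ in which a binding pre-action is available, committing to pay is worth $\tfrac{1}{2} G + \tfrac{1}{2}(-100)$ against $0$ for staying uncommitted, so committing wins whenever $G > \$100$. But evaluated from the state $s_{\text{tails}}$ you occupy once Omega has announced tails,

\[
V(s_{\text{tails}}) \;=\; \max\big\{\, R(s_{\text{tails}}, \text{pay}),\; R(s_{\text{tails}}, \text{refuse}) \,\big\} \;=\; \max\{-100,\, 0\} \;=\; 0,
\]

because $P(s_{\text{heads}} \mid s_{\text{tails}}, a) = 0$ for every action $a$: the “heads” terms simply never enter the backup.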
In other words, the quote from MBlume is just wrong: a rational agent is perfectly capable of wanting to precommit to a given action in a given situation while not actually performing that action in that situation. What is true, rather, is that a perfectly rational and sufficiently powerful agent, one that has pre-actions available that will put it in certain special states, will always perform the action.
The question is how one can actually precommit. Eliezer claims that he has precommitted. I am genuinely curious to know how he has done that in the absence of brain hacking.
Let me ask you a question. Suppose you were transported to Omega world (as I define it in the article). Suppose you then came to the same conclusions that Vladimir Nesov asks us to take as facts: that Omega is trustworthy, etc. Would you then seek to modify yourself such that you would definitely pay Omega $100?
So you imagine your current self in such a situation.
I don’t think we’re on the same page. I imagine myself in a different situation, one in which there is a tails-only coin. I reach the same result as you, but disagree as to whether it matches that of Vladimir’s counterfactual. There is no p = 0.5 involved.
But that isn’t nearly as interesting as the question of how one can actually precommit. Eliezer claims that he has precommitted. I am genuinely curious to know how he has done that in the absence of brain hacking.
Eliezer did not claim, in Vladimir’s counterfactual thread, that he has already precommitted. It would have surprised me if he had. I can recall Eliezer claiming that precommitment is not necessary to one-box on the Newcomb problem. Have you made the assumption that handing over the $100 proves that you have made a precommitment?
I don’t think we’re on the same page. I imagine myself in a different situation in which there is a tails only coin.
How is it different? If you get zapped to Omega world, then you are in some deterministic universe, but you don’t know which one exactly. You could be in a universe where Omega was going to flip tails (and some other things are true which you don’t know about), or one where Omega was going to flip heads (and some other things are true which you don’t know about), and you are in complete ignorance as to which set of universes you now find yourself in. Then either Omega will appear and tell you that you’re in a “heads” universe, and pay you nothing, or appear and tell you that you’re in a “tails” universe, in which case you will discover that you don’t want to pay Omega $100. As would I.
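To make the two vantage points explicit, here is a toy calculation (a sketch only: the heads payout H below is a stand-in number of my own, not a figure from this thread):

```python
# Toy comparison of the two vantage points: before you learn which universe
# you are in, and after Omega has announced "tails". H is a stand-in for
# whatever Omega would pay out on heads; it is illustrative, not from the thread.

H = 10_000      # hypothetical heads payout to an agent who would pay on tails
COST = 100      # what Omega asks for on tails
P_TAILS = 0.5   # fair coin

def value_before_flip(would_pay: bool) -> float:
    """Expected value of a policy, evaluated while still ignorant of the flip."""
    heads = H if would_pay else 0
    tails = -COST if would_pay else 0
    return (1 - P_TAILS) * heads + P_TAILS * tails

def value_after_tails(pay_now: bool) -> float:
    """Value of each action once Omega has announced 'tails'."""
    return -COST if pay_now else 0

print(value_before_flip(True))    # 4950.0 -> ex ante, being a payer looks good
print(value_before_flip(False))   # 0.0
print(value_after_tails(True))    # -100   -> ex post, paying just costs $100
print(value_after_tails(False))   # 0
```

The ex ante number is what makes precommitment attractive; the ex post numbers are what you face once the “tails” announcement has been made.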
Have you made the assumption that handing over the $100 proves that you have made a precommitment?
It proves that at least one of the following holds:
a) you are literally incapable of doing otherwise;
b) you genuinely get more benefit/utility from handing the $100 over than from keeping it, where “benefit” is a property of your brain that you rationally act to maximize; or
c) your actions are irrational, in the sense that you could have taken another action with higher utility.
When I refer to “you”, I mean “whoever you happen to be at the moment Omega appears^W^W you make your decision”, not “you as you would be if pushed forward through time to that moment”.
Let me ask you a question. Suppose you were transported to Omega world (as I define it in the article). Suppose you then came to the same conclusions that Vladimir Nesov asks us to take as facts: that Omega is trustworthy, etc. Would you then seek to modify yourself such that you would definitely pay Omega $100?
No situation that you or Vladimir have proposed here has been one in which I would seek to modify myself.
No situation that you or Vladimir have proposed here has been one in which I would seek to modify myself.
What is the smallest alteration to the situations proposed in which you would?