Other objections exist: numerous ones, actually. Forget them. If you find that a certain set of circumstances makes it easier for you to decide not to pay the $100, or to pay it, change the circumstances.
I get the idea of changing the circumstances to make things difficult. Any assumptions of emulation must go! However, while that takes away a convenient way to explain things intuitively to someone who is familiar with the concept of emulation, it doesn’t really change anything. Explicitly saying ‘nothing is simulated’ just seems to obscure things. It’s the decision that is of interest, not the way Omega goes about getting it.
Wherever I look I end up finding that I have either obscured the situation but changed nothing, or changed the core premises given by Vladimir. Basically, the least convenient possible world is either “whichever framing is most confusing to explain to someone” or “a world that is different from Vladimir’s”.
I consider a world with hypercommunication and no quantum mechanics. In that situation I’d definitely be keeping my $100. But that coin doesn’t qualify as fair or even random. The whole scenario becomes a complicated way of saying that the coin has a probability of 1 of coming up tails and that Omega has given my future self the chance to come back and tell me so before I make the bet. Sure, in the least convenient world you wouldn’t accept that explanation. You also wouldn’t accept “Hey, that coin has both sides tails!”. But however you want to describe it, the coin just isn’t random.
Davidamann suggested that we ask our subject “what would need to change for them to change their belief?” My answer, as best as I can decipher it, is “the nature of randomness”.
Eliezer may very well give $100 whenever he meets this problem; so may Cameron; but I wouldn’t, probably not, anyway.
I will. If you pay me $100 and give me a time machine I’ll also go back in time and kill my grandfather when he was 12. And no, I am not suggesting my grandfather was an early bloomer. I am suggesting that this counterfactual mugging situation seems to amount to progressively stretching reality as far as we can, trying to thwart Newcomb, before we are forced to admit that we’re just creating a convoluted paradox.
The whole scenario becomes a complicated way of saying that the coin has a probability of 1 of coming up tails and that Omega has given my future self the chance to come back and tell me so before I make the bet.
The coin is deterministic, but your state of knowledge at time t does not include the information (tails will appear with probability 1). Your best estimate is that p(tails) = 1⁄2. Therefore, if you want to maximize your expected utility, given your bounded knowledge, you should, if possible, precommit to paying $100. I explore what that means.
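A minimal sketch of that expected-utility comparison, assuming the usual Counterfactual Mugging payoffs (the $10,000 heads payout is taken from Nesov’s original setup, not from anything stated in this exchange):

```python
# Expected utility at time t, before the coin outcome is known.
# Assumed payoffs: Omega pays $10,000 on heads iff you are the kind of agent
# who pays $100 on tails (figures from Nesov's original Counterfactual Mugging).

p_tails = 0.5  # your best estimate given bounded knowledge

ev_if_precommitted = (1 - p_tails) * 10_000 + p_tails * (-100)  # 4950.0
ev_if_refusing = 0.0  # Omega pays nothing to an agent who would keep the $100

print(ev_if_precommitted, ev_if_refusing)  # precommitting wins ex ante
```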
I am suggesting that this counterfactual mugging situation seems to amount to progressively stretching reality as far as we can, trying to thwart Newcomb, before we are forced to admit that we’re just creating a convoluted paradox.
The actual state of the universe, as I see it, involves a Tegmark ensemble existing. Thinking along those lines led me to the conclusion I gave here. However, I now believe this is the wrong way to think about the problem. If one is incapable of precommitting, then belief in a Tegmark ensemble at least leads me not to inflict huge suffering on other people. If, however, one can precommit, doing so is a utility improvement whether the Tegmark ensemble exists or not.
ETA: I need to make something extremely clear here. When I say “probability”, you should probably assume I mean it exactly as E.T. Jaynes does. I may occasionally slip up, but you will be ‘less wrong’ if you follow this prescription.
The coin is deterministic, but your state of knowledge at time t does not include the information (tails will appear with probability 1). Your best estimate is that p(tails) = 1⁄2. Therefore, if you want to maximize your expected utility, given your bounded knowledge, you should, if possible, precommit to paying $100. I explore what that means.
My state of knowledge pre-coin-toss does not include knowledge of what the coin is. However, with all the caveats, omniscience and counterfactuals thrown in, the state of knowledge that I have when I actually make the decision is that there is a 2⁄2 chance of the coin being tails. Omniscience plus hypercommunication, applied to a decision based on the known outcome, effectively gives full disclosure.
In such a fully determined universe I want my past self to have precommitted to the side of the bet that turned out to be the winner. Since precommitment would be redundant in this case, I can just keep my cash, shrug, and walk away. However, in this case p(heads) != 0.5. The probability of tails at the time of the decision is 1. No fair coin, just a naive Omega. Different counterfactual.
So you imagine your current self in such a situation. So do I, and I reach the same conclusion as you:
“Right. No, I don’t want to give you $100.”
I then go on to show why that’s the case. Actually, the article might be better if I wrote out Bellman’s equation and showed how the terms involving “heads” drop out once you enter the “tails appeared” states.
In other words, the quote from MBlume is just wrong: a rational agent is perfectly capable of wanting to precommit to a given action in a given situation while not performing that action in that situation. Rather, a perfectly rational and powerful agent, one that has pre-actions available that will put it in certain special states, will always perform the action.
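For concreteness, here is a minimal sketch of the Bellman-style backup described above (the $10,000 heads payout is again an assumed figure from the standard Counterfactual Mugging, not something given in this exchange):

```python
# Value-to-go over the states {"pre_flip", "heads_seen", "tails_seen"} for an
# agent whose tails policy is fixed in advance by pays_on_tails.

def value(state, pays_on_tails):
    if state == "heads_seen":
        return 10_000 if pays_on_tails else 0
    if state == "tails_seen":
        return -100 if pays_on_tails else 0
    # "pre_flip": expectation over the two successor states, p = 1/2 each
    return (0.5 * value("heads_seen", pays_on_tails)
            + 0.5 * value("tails_seen", pays_on_tails))

print(value("pre_flip", True))     # 4950: ex ante, the paying policy looks best
print(value("tails_seen", True))   # -100: the "heads" term has dropped out
print(value("tails_seen", False))  #    0: so, unbound, keeping the $100 wins
```

Once you are in the “tails appeared” state, the heads branch simply never enters the sum, which is why wanting to precommit and wanting to perform the action come apart.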
The question is how one can actually precommit. Eliezer claims that he has precommitted. I am genuinely curious to know how he has done that in the absence of brain hacking.
Let me ask you a question. Suppose you were transported to Omega world (as I define it in the article). Suppose you then came to the same conclusions that Vladimir Nesov asks us to take as facts: that Omega is trustworthy, etc. Would you then seek to modify yourself such that you would definitely pay Omega $100?
So you imagine your current self in such a situation.
I don’t think we’re on the same page. I imagine myself in a different situation, in which there is a tails-only coin. I reach the same result as you but disagree as to whether it matches Vladimir’s counterfactual. There is no p = 0.5 involved.
But that isn’t nearly as interesting as the question of how one can actually precommit. Eliezer claims that he has precommitted. I am genuinely curious to know how he has done that in the absence of brain hacking.
Eliezer did not claim that he has already precommitted in Vladimir’s counterfactual thread. It would have surprised me if he had. I can recall Eliezer claiming that precommitment is not necessary to one-box on the Newcomb problem. Have you made the assumption that handing over the $100 proves that you have made a precommitment?
I don’t think we’re on the same page. I imagine myself in a different situation, in which there is a tails-only coin.
How is it different? If you get zapped to Omega world, then you are in some deterministic universe, but you don’t know which one exactly. You could be in a universe where Omega was going to flip tails (and some other things are true which you don’t know about), or one where Omega was going to flip heads (and some other things are true which you don’t know about), and you are in complete ignorance as to which set of universes you now find yourself in. Then either Omega will appear and tell you that you’re in a “heads” universe, and pay you nothing, or appear and tell you that you’re in a “tails” universe, in which case you will discover that you don’t want to pay Omega $100. As would I.
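To put the same point in toy form (the universes and their “other facts” below are made up purely for illustration):

```python
# A toy ensemble of fully deterministic universes, each with its coin outcome
# already fixed. You start out maximally ignorant about which one you are in;
# Omega's announcement is the only update you get.

universes = [
    {"coin": "heads", "other_facts": "A"},
    {"coin": "heads", "other_facts": "B"},
    {"coin": "tails", "other_facts": "C"},
    {"coin": "tails", "other_facts": "D"},
]

# Before Omega says anything, your state of knowledge gives p(tails) = 1/2.
p_tails_prior = sum(1 for u in universes if u["coin"] == "tails") / len(universes)
print(p_tails_prior)  # 0.5

# Conditioning on the announcement leaves only the consistent universes.
def posterior(announced_outcome):
    consistent = [u for u in universes if u["coin"] == announced_outcome]
    return {u["other_facts"]: 1 / len(consistent) for u in consistent}

print(posterior("tails"))  # {'C': 0.5, 'D': 0.5}: tails is now certain, the rest unknown
```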
Have you made the assumption that handing over the $100 proves that you have made a precommitment?
It proves that one of the following holds:
a) you are literally incapable of doing otherwise;
b) you genuinely get more benefit/utility from handing the $100 over than from keeping it, where “benefit” is a property of your brain that you rationally act to maximize; or
c) your actions are irrational, in the sense that you could have taken another action with higher utility.
When I refer to “you”, I mean “whoever you happen to be at the moment Omega appears^W^W you make your decision”, not “you as you would be if pushed forward through time to that moment”.
Let me ask you a question. Suppose you were transported to Omega world (as I define it in the article). Suppose you then came to the same conclusions that Vladimir Nesov asks us to take as facts: that Omega is trustworthy, etc. Would you then seek to modify yourself such that you would definitely pay Omega $100?
No situation that you or Vladimir have proposed here has been one in which I would seek to modify myself.
What is the smallest alteration to the situations proposed in which you would?
My state of knowledge pre-coin-toss does not include knowledge of what the coin is. However, with all the caveats, omniscience and counterfactuals thrown in, the state of knowledge that I have when I actually make the decision is that there is a 2⁄2 chance of the coin being tails. Omniscience plus hypercommunication, applied to a decision based on the known outcome, effectively gives full disclosure.
I retract this statement. I suspect that the addition of the ‘least convenient world’ criterion has overwhelmed my working memory, given my current level of expertise in exotic decision making. I would need to think more on exactly what the implications of the novel restrictions are before I could be confident.
I will say that in the situation you describe I would give the $100. If the stakes were raised significantly, however, it would be worth my while to sit down with pen and paper, go over the least convenient possible physical laws of the universe (which would then be known to me), and consider the problem in more depth. Discarding quantum mechanics in particular confuses me.
What I do assert with more confidence is that I would never wish to modify myself such that I made different decisions than I do. I would, rather, just make the decision itself.