I think what really does my head in about this problem is that, although I may right now be motivated to make a commitment by the hope of winning the 10K, my commitment cannot rely on that motivation: when it comes to the crunch, that possibility has evaporated and the associated motivation has gone with it. I can only make an effective commitment if I have something more persistent, like the suggested $1000 contract with a third party. Without that, I cannot trust my future self to follow through, because the reasons I currently have for wanting it to follow through will no longer apply.
MBlume stated that if you want to be known as the sort of person who’ll do X given Y, then when Y turns up, you’d better do X. That’s a good principle, but it only applies if, at the point of being asked for the $100, you still care about being known as that sort of person; in other words, if you expect the scenario to recur in some form. The same goes for Eliezer’s reasoning about how to design a self-modifying decision agent, which will have to make many future decisions of the same kind.
Just wanting the 10K isn’t enough to make an effective precommitment. You need some motivation that will persist in the face of no longer having the possibility of the 10K.
It seems to me the answer becomes more obvious when you stop imagining the counterfactual you who would have won the $10000, and start imagining the 50% of superpositions of you who are currently winning the $10000 in their respective worlds.
Every implementation of you is you, and half of them are winning $10000 as the other half lose $100. Take one for the team.
Sorry, but I’m not in the habit of taking one for the quantum superteam. And I don’t think that it really helps to solve the problem; it just means that you don’t necessarily care so much about winning any more. Not exactly the point.
Plus we are explicitly told that the coin is deterministic and comes down tails in the majority of worlds.
Sorry, but I’m not in the habit of taking one for the quantum superteam.
If you’re not willing to “take one for the team” of superyous, I’m not sure you understand the implications of “every implementation of you is you.”
And I don’t think that it really helps to solve the problem;
It does solve the problem, though, because it’s a consistent way to formalize the decision so that, for problems like this, you come out ahead on average.
it just means that you don’t necessarily care so much about winning any more. Not exactly the point.
I think you’re missing the point here. Winning in this case is doing the thing that on average nets you the most success for problems of this class, one single instance of it notwithstanding.
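To put rough numbers on that "on average" claim, here is a minimal sketch of my own (not part of the original exchange), assuming the standard setup: a fair coin, a $10,000 payout on heads that Omega gives only to agents whose policy is to pay when asked, and a $100 demand on tails.

```python
# Sketch: average payoff of an "always pay when asked" policy vs. "never pay",
# under the assumed counterfactual-mugging setup described above.
import random

def play(pays_on_tails: bool) -> int:
    """One round: fair coin; Omega pays out on heads only if your policy is to pay on tails."""
    heads = random.random() < 0.5
    if heads:
        return 10_000 if pays_on_tails else 0
    return -100 if pays_on_tails else 0

def average_payoff(pays_on_tails: bool, trials: int = 100_000) -> float:
    return sum(play(pays_on_tails) for _ in range(trials)) / trials

print("always pay:", average_payoff(True))   # roughly 0.5*10_000 + 0.5*(-100) = 4_950
print("never pay: ", average_payoff(False))  # 0
```

Under those assumptions, precommitting wins in expectation ($4,950 versus $0); the dispute here is whether that expectation still binds you once you know you are in a tails world.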
Plus we are explicitly told that the coin is deterministic and comes down tails in the majority of worlds.
And this explains why you’re missing the point. We are told no such thing. We are told it’s a fair coin and that can only mean that if you divide up worlds by their probability density, you win in half of them. This is defined.
What seems to be confusing you is that you’re told “in this particular problem, for the sake of argument, assume you’re in one of the worlds where you lose.” It states nothing about those worlds being overrepresented.
We are told no such thing. We are told it’s a fair coin and that can only mean that if you divide up worlds by their probability density, you win in half of them. This is defined.
No, take another look:
in the overwhelming measure of the MWI worlds it gives the same outcome. You don’t care about a fraction that sees a different result, in all reality the result is that Omega won’t even consider giving you $10000, it only asks for your $100.