I really want to say that you should pay. Obviously you should precommit to not paying if you can, and then the oracle will never visit you to begin with unless you are about to die anyway. But if you can’t do that, and the oracle shows up at your door, you have a choice to pay and live or not pay and die.
Again, obviously it’s better not to pay, since then you never end up in this situation in the first place. But when it actually happens and you have to sit down and choose between paying the oracle to go away and dying, I would choose to pay.
It’s all well and good to say that some decision theory yields optimal outcomes. It’s another thing to actually implement it in yourself: to make sure every counterfactual version of yourself makes the globally optimal choice, even when there is a huge cost to some of them.
The traditional LW solution is to precommit, once and for all, to the following: whenever I find myself in a situation where I wish I had committed to acting in accordance with a rule R, I will act in accordance with R.
That’s great to say, but much harder to actually do.
For example, suppose Omega either pays people $1,000 or asks them to commit suicide, but it only asks those it knows with 100% certainty will not comply; everyone else gets the money.
The best strategy is to precommit to suicide if Omega asks; then Omega never asks and you simply collect the money. But if Omega did ask, I doubt most LessWrongers would actually go through with it.
So the standard formulation of a Newcomb-like paradox continues to work if you assume that Omega is merely 99% accurate.
Your formulation, however, doesn’t work that way. If you precommit to suicide when Omega asks, but Omega is sometimes wrong, then you commit suicide with 1% probability (in exchange for $990 in expected winnings). If you don’t precommit, then with 1% probability you get $1,000 for free. For most people, the second option is better.
Thus, the suicide strategy requires very strong faith in Omega, which is hard to come by in practice. Even if Omega actually is infallible, it’s hard to imagine evidence extraordinary enough to convince us that it really is.
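To make that comparison concrete, here is a minimal sketch of the expected-value arithmetic above, assuming the $1,000 payout, 99% predictor accuracy, linear utility in dollars, and an arbitrary placeholder figure for the value of staying alive (none of these numbers beyond the first two are from the discussion itself):

```python
# Expected-value sketch for the 99%-accurate Omega variant described above.
# Assumptions: utility is linear in dollars, and value_of_life is an
# arbitrary placeholder, not a figure anyone in the thread endorsed.

accuracy = 0.99               # probability Omega predicts your disposition correctly
payout = 1_000                # dollars Omega pays when it does not ask
value_of_life = 10_000_000    # placeholder dollar value of staying alive

# Strategy 1: precommit to suicide if asked.
# With probability `accuracy` Omega correctly predicts compliance and pays you;
# with probability 1 - accuracy it mistakenly asks, and you follow through.
ev_precommit = accuracy * payout + (1 - accuracy) * (-value_of_life)

# Strategy 2: no precommitment (you would refuse).
# With probability `accuracy` Omega correctly predicts refusal and asks (you
# refuse and get nothing); with probability 1 - accuracy it mistakenly pays you.
ev_refuse = (1 - accuracy) * payout

print(f"precommit: {ev_precommit:,.0f}  refuse: {ev_refuse:,.0f}")
```

With these placeholder numbers the precommitment strategy comes out far behind; it only wins once the predictor’s error rate is tiny relative to how much you value your life, which is the point about needing extraordinary confidence in Omega.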
(I think I am willing to bite the suicide bullet as long as we’re clear that I would require truly extraordinary evidence.)
Please Don’t Fight the Hypothetical. I agree with you if you are only 99% sure, but the premise is that you know Omega is right with certainty. Obviously that is implausible, but so is the entire situation: an omniscient being asking people to commit suicide, or an oracle that can predict whether you will die.
But if you like, you can substitute a lesser cost, such as Omega asking you to pay $10,000, or any amount of money large enough that you would have to seriously consider whether to just give it away.
I did say what I would do, given the premise that I know Omega is right with certainty. Perhaps I was insufficiently clear about this?
I am not trying to fight the hypothetical; I am trying to explain why one’s intuition cannot resist fighting it, which makes the answer I give seem counterintuitive.