Every decision theory I throw at it says either don’t pay or Error: Divide By Zero. Is this a trick question?
I don’t know what “error: divide by zero” means in this context. Could you please clarify? (If you’re suggesting that the problem is ill-posed under some decision theories, because the question assumes it is possible to make a choice while the oracle’s ability to predict you means you cannot really choose, why doesn’t that apply to the original problem as well?)
You want to figure out whether to do as the oracle asks or not. To do this, you would like to predict what will happen in each case. But you have no evidence concerning the case where you don’t do as it asks, because so far everyone has obliged. So, e.g., Pr(something good happens | decline oracle’s request) has Pr(decline oracle’s request) in the denominator, and that’s zero.
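Spelled out, that’s just the definition of conditional probability: Pr(something good happens | decline oracle’s request) = Pr(something good happens AND decline oracle’s request) / Pr(decline oracle’s request), and if you read Pr(decline oracle’s request) off the observed record, in which nobody has ever declined, the right-hand side is 0/0.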
I think you can say something similar about the original problem. P(decline oracle’s request) can (for the new problem) also be phrased as P(oracle is wrong). And P(oracle is wrong) is zero in both problems; there’s no evidence in either the original problem or the new problem concerning the case where the oracle is wrong.
Of course, the usual Newcomb arguments apply about why you shouldn’t consider the case where the oracle is wrong, but they don’t distinguish the problems.
That’s a forward-looking probability and is certainly not zero.
In the absence of evidence you just fall back on your prior.
In order to get Error: Divide By Zero, you have to be using a particular kind of decision theory and assume P(decline oracle’s request) = 0.
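(Toy sketch of that failure mode, with invented counts, assuming Pr(decline) is estimated purely as a raw observed frequency:)

```python
from fractions import Fraction

# Invented record: 100 past subjects, all of whom obliged the oracle,
# and something good happened to each of them.
counts = {
    ("oblige", "good"): 100,
    ("oblige", "bad"): 0,
    ("decline", "good"): 0,
    ("decline", "bad"): 0,
}
total = sum(counts.values())

def pr(event):
    # Probability of an event, estimated as a raw observed frequency.
    return Fraction(sum(n for outcome, n in counts.items() if event(outcome)), total)

p_decline = pr(lambda o: o[0] == "decline")                  # 0: never observed
p_good_and_decline = pr(lambda o: o == ("decline", "good"))  # also 0

try:
    p_good_given_decline = p_good_and_decline / p_decline
except ZeroDivisionError:
    print("Pr(good | decline) is undefined: Pr(decline) = 0 under raw frequencies")
```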
Your prior for what?
For the baseline, “underlying” probability of the oracle’s request being declined. Roughly speaking: the fact that you have never seen X happen does not mean that X will never happen (i.e. has probability zero).
This assumes you’re a passive observer, by the way: if you are actively making a decision whether to accept or decline the request, you can’t apply Bayesian probabilities to your own actions.
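(A minimal sketch of the “never seen it happen ≠ probability zero” point, assuming a Beta(1, 1) prior on the decline rate and 100 invented past observations; this is the passive-observer calculation per the caveat above, i.e. Laplace’s rule of succession, not anything specific to the oracle problem:)

```python
# Posterior probability that the next subject declines, after observing
# `accepts` accepts and `declines` declines, under a Beta(alpha, beta) prior
# on the decline rate.
def posterior_decline_probability(accepts, declines, alpha=1.0, beta=1.0):
    return (declines + alpha) / (accepts + declines + alpha + beta)

# 100 invented past subjects, all of whom obliged the oracle: the posterior
# probability of a decline is small, but it is not zero.
print(posterior_decline_probability(accepts=100, declines=0))  # 1/102, about 0.0098
```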