This is essentially just another version of the smoking lesion problem, in that there is no connection, causal or otherwise, between the thing you care about and the action you take. Your decision theory has no effect whatsoever on your likelihood of dying, which is determined entirely by environmental factors that do not even attempt to predict you. All you are paying for is to determine whether or not you get a visit from the oracle.
ETA: Here’s a UDT game tree (see here for an explanation of the format) of this problem, under the assumption that the oracle visits everyone meeting his criteria and uses exclusive-or:
ETA2: More explanation: the colours are states of knowledge. Blue = the oracle asks for money, Orange = they leave you alone. Let’s say the probability of being healthy is α. If you Pay, the expected reward is α(−1000) + (1−α)·DEATH; if you Don’t Pay, the expected reward is α·0 + (1−α)·DEATH. Clearly (under UDT) paying is worse, by a term of 1000α.
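A minimal sketch of this expected-value comparison, for the sceptical. The numbers here are stand-ins: α is an illustrative probability, and DEATH is modelled as a large negative utility, since any finite proxy gives the same comparison:

```python
# Sketch of the expected-utility comparison above.
ALPHA = 0.99        # illustrative probability of being healthy
DEATH = -1_000_000  # hypothetical stand-in utility for dying
COST = -1000        # cost of paying the oracle

def expected_utility(pay: bool, alpha: float = ALPHA) -> float:
    """Expected utility of a policy, given that the oracle visits
    everyone meeting his criteria: you are healthy with probability
    alpha and die otherwise, independent of whether you pay."""
    healthy_branch = COST if pay else 0  # if healthy, only the payment matters
    dying_branch = DEATH                 # if unhealthy, you die either way
    return alpha * healthy_branch + (1 - alpha) * dying_branch

ev_pay = expected_utility(pay=True)
ev_dont = expected_utility(pay=False)
print(f"EV(Pay)       = {ev_pay}")
print(f"EV(Don't Pay) = {ev_dont}")
print(f"Difference    = {ev_pay - ev_dont}")  # always -1000 * alpha
```

Whatever value you plug in for DEATH, it appears in both branches and cancels out of the difference, which is exactly the point: the death term is policy-independent, so only the −1000α payment term separates the two choices.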