The expected value under FDT would be:
1-boxing: 0.99 * big prize + 0.01 * 0
2-boxing: 0.99 * small prize + 0.01 * (small prize + big prize)
Making a decision based on that will depend on the specifics of the problem (how much bigger is the big prize than the small prize?) and on your circumstances (what is your utility function with respect to the big and small prizes?).
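To make that concrete, here is a minimal sketch of the expected-value comparison in Python, assuming the classic Newcomb-style stakes of $1,000 for the small prize and $1,000,000 for the big prize and a linear utility function; the dollar amounts and the linearity assumption are placeholders for illustration, not part of the original problem:

```python
# Expected value of each strategy under FDT, given a 99%-accurate oracle.
# Prize values and linear utility are illustrative assumptions.
ORACLE_ACCURACY = 0.99
SMALL_PRIZE = 1_000       # hypothetical
BIG_PRIZE = 1_000_000     # hypothetical

def ev_one_box(accuracy, small, big):
    # Oracle predicts 1-boxing with prob `accuracy` -> you get the big prize;
    # otherwise the big box is empty and you get nothing.
    return accuracy * big + (1 - accuracy) * 0

def ev_two_box(accuracy, small, big):
    # Oracle predicts 2-boxing with prob `accuracy` -> only the small prize;
    # otherwise it guessed wrong and you get both prizes.
    return accuracy * small + (1 - accuracy) * (small + big)

print(ev_one_box(ORACLE_ACCURACY, SMALL_PRIZE, BIG_PRIZE))  # 990000.0
print(ev_two_box(ORACLE_ACCURACY, SMALL_PRIZE, BIG_PRIZE))  # 11000.0
```

With those numbers 1-boxing wins by a wide margin; more generally it comes out ahead whenever 0.98 * big prize > small prize, which is where the utility-function caveat above matters.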
Hmm, does this not depend on how the Oracle is making its decision? I feel like there might be versions of this that look more like the smoking lesion problem – for instance, what if the Oracle is simply using a (highly predictive) proxy to determine whether you’ll 1-box or 2-box? (Say, imagine if people from cities 1-box 99% of the time, and people from the country 2-box 99% of the time, and the Oracle is just looking at where you’re from).
It seems like this might become a discussion of Aleatory vs Epistemic Uncertainty. I like this way of describing the distinction between the two (from here—pdf):
In distinguishing between aleatory variability and epistemic uncertainty it can be helpful to think how you would describe, in words, the parameter under consideration. If the parameter sometimes has one value and sometimes has another value, then it has aleatory variability. That is, the variability is random. If the parameter always has either one value or another, but we are not sure which it is, then the parameter has epistemic uncertainty.
I believe that the differences between classical decision theory and FDT only occur in the context of aleatory uncertainty (although in some formulations of Newcomb's paradox there's no actual uncertainty). That is, if you are in an epistemically uncertain environment, then FDT and classical decision theory will agree on all problems (hopefully by saying this I can cause someone to come up with a juicy counterexample).
In your example, it is unclear to me what sort of uncertainty the problem possesses because I don’t know enough about the oracle.
In the simple example where a quantum coin with a 99% chance of coming up heads is flipped to determine whether the oracle gives the right answer or the wrong answer, the answer I gave above holds: use expected value under the assumptions of FDT; classical decision theory will lead you to 2-box, and that would lower your expected gains.
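If it helps, here is a rough Monte Carlo sketch of that quantum-coin version, using the same hypothetical prize values as above; it just checks that a committed 1-boxer averages more than a committed 2-boxer against a 99%-accurate oracle:

```python
import random

# Simulate a 99%-accurate oracle: a quantum coin makes its prediction correct
# with probability 0.99 and wrong otherwise.
# Prize values are the same illustrative assumptions as in the sketch above.
ACCURACY, SMALL, BIG = 0.99, 1_000, 1_000_000

def play(one_boxer: bool) -> int:
    prediction_correct = random.random() < ACCURACY
    predicted_one_box = one_boxer if prediction_correct else not one_boxer
    # The oracle fills the big box only if it predicts 1-boxing.
    big_box = BIG if predicted_one_box else 0
    return big_box if one_boxer else SMALL + big_box

trials = 100_000
print(sum(play(True) for _ in range(trials)) / trials)   # ~990,000 for 1-boxers
print(sum(play(False) for _ in range(trials)) / trials)  # ~11,000 for 2-boxers
```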
In your example relying on demographic information, it will depend a bit on what sorts of information count as "demographic" in nature. If you are, in this moment, by reading this comment on LessWrong, forming the sort of self that will result in you 1-boxing or 2-boxing, and that information is also an input to this sort of oracle, then I encourage you to 1-box on the oracle you had in mind.