I think the proposed definition (the one inspired by Ramsey, de Finetti, et al.) assumes too much about values and our knowledge of our values. Let’s consider your example:
For example, suppose that E and F are both the event “humanity survives for millions of years” and you have the opportunity to push a button that will guarantee this with probability p and otherwise guarantee that this does not happen.
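As I read it (my own gloss, not a formula from the post), the elicitation behind such buttons equates a plain bet at degree of belief q with the p-button, assuming a single fixed utility u(F):

```latex
q \, u(F) \;=\; p \, u(F)
\quad\Longrightarrow\quad
q = p
```

So the indifference point p is supposed to read the belief q right off, provided u(F) stays put. Here is what happens when it doesn’t: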
“Okay Djinni,” I say, “since your yellow button gives a 90% probability that humanity survives for millions of years, I’ll go ahead and push—”
“Mwahah—oops, I mean, what are you waiting for, torekp?”
“Hey! No fair, this button guarantees that we survive, but in horrible agony! Let me look at this purple button instead. Ah, that’s better, people survive in complete comfort. I’ll go ahead—”
“Mwah—er, never mind me, I’m just getting over a cold.”
“Like hell you are! I just noticed that the purple button puts people in near-stasis. They’ll live for millions of years on my clock, but their subjective time is nearly nil! OK, purple button’s out; let’s look at green...”
This could go on ad infinitum, or until we figure out exactly what our terminal values are, which takes even longer. Part of the problem is that the value of F, the event I originally wanted, could depend on the value of E, the objectively-random process we’re betting on. But here’s where it gets really interesting: part of my reason for varying my valuation of F based on E may be the very fact of objective risk associated with E.
Maybe F is more exciting if I obtain it in a risky way. Or maybe it becomes a lesser achievement for me when it is a matter of luck rather than pure skill. Either way, nonlinearities and discontinuities threaten to pop up and ruin the suggested interpretation of my betting choices as an expression of my epistemic probability.
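To make the worry concrete, here is a toy model; the thrill term and all the numbers are my own invention, purely illustrative. If winning feels better the riskier the gamble was, the indifference point stops tracking belief:

```python
# Toy model (my own construction): when the utility of F depends on the
# riskiness of the gamble that produced it, the indifference point between
# a plain bet and an objective-chance button no longer reveals belief.
from bisect import bisect_left

def thrill_utility(p, base=1.0, thrill=0.5):
    """Hypothetical nonlinearity: winning feels better the less likely it was."""
    return base + thrill * (1.0 - p)

q = 0.9                    # my actual degree of belief that the plain bet pays off
plain_bet_value = q * 1.0  # the plain bet pays the base utility 1.0 when it wins

# Expected value of the p-button once the thrill term is included.
# p * thrill_utility(p) = 1.5p - 0.5p^2, which is increasing on [0, 1].
ps = [i / 10000 for i in range(10001)]
button_values = [p * thrill_utility(p) for p in ps]

# The p at which I'm indifferent -- what the definition calls my "probability":
p_star = ps[bisect_left(button_values, plain_bet_value)]
print(f"true belief q = {q}, inferred probability p* = {p_star:.4f}")
# -> p* is roughly 0.829, not 0.9: the betting choice misreports my epistemic state.
```

And that is the well-behaved case: the thrill term here is smooth, so p* merely drifts away from q. A discontinuous valuation (say, luck-won achievements counting for nothing) could leave no indifference point at all.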