Reformulation to weed out uninteresting objections: Omega knows the expected utility, according to your preferences, if you go on without its intervention, U1, and the utility if it kills you, U0 < U1.
My answer: even in a deterministic world, I take the lottery as many times as Omega has to offer, knowing that the probability of death tends to certainty as I go on. This example fails for money only because of diminishing returns. If you really do possess the ability to double utility, the low probability of a positive outcome gets squashed by the high utility of that outcome.
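A minimal sketch of the arithmetic behind that claim, under assumptions I am adding for illustration: suppose each accepted draw kills you with probability 1 − p (with the utility of death normalized to U0 = 0) and otherwise doubles your utility. Then after accepting n draws:

```latex
% Expected utility after n accepted draws, assuming each draw
% independently kills you (utility 0) with probability 1 - p
% and otherwise doubles your utility:
\[
  \mathrm{EU}(n) \;=\; p^{n} \cdot 2^{n} \cdot U_1 \;=\; (2p)^{n}\, U_1 .
\]
% The survival probability p^n tends to 0, yet EU(n) grows without
% bound whenever p > 1/2: the doubling outpaces the decay, which is
% exactly the "low probability squashed by high utility" effect.
```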
There’s an excellent paper by Peter de Blanc indicating that under reasonable assumptions, if your utility function is unbounded, then you can’t compute finite expected utilities. So if Omega can double your utility an unlimited number of times, you have other problems that cripple you in the absence of involvement from Omega. Doubling your utility should be a mathematical impossibility at some point.
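To see the flavor of the divergence (this St. Petersburg-style series is my illustration, not de Blanc’s actual argument, which concerns computable utility over perception sequences): if your model assigns probability 2^−n to an outcome worth 2^n utility for every n, the expectation diverges term by term.

```latex
% Each term of the sum contributes exactly 1, so the expected
% utility is infinite and no finite computation can settle it:
\[
  \mathbb{E}[U] \;=\; \sum_{n=1}^{\infty} 2^{-n} \cdot 2^{n}
  \;=\; \sum_{n=1}^{\infty} 1 \;=\; \infty .
\]
```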
That demolishes “Shut up and Multiply”, IMO.
SIAI apparently paid Peter to produce that. It should get more attention here.
So if Omega can double your utility an unlimited number of times
This was not assumed; I even explicitly said things like “I take the lottery as many times as Omega has to offer” and “If you really do possess the ability to double utility”. To the extent that doubling of utility is actually provided (and no more), we should take the lottery.
Also, if your utility function’s scope is not limited to perception-sequences, Peter’s result doesn’t directly apply. If your utility function is linear in actual, rather than perceived, paperclips, Omega might be able to offer you the deal infinitely many times.
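To make the domain distinction concrete (the notation here is mine, not the paper’s): de Blanc’s result, as I understand it, concerns utility functions evaluated on perception sequences, while the paperclip utility is a function of the world state itself.

```latex
% A utility over perception sequences, with Y a finite set of
% possible percepts, is the kind of function the theorem speaks to:
\[
  U_{\mathrm{perc}} : Y^{*} \to \mathbb{R} .
\]
% The paperclipper's utility instead values the world directly:
% U_world(w) = number of actual paperclips in world state w.
% It is linear and unbounded in w without being a function of any
% perception sequence, so the theorem need not apply.
\[
  U_{\mathrm{world}}(w) \;=\; \#\mathrm{paperclips}(w) .
\]
```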
Also, if your utility function’s scope is not limited to perception-sequences, Peter’s result doesn’t directly apply.
How can you act upon a utility function if you cannot evaluate it? The utility function needs inputs describing your situation. The only available inputs are your perceptions.
The utility function needs inputs describing your situation. The only available inputs are your perceptions.
Not so. There’s also logical knowledge and logical decision-making, where nothing ever changes and no new observations ever arrive, but the game can still be infinitely long and contain all the essential parts, such as learning new facts and determining new decisions.
(This is of course not relevant to Peter’s model, but if you want to look at the underlying questions, then these strange constructions apply.)
Does my entire post boil down to this seeming paradox?
(Yes, I assume Omega can actually double utility.)
The use of U1 and U0 is needlessly confusing. And it changes the game, because now U0 is a utility associated with a single draw, and the analysis of doing repeated draws will give different answers. There’s also too much change in going from “you die” to “you get utility U0”. There’s some semantic trickiness there.
Pretty much. And I should mention at this point that experiments show that, contrary to instructions, subjects nearly always interpret utility as having diminishing marginal utility.
Well, that leaves me even less optimistic than before. As long as it’s just me saying, “We have options A, B, and C, but I don’t think any of them work,” there are a thousand possible ways I could turn out to be wrong. But if it reduces to a math problem, and we can’t figure out a way around that math problem, hope is harder.