So if Omega can double your utility an unlimited number of times
This was not assumed; I even explicitly said things like “I take the lottery as many times as Omega has to offer” and “If you really do possess the ability to double utility”. To the extent that doubling of utility is actually provided (and no more), we should take the lottery.
Also, if your utility function’s scope is not limited to perception-sequences, Peter’s result doesn’t directly apply. If your utility function is linear in actual, rather than perceived, paperclips, Omega might be able to offer you the deal infinitely many times.
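As a toy illustration of the contrast (my own construction, not from the thread, and the saturating form of the bounded utility is a hypothetical choice): under a utility linear in actual paperclips, each doubling of the paperclip count doubles utility, so Omega never runs out of room to offer the deal; under any bounded utility, repeated doublings quickly pin the value against its cap.

```python
import math

# Toy sketch comparing repeated doubling under two utility functions.
# linear_u: utility linear in actual paperclips (unbounded above).
# bounded_u: a bounded utility saturating at 1 (hypothetical functional
# form, chosen only to make the bound visible).

def linear_u(paperclips):
    return paperclips  # linear in actual paperclips

def bounded_u(paperclips, scale=1000.0):
    return 1.0 - math.exp(-paperclips / scale)  # bounded above by 1

clips = 1.0
for _ in range(64):
    clips *= 2.0  # Omega doubles the actual paperclip count each round

# Linear utility has doubled 64 times along with the paperclips,
# so another doubling is still worth as much as everything so far.
print(linear_u(clips))

# The bounded utility is already indistinguishable from its cap of 1,
# so further doublings of paperclips buy essentially nothing.
print(bounded_u(clips))
```

The point of the sketch: only for the bounded function does "double your utility" eventually become impossible to deliver by manipulating paperclips; the linear function can be doubled indefinitely, which is why Peter's perception-sequence bound does not directly constrain it.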
Also, if your utility function’s scope is not limited to perception-sequences, Peter’s result doesn’t directly apply.
How can you act upon a utility function if you cannot evaluate it? The utility function needs inputs describing your situation. The only available inputs are your perceptions.
The utility function needs inputs describing your situation. The only available inputs are your perceptions.
Not so. There’s also logical knowledge and logical decision-making, where nothing ever changes and no new observations ever arrive, yet the game can still be infinitely long and contain all the essential parts, such as learning new facts and determining new decisions.
(This is of course not relevant to Peter’s model, but if you want to look at the underlying questions, then these strange constructions apply.)