Thanks for taking the time to try puzzling this out, but I suspect it’s just interestingly wrong. The magic seems to be happening in this paragraph:
Joe prefers Option 1. Therefore he anticipates that he will choose Option 1. Therefore, his current utility is 2U(1/2). But what if he anticipated that he would choose Option 2? Then his current utility would be 2U(1/2+p/2). So he wishes his k were smaller than U-inverse(k), meaning he wishes his U(x) were closer to xU(1). If he were to modify his utility function such that U’(x) = xU(1) for all x, the new Joe would not regret this decision since it strictly increases his expected utility under the new function.
I don’t see where U(1/2+p/2) comes from; should that be U(1)+U(p)? I’m also not sure it’s possible for the agent to anticipate choosing option 2, given the information it has. Finally, what does it matter whether a change increases expected utility under the new function? It’s only utility under the old function that matters—changing utility function to almost anything maximizes the new function, including degenerate utility functions like number of paperclips.
Joe doesn’t know yet which proposition would get 1 and which would get p, so he assigns the average, 1/2 + p/2, to both. He anticipates learning which is which, at which point the assignments would change to 1 and p.
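A quick numeric sketch of that averaging step. The concrete choices U(x) = sqrt(x) (a strictly concave utility) and p = 0.25 are my assumptions for illustration, not from the original post:

```python
import math

# Assumed for illustration only: a strictly concave utility and a value of p.
U = math.sqrt
p = 0.25

# Before learning which proposition gets 1 and which gets p, Joe assigns
# the average (1 + p) / 2 to both, so his current utility is 2U(1/2 + p/2):
before_learning = 2 * U((1 + p) / 2)

# After learning which is which, it becomes U(1) + U(p):
after_learning = U(1.0) + U(p)
```

For a strictly concave U these two quantities differ, with 2U(1/2 + p/2) > U(1) + U(p) by Jensen's inequality, which is why the paragraph uses U(1/2 + p/2) rather than U(1) + U(p).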
I’m also not sure it’s possible for the agent to anticipate choosing option 2, given the information it has.
Not sure what you mean here.
Finally, what does it matter whether a change increases expected utility under the new function?
It just shows the asymmetry. Joe can maximize U by changing into Joe-with-U’, but Joe-with-U’ can’t maximize U’ by changing back to U.
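A sketch of the asymmetry in numbers. Again, U(x) = sqrt(x) and p = 0.04 are assumed values chosen so that Joe-with-U prefers Option 1, as in the quoted paragraph; U'(x) = x·U(1) is the modified utility from that paragraph:

```python
import math

# Assumed for illustration: a strictly concave U and a small p,
# so that Joe-with-U prefers Option 1.
U = math.sqrt
p = 0.04
U1 = U(1.0)

def U_prime(x):
    # The modified, linear utility from the quoted paragraph: U'(x) = x * U(1)
    return x * U1

# Value of each option, evaluated after learning which proposition is which:
opt1_U = 2 * U(0.5)                    # Option 1 under U
opt2_U = U(1.0) + U(p)                 # Option 2 under U
opt1_Up = 2 * U_prime(0.5)             # Option 1 under U'  (= U(1))
opt2_Up = U_prime(1.0) + U_prime(p)    # Option 2 under U'  (= (1 + p) * U(1))

# Joe-with-U prefers Option 1, while Joe-with-U' prefers Option 2.
# So switching U -> U' raises achievable value under U', but switching
# back would lower it: Joe-with-U' has no reason to revert.
```

This is only meant to make the one-way direction of the trade concrete, not to settle whether maximizing the new function is the right criterion, which is the point in dispute above.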