Joe doesn’t know yet which proposition would get 1 and which would get p, so he assigns the average to both. He anticipates learning which is which, at which point the assignments would change to 1 and p.
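Concretely (A and B are hypothetical labels for the two propositions, and v stands for whichever function Joe is assigning these values under):

$$v(A) = v(B) = \tfrac{1+p}{2} \ \text{(before Joe learns which is which)}, \qquad v(A) = 1,\ v(B) = p \ \text{(after)}.$$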
I’m also not sure it’s possible for the agent to anticipate choosing option 2, given the information it has.
Not sure what you mean here.
Finally, what does it matter whether a change increases expected utility under the new function?
It just shows the asymmetry. Joe can maximize U by changing into Joe-with-U’, but Joe-with-U’ can’t maximize U’ by changing back to U.
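In expected-utility notation, that asymmetry is just (restating the claim, not deriving it):

$$\mathbb{E}_U[\text{switch to } U'] > \mathbb{E}_U[\text{keep } U], \qquad \mathbb{E}_{U'}[\text{switch back to } U] \le \mathbb{E}_{U'}[\text{keep } U'].$$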