See my reply to Wei Dai’s comment. If you have a prior over which situations you will face, and if you’re able to make precommitments and we ignore computational difficulties, then there is only one situation. If you could decide now which decision rule you’ll use in the future, then in a sense that would be the last decision you ever make. And a decision rule that’s optimal with respect to a particular utility function is one that makes every subsequent decision using that same utility function.
From the vantage point of an agent with a prior today, the best thing it can do is adopt a utility function and precommit to maximizing it from now on, no matter what. I hope that's clearer.
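The claim that the rule optimal in expectation is just pointwise maximization of the same utility function can be checked in a toy model. This is a minimal sketch under illustrative assumptions: a finite set of situations, a prior over them, and a fixed utility table (all names and numbers here are made up for illustration):

```python
import itertools

# Hypothetical toy model: two situations the agent might face,
# a prior over them, and a utility for each (situation, action) pair.
situations = ["A", "B"]
prior = {"A": 0.7, "B": 0.3}
actions = ["x", "y"]
utility = {("A", "x"): 1.0, ("A", "y"): 0.0,
           ("B", "x"): 0.2, ("B", "y"): 0.9}

# A decision rule chosen up front is a mapping: situation -> action.
# Enumerate every possible rule.
rules = [dict(zip(situations, choice))
         for choice in itertools.product(actions, repeat=len(situations))]

def expected_utility(rule):
    """Expected utility of precommitting to this rule, under the prior."""
    return sum(prior[s] * utility[(s, rule[s])] for s in situations)

# The rule that is best in expectation today...
best_rule = max(rules, key=expected_utility)

# ...is the one that maximizes the same utility function separately
# in each situation it might later face.
pointwise = {s: max(actions, key=lambda a: utility[(s, a)])
             for s in situations}
assert best_rule == pointwise
```

Because the prior puts positive weight on every situation and the rule's expected utility decomposes into a sum over situations, improving the action in any one situation improves the total; so the ex-ante optimal precommitment coincides with always maximizing that same utility function going forward.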