This depends on what consistency conditions you get to impose on your agent. I agree that for probability distributions, E[X - Y] = E[X] - E[Y].
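(For the record, this is just linearity of expectation. Writing ω for an outcome and P for the distribution, in the discrete case: E[X - Y] = Σ_ω P(ω)(X(ω) - Y(ω)) = Σ_ω P(ω)X(ω) - Σ_ω P(ω)Y(ω) = E[X] - E[Y].)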
Any computable agent, no matter how rational, isn't going to have its beliefs closed under all of the obvious consistency conditions; otherwise it would have to assign P(T) = 1 to every theorem T, which in general would require solving an undecidable problem. This isn't just a quirk of human irrationality.
Maybe we should specify a subset of the consistency conditions that is achievable, and then we could say that expected utility maximization is optimal if you satisfy those conditions. This is what I have been doing when thinking about these issues, but it seems neither straightforward nor standard.
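To make the "achievable subset" idea a bit more concrete, here is a toy sketch in Python. The propositions, condition lists, and function name are all made up for illustration, not any standard formalism: instead of demanding closure under all logical consequences, the agent only verifies a finite, explicitly listed set of coherence constraints, which a bounded agent can actually check.

```python
# Hypothetical sketch: check a finite, decidable subset of coherence
# conditions on a belief assignment, rather than full logical closure.
# The propositions and condition lists below are illustrative only.

beliefs = {
    "A": 0.3,
    "B": 0.5,
    "A or B": 0.8,
    "A and B": 0.0,
    "True": 1.0,
}

# Triples (X, Y, X-or-Y) we assert to be mutually exclusive.
disjoint_pairs = [("A", "B", "A or B")]

# Pairs (X, Y) where X implies Y, as far as we have bothered to list.
implications = [("A and B", "A"), ("A", "A or B")]


def check_coherence(beliefs, disjoint_pairs, implications, tol=1e-9):
    """Verify a small, explicitly listed subset of the probability axioms."""
    errors = []
    for p in beliefs.values():
        if not (0.0 - tol <= p <= 1.0 + tol):
            errors.append(f"probability {p} outside [0, 1]")
    if abs(beliefs.get("True", 1.0) - 1.0) > tol:
        errors.append("tautology not assigned probability 1")
    for x, y, union in disjoint_pairs:
        if abs(beliefs[x] + beliefs[y] - beliefs[union]) > tol:
            errors.append(f"additivity fails for {x}, {y}")
    for x, y in implications:
        if beliefs[x] > beliefs[y] + tol:
            errors.append(f"monotonicity fails: P({x}) > P({y})")
    return errors


print(check_coherence(beliefs, disjoint_pairs, implications))  # [] if coherent
```

The point of the sketch is just that each condition is checkable in finite time against the listed constraints; nothing here requires enumerating all theorems.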