GreedyAlgorithm, this is the conversation I want to have.
The sentence in your argument that I cannot swallow is this one: “Notice that if you have incoherent preferences, after a while, you expect your utility to be lower than if you do not have incoherent preferences.” This is circular, is it not?
You want to establish that any decision x should be made according to expected utility maximization (“shut up and calculate”). You ask me to consider X = {x_i}, the set of the many decisions I will make over my life (“after a while”). You say that the expected value of U(X) is maximized only when the expected value of U(x_i) is maximized for each i. True enough. But why should I want to maximize the expected value of U(X)? That requires every bit as much justification (and perhaps the very same justification) as maximizing the expected value of U(x_i) for each i, which is what you set out to establish in the first place.
(I should say that I have assumed additivity: a bag of decisions is worth the sum of the utilities of the individual decisions, i.e., U(X) = Σ_i U(x_i).)
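To spell out the step I am granting (a minimal derivation; the summation notation is mine, and it additionally assumes that each E[U(x_i)] can be maximized without affecting the others):

E[U(X)] = E[Σ_i U(x_i)]    (by the additivity assumption above)
        = Σ_i E[U(x_i)]    (by linearity of expectation),

so maximizing each E[U(x_i)] separately does maximize E[U(X)]. The whole weight of the argument therefore falls on why E[U(X)] should be the target at all.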