GreedyAlgorithm, this is the conversation I want to have.
The sentence in your argument that I cannot swallow is this one: “Notice that if you have incoherent preferences, after a while, you expect your utility to be lower than if you do not have incoherent preferences.” This is circular, is it not?
You want to establish that any decision, x, should be made in accordance with expected-utility maximization (“shut up and calculate”). You ask me to consider X = {x_i}, the set of the many decisions I make over my life (“after a while”). You say that the expected value of U(X) is maximized only when the expected value of U(x_i) is maximized for each i. True enough. But why should I want to maximize the expected value of U(X)? That requires every bit as much (and perhaps the same) justification as maximizing the expected value of U(x_i) for each i, which is what you sought to establish in the first place.
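To spell out the step I am granting (and note that it already assumes, as your wording suggests but does not state, that the utility of the whole sequence decomposes additively, U(X) = Σ_i U(x_i)):

$$ \mathbb{E}[U(X)] \;=\; \mathbb{E}\!\left[\sum_i U(x_i)\right] \;=\; \sum_i \mathbb{E}[U(x_i)], $$

by linearity of expectation, so maximizing each term maximizes the sum. My complaint is aimed at the premise that E[U(X)] is the thing to maximize, not at this arithmetic.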
This whole argument only washes if you assume that utilities behave “normally” (e.g. like elements of the real field, i.e. are subject to the axioms that make addition, subtraction, and calculus work). In fact we know that utility does not behave normally when aggregating across multiple agents (as Arrow’s impossibility theorem shows), so the “correct” answer is that we cannot have a true Pareto-optimal solution to the eye-dust-vs-torture problem. There is no reason you couldn’t construct a ring/field/group for utility that produced some of the solutions the OP dismisses, and IMO those would be better representations of human utility than a straight real-valued interpretation.
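To make that concrete, here is a minimal sketch (in Python, my own choice; the names below are mine, not the OP’s or yours) of a lexicographic utility: pairs ordered so that the torture component dominates no matter how large the dust-speck component grows. It is just R^2 under addition with a non-Archimedean order in place of the usual real-valued one, and it yields exactly the “no number of specks ever adds up to torture” answer that the OP dismisses.

```python
from functools import total_ordering


@total_ordering
class LexUtility:
    """Utility represented as (torture_term, speck_term), compared
    lexicographically: the torture component dominates no matter how
    large the speck component grows (a non-Archimedean ordering)."""

    def __init__(self, torture: float, specks: float):
        self.torture = torture
        self.specks = specks

    def __add__(self, other: "LexUtility") -> "LexUtility":
        # Componentwise addition: (R^2, +) is still a perfectly good group.
        return LexUtility(self.torture + other.torture,
                          self.specks + other.specks)

    def __eq__(self, other) -> bool:
        return (self.torture, self.specks) == (other.torture, other.specks)

    def __lt__(self, other) -> bool:
        # Compare the torture component first; specks only break ties.
        return (self.torture, self.specks) < (other.torture, other.specks)

    def __repr__(self) -> str:
        return f"LexUtility(torture={self.torture}, specks={self.specks})"


# One person tortured vs an astronomically large (but finite) pile of specks.
torture = LexUtility(torture=-1.0, specks=0.0)
many_specks = LexUtility(torture=0.0, specks=-1e100)

# Under this ordering the specks, however numerous, never sum to something
# worse than the torture.
assert many_specks > torture
print(sorted([torture, many_specks]))  # torture ranks strictly worse
```

Whether such an ordering is a better model of human preferences than a real-valued one is exactly the question at issue; my point is only that the math does not force the “shut up and multiply” answer on us.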