Good article! Here are a few related questions:
The problem of comparing different people’s utility functions applies to average utilitarianism as well, doesn’t it? For instance, if your utility function is U and my utility function is V, then the average could be (U + V)/2; however, utility functions can be rescaled by any positive affine transformation, so let’s make mine 1000000 x V. Now the average is U/2 + 500000 x V, which seems totally fair, doesn’t it? Is the right solution here to assume that each person’s utility has a “best possible” case and a “worst possible” case, and to rescale, assigning 1 to each person’s best case and 0 to their worst? That works fine if people have bounded utility, which we apparently do (it’s one reason we don’t fall for Pascal’s muggings).
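To make that concrete, here’s a minimal sketch (my own illustration with made-up numbers, not anything from the article): rescaling each person’s utility to [0, 1] between their own worst and best possible cases makes the average immune to arbitrary positive rescalings, whereas the naive average is not.

```python
# Sketch: normalising each person's utility to [0, 1] between their own
# worst and best possible cases. The outcomes and best/worst bounds below
# are hypothetical.

def normalise(u, worst, best):
    """Map utility u onto [0, 1]: 0 at the person's worst case, 1 at their best."""
    return (u - worst) / (best - worst)

# Two hypothetical outcomes, with raw utilities (person A, person B).
outcomes = {
    "status quo": (2.0, 0.3),
    "policy X":   (1.0, 0.9),
}
a_worst, a_best = 0.0, 2.0   # assumed worst/best possible cases for A
b_worst, b_best = 0.0, 1.0   # assumed worst/best possible cases for B

for scale in (1, 1_000_000):                      # rescale B's utility function
    for name, (ua, ub) in outcomes.items():
        naive = (ua + scale * ub) / 2             # swings wildly with the scale
        fair = (normalise(ua, a_worst, a_best) +
                normalise(scale * ub, scale * b_worst, scale * b_best)) / 2
        print(f"scale={scale:>7}  {name:<10}  naive={naive:>10.2f}  normalised={fair:.2f}")
```

Under the naive average the ranking of the two outcomes flips when B’s utility is multiplied by a million; under the normalised average it doesn’t move at all.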
It’s true that no-one optimises utility perfectly, but even animals, plants and bacteria have an identifiable utility function (inclusive fitness), which they optimise pretty well. Why shouldn’t people? And, to a first approximation, why wouldn’t a human’s utility function also be inclusive fitness? (We can add further corrections as necessary, e.g. some sort of fitness function for culture or memes.)
Do you think utility functions should be defined over “worlds” or “states”? Decision theory only requires worlds, but consequentialism seems to require states. For instance, if each world w consists of a sequence of states s(t) indexed by time t, then a consequentialist utility function applied to a whole world would look like U(w) = Sum_t d(t) x u(s(t)), where d(t) is the discount factor and u is the utility function applied to states. Deontologists would have a completely different sort of U, but they are not immediately irrational because of that. (It seems they can still be consistent with formal decision theory.)
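As a small sketch of what I mean (my own illustration; the state utility and discount rate are placeholders I made up):

```python
import math

# A consequentialist world-utility as a discounted sum of per-state utilities,
#   U(w) = Sum_t d(t) x u(s(t)),  with d(t) = discount**t.

def world_utility(states, u, discount=0.95):
    """U(w) = sum over t of discount**t * u(s_t), for a world given as a state sequence."""
    return sum((discount ** t) * u(s) for t, s in enumerate(states))

u = lambda wealth: math.log(1 + wealth)   # a hypothetical concave utility over a single state
world = [10, 12, 9, 15, 15]               # a world = a time-indexed sequence of states
print(world_utility(world, u))            # one number summarising the whole world
```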
Looking at your paper on Anthropic Decision Theory, what do you think will happen if we adopt a compromise utility function somewhere between average and total utility, much as you suggest? Is the result more like SIA or SSA? Does it contain some of the strengths of each while avoiding their weaknesses? (It strikes me that the result is more like SSA, since you are avoiding the “large” utilities from total utility dominating the calculation, but I haven’t tried to do the math, and wondered if you already had...)
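Here’s the sort of back-of-the-envelope math I have in mind (my own sketch with a made-up incubator-style setup, not anything from the paper): a coin flip creates 1 copy on heads and N copies on tails, each copy may buy a ticket costing c that pays 1 if the coin landed tails, and the agent’s utility is alpha x total + (1 - alpha) x average over copies.

```python
# Hedged sketch, not from the ADT paper: an incubator-style bet under a
# compromise utility U = alpha * (total winnings) + (1 - alpha) * (average winnings).
# Heads (prob 1/2): one copy exists; tails (prob 1/2): N copies exist.
# Each copy buys a ticket costing c that pays 1 if the coin was tails.

def expected_gain(c, n, alpha):
    """Expected compromise utility of every copy buying the ticket."""
    heads = -c                                 # total == average when only one copy exists
    tails = alpha * (n * (1 - c)) + (1 - alpha) * (1 - c)
    return 0.5 * heads + 0.5 * tails

N = 100
for alpha in (0.0, 0.01, 0.1, 1.0):
    # Break-even ticket price: solve 0.5*(-c) + 0.5*(1 - c)*m = 0, where m = alpha*N + (1 - alpha).
    m = alpha * N + (1 - alpha)
    breakeven = m / (1 + m)
    print(f"alpha={alpha:<4}  break-even price ~ {breakeven:.3f}  "
          f"(check: {expected_gain(breakeven, N, alpha):+.1e})")
```

In this toy setup the break-even price slides from the SSA-like 1/2 at alpha = 0 up to the SIA-like N/(N+1) at alpha = 1, so whether the compromise behaves more like one or the other seems to depend on how alpha compares with 1/N; but that’s just a toy, and I’d be curious whether it matches what you’d expect from ADT.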
Do you have views on “rule” versus “act” utilitarianism? It seems to me that advanced decision theories like TDT, UDT or ADT are already invoking a form of rule utilitarianism, right? Further, rule utilitarianism seems a better “model” for our moral judgements than act utilitarianism.