The confluence of a number of ideas.
Cox’s theorem shows that degrees of belief can be expressed as probabilities.
The VNM theorem shows that preferences can be expressed as numbers (unique up to a positive affine transformation, i.e. a choice of scale and zero point), usually called utilities.
Consequentialism, the idea that actions are to be judged by their consequences, is pretty much taken as axiomatic.
Combining these gives the conclusion that the rational action to take in any situation is the one that maximises the resulting expected utility.
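To make that combined claim concrete, here is a minimal sketch (in Python, with made-up probabilities and utilities, none of them from the discussion above) of what “pick the action that maximises expected utility” amounts to:

```python
# A minimal sketch of expected-utility maximization.
# The actions, probabilities and utilities are illustrative, invented numbers.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

def rational_action(actions):
    """actions: dict mapping action name -> list of (probability, utility)."""
    return max(actions, key=lambda a: expected_utility(actions[a]))

actions = {
    "take umbrella":  [(0.3, 5), (0.7, 8)],     # (rain, no rain)
    "leave umbrella": [(0.3, -10), (0.7, 10)],
}
print(rational_action(actions))  # -> "take umbrella" (EU 7.1 vs 4.0)
```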
Your morality is your utility function: your beliefs about how people should live are preferences about how they should live.
Add the idea of actually being convinced by arguments (except arguments of the form “this conclusion is absurd, therefore there is likely to be something wrong with the argument”, which are merely the absurdity heuristic) and you get LessWrong utilitarianism.
Utilitarianism is more than just maximizing expected utility, it’s maximizing the world’s expected utility. Rationality, in the economic or decision-theoretic sense, is not synonymous with utilitarianism.
That is a good point, but one I think is under-appreciated on LessWrong. The reasoning seems to go “rationality, therefore OMG dead babies!!” There is discussion about how to define “the world’s expected utility”, but it has never reached a conclusion.
In addition to the problem of defining “the world’s expected utility”, there is also the separate question of whether it (whatever it is) should be maximized.
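As one illustration of why the definitional problem bites (a hedged sketch, not a position anyone in the thread takes), two natural candidates for “the world’s utility”, the total and the average, can rank the same change in the world in opposite directions:

```python
# Illustrative only: the same change ranked oppositely by two candidate
# definitions of "the world's utility". Utilities are invented numbers.
world_before = [10, 10, 10]      # utilities of the existing people
world_after  = [10, 10, 10, 4]   # add a person whose utility is positive but below average

def total(world):
    return sum(world)

def average(world):
    return sum(world) / len(world)

print(total(world_after) > total(world_before))      # True:  total utility rises
print(average(world_after) > average(world_before))  # False: average utility falls
```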
I think the claim that utilitarianism means maximizing the world’s expected utility is probably literally correct, but misleading. “Maximizing X’s utility” is generally taken to mean “maximize your own utility function over X”. So in that sense you are quite correct. But if by “maximizing the world’s utility” you mean something more like “maximizing the aggregate utility of everyone in the world”, then what you say is only true of those who adhere to some kind of preference utilitarianism. Other utilitarians would not necessarily agree.
Hedonic utilitarians would also say that they want to maximize the aggregate utility of everyone in the world; they would just have a different conception of what that entails. Utilitarianism necessarily means maximizing the aggregate utility of everyone in the world, and while different utilitarians can disagree about what that means, they would all agree that maximizing one’s own utility is contrary to utilitarianism.
Anyone who believes that “maximizing one’s own utility is contrary to utilitarianism” is fundamentally confused as to the standard meaning of at least one of those terms. Not knowing which one, however, I’m not sure what I can say to make the matter more clear.
Maximizing one’s own utility is practical rationality. Maximizing the world’s aggregate utility is utilitarianism. The two need not be the same, and in fact can conflict. For example, you may prefer to buy a cone of ice cream, but world utility would be better served if you donated that money to charity instead. Buying the ice cream would be the rational, own-utility-maximizing thing to do, and donating to charity would be the utilitarian thing to do.
However, if utilitarianism is your ethics, the world’s utility is your utility, and the distinction collapses. A utilitarian will never prefer to buy that ice cream.
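To make the ice-cream example concrete, here is a sketch with invented numbers (nobody’s actual utility function): the own-utility maximizer and the utilitarian, whose utility function just is the world aggregate, pick different actions; and for an agent whose utility already is the aggregate, the two calculations coincide.

```python
# Made-up numbers for the ice-cream vs. charity example above.
# Each action maps to the utility change for "me" and for everyone else combined.
effects = {
    "buy ice cream":     {"me": 3, "others": 0},
    "donate to charity": {"me": 1, "others": 20},
}

def own_utility(action):
    return effects[action]["me"]

def world_utility(action):               # the utilitarian's own utility function
    return effects[action]["me"] + effects[action]["others"]

print(max(effects, key=own_utility))     # -> "buy ice cream"
print(max(effects, key=world_utility))   # -> "donate to charity"
# For an agent whose own utility function *is* world_utility, the two maxima coincide.
```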
It’s the old System 1 (want ice cream!) vs. System 2 (want world peace!) friction again.