I’ve been noticing a theme of utilitarianism on this site—can anyone explain this? More specifically: how did you guys rationalize a utilitarian philosophy over an existential, nihilistic, or hedonistic one?
To put it as simply as I can: LessWrongers like to quantify stuff. A more specific instance of this: since this website started off as the brainchild of an AI researcher, the prevalent intellectual trends are those with applicability to AI research. Computers work easily with quantifiable data. As such, if you want to instill human morality into an AI, chances are you’ll at least consider conceptualizing morality in utilitarian terms.
The confluence of a number of ideas.
Cox’s theorem shows that degrees of belief can be expressed as probabilities.
The VNM theorem shows that preferences (satisfying certain axioms) can be expressed as numbers, usually called utilities, unique up to a positive affine transformation (a positive scale factor and an additive constant).
Consequentialism, the idea that actions are to be judged by their consequences, is pretty much taken as axiomatic.
Combining these gives the conclusion that the rational action to take in any situation is the one that maximises the resulting expected utility.
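A minimal sketch of that decision rule in code (the actions, probabilities, and utilities below are invented for illustration, not anything stated in this thread):

```python
# Toy expected-utility maximization; all numbers are illustrative assumptions.
actions = {
    # action: list of (probability, utility) pairs over its possible outcomes
    "take_umbrella":  [(0.3, 5.0), (0.7, -1.0)],
    "leave_umbrella": [(0.3, -10.0), (0.7, 2.0)],
}

def expected_utility(outcomes):
    """Probability-weighted sum of utilities for one action."""
    return sum(p * u for p, u in outcomes)

# The "rational action" under this framing: the argmax of expected utility.
best_action = max(actions, key=lambda a: expected_utility(actions[a]))
print(best_action)  # -> "take_umbrella" with these made-up numbers
```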
Your morality is your utility function: your beliefs about how people should live are preferences about how they should live.
Add the idea of actually being convinced by arguments (except arguments of the form “this conclusion is absurd, therefore there is likely to be something wrong with the argument”, which are merely the absurdity heuristic) and you get LessWrong utilitarianism.
Utilitarianism is more than just maximizing expected utility; it’s maximizing the world’s expected utility. Rationality, in the economic or decision-theoretic sense, is not synonymous with utilitarianism.
That is a good point, but one I think is under-appreciated on LessWrong. The reasoning often seems to go “rationality, therefore OMG dead babies!!” There is discussion about how to define “the world’s expected utility”, but it has never reached a conclusion.
In addition to the problem of defining “the world’s expected utility”, there is also the separate question of whether it (whatever it is) should be maximized.
I think this is probably literally correct, but misleading. “Maximizing X’s utility” is generally taken to mean “maximize your own utility function over X”. So in that sense you are quite correct. But if by “maximizing the world’s utility” you mean something more like “maximizing the aggregate utility of everyone in the world”, then what you say is only true of those who adhere to some kind of preference utilitarianism. Other utilitarians would not necessarily agree.
Hedonic utilitarians would also say that they want to maximize the aggregate utility of everyone in the world; they would just have a different conception of what that entails. Utilitarianism necessarily means maximizing the aggregate utility of everyone in the world, though different utilitarians can disagree about what that means; they would all agree, however, that maximizing one’s own utility is contrary to utilitarianism.
Anyone who believes that “maximizing one’s own utility is contrary to utilitarianism” is fundamentally confused as to the standard meaning of at least one of those terms. Not knowing which one, however, I’m not sure what I can say to make the matter more clear.
Maximizing one’s own utility is practical rationality. Maximizing the world’s aggregate utility is utilitarianism. The two need not be the same, and in fact can conflict. For example, you may prefer to buy a cone of ice cream, but world utility would be better served if you donated that money to charity instead. Buying the ice cream would be the rational, own-utility-maximizing thing to do, and donating to charity would be the utilitarian thing to do.
However, if utilitarianism is your ethics, the world’s utility is your utility, and the distinction collapses. A utilitarian will never prefer to buy that ice cream.
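To make the ice cream example concrete, here is a toy sketch (the two-action “world” and every number in it are made up): maximizing your own utility and maximizing aggregate utility pick different actions, unless your utility function just is the aggregate, at which point the two rules coincide.

```python
# Toy numbers only: own utility vs. aggregate ("world") utility for two actions.
actions = ["buy_ice_cream", "donate_to_charity"]

my_utility    = {"buy_ice_cream": 5.0, "donate_to_charity": 1.0}
world_utility = {"buy_ice_cream": 5.0, "donate_to_charity": 40.0}  # summed over everyone affected

own_maximizing_choice = max(actions, key=my_utility.get)      # practical rationality
utilitarian_choice    = max(actions, key=world_utility.get)   # utilitarianism

print(own_maximizing_choice, utilitarian_choice)  # -> buy_ice_cream donate_to_charity

# If utilitarianism is your ethics, your utility function *is* the aggregate,
# and the two decision rules give the same answer.
my_utility = world_utility
print(max(actions, key=my_utility.get))           # -> donate_to_charity
```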
It’s the old System 1 (want ice cream!) vs. System 2 (want world peace!) friction again.
In general, this site focuses on the friendly AI problem, and a nihilistic or hedonistic AI might not be friendly to humans. The notion of an existentialist AI seems to be largely unexplored, as far as I know.
To the extent that lesswrong has an official ethical system, that system is definitely not utilitarianism.
I don’t agree. LW takes a microeconomic view of decision theory, and this implicitly involves maximizing some weighted average of everyone’s utility functions.
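Spelled out (my notation, not something stated in the thread), that “weighted average of everyone’s utility function” is usually a weighted sum over individuals, with the recommended choice being whatever maximizes it:

$$U_{\text{social}}(x) = \sum_i w_i \, u_i(x), \qquad w_i \ge 0,$$

so the decision rule is $x^* = \arg\max_x U_{\text{social}}(x)$.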
At some point we really need to come up with more words for this stuff so that the whole consequentialism/hedonic-utilitarianism/etc. confusion doesn’t keep coming up.
To the extent that lesswrong has an official ethical system, that system is utilitarianism with “the fulfillment of complex human values” as the suggested maximand rather than hedons.
That would normally be referred to as consequentialism, not utilitarianism.
Huh, I’m not sure, actually. I had been thinking of consequentialism as the general class of ethical theories based on caring about the state of the world, and of utilitarianism as what you get when you try to maximize some definition of utility (which could be human value-fulfillment if you tried to reason about it quantitatively). If my usages are unusual, I more or less inherited them from the Consequentialism FAQ, I think.
If you mean Yvain’s, while his stuff is in general excellent, I recommend learning about philosophical nomenclature from actual philosophers, not medics.