Ok what’s the difference here? By “utilitarianism” do you mean the old straw-man version of utilitarianism with bad utility function and no ethical injunctions?
I usually take utilitarianism to be consequentialism + max(E(U)) + sane human-value metaethics. Am I confused?
The term “utilitarianism” refers to maximising the combined happiness of all people. The page says:
Utilitarianism is an ethical theory holding that the proper course of action is the one that maximizes the overall “happiness”.
So: that’s a particular class of utility functions.
“Expected utility maximization” is a more general framework from decision theory. You can use any utility function with it—and you can use it to model practically any agent.
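To make that concrete, here's a minimal Python sketch (all the names and the toy outcomes are purely illustrative, not anyone's actual formalism): the decision rule is generic over the utility function, and a utilitarian "total happiness" function is just one thing you can plug into it.

```python
import random

def expected_utility_maximizer(actions, sample_outcome, utility, n_samples=1000):
    """Generic decision rule: pick the action with the highest estimated E[U].

    Nothing here is specific to utilitarianism -- any utility function works.
    """
    def estimated_eu(action):
        return sum(utility(sample_outcome(action)) for _ in range(n_samples)) / n_samples
    return max(actions, key=estimated_eu)

# A utilitarian utility function: one particular choice of U, namely the
# combined happiness of everyone affected by the outcome.
def utilitarian_utility(outcome):
    return sum(outcome["happiness"].values())

# A purely selfish utility function fits the same framework just as well.
def selfish_utility(outcome):
    return outcome["happiness"]["me"]

# Toy stochastic outcomes: each action leads to a noisy distribution of
# happiness levels over two people.
def sample_outcome(action):
    base = {"share": {"me": 5, "you": 5}, "keep": {"me": 8, "you": 1}}[action]
    return {"happiness": {person: level + random.gauss(0, 1)
                          for person, level in base.items()}}

actions = ["share", "keep"]
print(expected_utility_maximizer(actions, sample_outcome, utilitarian_utility))  # "share"
print(expected_utility_maximizer(actions, sample_outcome, selfish_utility))      # "keep"
```

Same maximizer, different utility functions, different choices.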
Utilitarianism is a pretty nutty personal moral philosophy, IMO. It is certainly very unnatural—due partly to its selflessness and lack of nepotism. It may have some merits as a political philosophy (but even then...).
Thanks.
Is there a name for expected utility maximisation over a consequentialist utility function built from human value? Does “consequentialism” usually imply normal human value, or is it usually a general term?
See http://en.wikipedia.org/wiki/Consequentialism for your last question (it’s a general term).
The answer to your “Is there a name...” question is “no”—AFAIK.
I get the impression that most people around here approach morality from that perspective; it seems like something that ought to have a name.