Thanks! I hadn’t heard that definition of utilitarianism before.
As I recall, I made this up to suit my own ends :-(
Wikipedia quibbles with me significantly—stressing the idea that utilitarianism is a form of consequentialism:
“Utilitarianism is the idea that the moral worth of an action is determined solely by its contribution to overall perceivable utility: that is, its contribution to happiness or pleasure as summed among an ill-defined group of people. It is thus a form of consequentialism, meaning that the moral worth of an action is determined by its outcome.”
I don’t really want “utilitarianism” to refer to a form of consequentialism—thus my crude attempt at hijacking the term :-|
I hadn’t even considered the possibility that your definition might lead to a ‘utilitarianism’ that is not consequentialist. In some circles, the two terms are used interchangeably. Sounds akin to ‘rule utilitarianism’, but more interesting—the right action is one that maximizes expected utility, regardless of its actual consequences. Does that sound like a good enough characterization?
I would still be prepared to call an agent “utilitarian” if it operated via maximising expected utility—even if its expectations turned out to be completely wrong, and its actions were far from those that would have actually maximised utility.
Humans are often a bit like this. They “expect” that hoarding calories is a good idea—and so that is what they do. Actually this often turns out to be not so smart. However, this flaw doesn’t make humans less utilitarian in my book—rather they have some bad priors—and they are wired-in ones that are tricky to update.
Perhaps check my references here:
http://timtyler.org/expected_utility_maximisers/
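A minimal sketch of the distinction being drawn here, assuming a toy two-action, two-state setup (the states, actions, utilities and probabilities below are all invented for illustration, not taken from the discussion): an agent that picks the action with the highest expected utility under its own beliefs still counts as “utilitarian” in this sense, even when those beliefs are wrong and a different action would in fact have maximised utility.

```python
# Toy illustration: an expected-utility maximiser with mistaken beliefs.
# All states, actions, utilities and probabilities are made up for the example.

# Utility of each (action, state) pair.
utility = {
    ("hoard_calories", "famine"): 10,
    ("hoard_calories", "plenty"): -5,   # wasted effort, health costs
    ("eat_moderately", "famine"): -10,
    ("eat_moderately", "plenty"): 5,
}

# The agent's beliefs (its "wired-in priors"): famine is judged likely.
believed_p = {"famine": 0.8, "plenty": 0.2}

# The actual state frequencies in the modern environment.
actual_p = {"famine": 0.01, "plenty": 0.99}

def expected_utility(action, probs):
    """Sum of P(state) * U(action, state) under the given distribution."""
    return sum(p * utility[(action, s)] for s, p in probs.items())

actions = ["hoard_calories", "eat_moderately"]

# What the agent does: maximise expected utility under its own beliefs.
chosen = max(actions, key=lambda a: expected_utility(a, believed_p))

# What would actually have maximised utility, in expectation over reality.
best_in_fact = max(actions, key=lambda a: expected_utility(a, actual_p))

print("Agent chooses:", chosen)          # hoard_calories
print("Actually best:", best_in_fact)    # eat_moderately
```

On this reading, the flaw lies in the priors rather than in the decision rule: updating believed_p toward actual_p changes the choice without changing the maximising machinery at all.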