I hadn’t even considered the possibility that your definition might lead to a ‘utilitarianism’ that is not consequentialist. In some circles, the two terms are used interchangeably. Sounds akin to ‘rule utilitarianism’, but more interesting—the right action is one that maximizes expected utility, regardless of its actual consequences. Does that sound like a good enough characterization?
I would still be prepared to call an agent “utilitarian” if it operated via maximising expected utility—even if its expectations turned out to be completely wrong, and its actions were far from those that would have actually maximised utility.
Humans are often a bit like this. They “expect” that hoarding calories is a good idea—and so that is what they do. Actually this often turns out to be not so smart. However, this flaw doesn’t make humans less utilitarian in my book—rather they have some bad priors—and they are wired-in ones that are tricky to update.
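To make the distinction concrete, here is a minimal sketch (my own illustration, not anything from the discussion itself; the actions, probabilities, and payoffs are invented). The agent picks whichever action maximises utility as expected under its own beliefs, which can differ from the action that would actually maximise utility under the true odds, as in the calorie-hoarding case.

```python
def expected_utility(action, beliefs, utility):
    """Average utility of `action` over outcomes, weighted by the given probabilities."""
    return sum(p * utility(action, outcome) for outcome, p in beliefs.items())

def choose(actions, beliefs, utility):
    """Pick the action with the highest *expected* utility under `beliefs`."""
    return max(actions, key=lambda a: expected_utility(a, beliefs, utility))

# Hypothetical calorie example: the agent's wired-in prior says famine is likely,
# so it hoards, even though under the true odds hoarding is the worse choice.
actions = ["hoard", "dont_hoard"]
agent_beliefs = {"famine": 0.6, "plenty": 0.4}    # bad, hard-to-update prior
true_odds     = {"famine": 0.05, "plenty": 0.95}  # actual frequencies

def utility(action, outcome):
    table = {
        ("hoard", "famine"): 10, ("hoard", "plenty"): -2,
        ("dont_hoard", "famine"): -10, ("dont_hoard", "plenty"): 3,
    }
    return table[(action, outcome)]

print(choose(actions, agent_beliefs, utility))  # -> "hoard": what the expected-utility maximiser does
print(choose(actions, true_odds, utility))      # -> "dont_hoard": what would actually maximise utility
```

On this reading, the agent in the first line is still "utilitarian" in the sense described above: it is running the maximisation procedure faithfully; it just has bad inputs.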