I would still be prepared to call an agent “utilitarian” if it operated via maximising expected utility—even if its expectations turned out to be completely wrong, and its actions were far from those that would have actually maximised utility.
Humans are often a bit like this. They “expect” that hoarding calories is a good idea, and so that is what they do. In practice, this often turns out not to be so smart. However, this flaw doesn’t make humans less utilitarian in my book; rather, they have some bad priors, and those priors are wired in and tricky to update.
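The distinction here, between maximising expected utility under the agent’s own beliefs and maximising actual utility, can be sketched in code. The actions, probabilities, and utilities below are hypothetical illustrations, not anything from the original argument:

```python
# An expected-utility maximiser picks the action that is best under its
# OWN (possibly wrong) beliefs about outcome probabilities.

def expected_utility(action, beliefs, utility):
    """Utility of each outcome, weighted by the believed probability."""
    return sum(p * utility[outcome] for outcome, p in beliefs[action].items())

def choose(actions, beliefs, utility):
    """Return the action with the highest expected utility."""
    return max(actions, key=lambda a: expected_utility(a, beliefs, utility))

actions = ["hoard", "eat_moderately"]
utility = {"fit": 1.0, "overweight": -1.0}

# A bad, wired-in prior: hoarding calories is believed to lead to fitness.
believed = {
    "hoard":          {"fit": 0.9, "overweight": 0.1},
    "eat_moderately": {"fit": 0.5, "overweight": 0.5},
}
# The actual odds in a calorie-rich environment.
actual = {
    "hoard":          {"fit": 0.1, "overweight": 0.9},
    "eat_moderately": {"fit": 0.9, "overweight": 0.1},
}

print(choose(actions, believed, utility))  # "hoard" — structurally utilitarian
print(choose(actions, actual, utility))    # "eat_moderately" — the actual best action
```

The agent is “utilitarian” in the structural sense throughout: it runs the same maximisation either way, and only the quality of its beliefs separates the first answer from the second.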