But note that humans are far from fully consequentialist, since we often obey deontological constraints or constraints on the types of reasoning we endorse.
I think the ways in which humans are not fully consequentialist are much broader—we often do things out of habit or instinct, because doing the thing feels rewarding in itself, because we’re imitating someone else, etc.
Probably because humans are not always doing optimization? That does raise an interesting question: is satisfying the first two criteria (which basically make you an optimizer) a necessary condition for satisfying the third one?