An ideally moral agent would be a consequentialist (though I won’t say “utilitarian” for fear of endorsing the Mere Addition Population Ethic). However, actual humans have very limited powers of imagination, very limited knowledge of ourselves, and very little power of prediction. We can’t be perfect consequentialists, because we’re horrible at imagining and predicting the consequences of our actions—or even how we will actually feel about things when they happen.
We thus employ any number of other things as admissible heuristics. Virtue ethics is used to encourage the rote-learning architecture of our lower brains to acquire behaviors that usually have good consequences, making those behaviors easier to generate on demand (as Qiaochu_Yuan said). Deontological rules are used to approximate our beliefs about which actions usually and predictably have good consequences.
When our heuristics break down, we often have enough context, detailed facts, and knowledge of which of our many cares are relevant to reason through the real consequentialist issues directly.