I’m not actually sure what to make of that—should we write off some moral intuitions as clearly evolved for not-actually-moral reasons … ?
All moral intuitions evolved for not-actually-moral reasons, because evolution is an amoral process. That is not a reason to write any of them off, though. Or perhaps I should say, it is only a reason to “write them off” to the extent that it feels like it is, and the fact that it sometimes does feel that way, to some people, is as fine an example as any of the inescapable irrationality of moral intuitions.
If we have the moral intuition, does that make the thing of moral value, regardless of its origins?
Why would one ever regard anything as having moral value, except as a consequence of some moral intuition? And if one has a second moral intuition to the effect that the first moral intuition is invalid on account of its “origins,” what is one to do, except reflect on the matter, and heed whichever of these conflicting intuitions is stronger?
This actually gets at a deeper issue, which I might as well lay out now, having to do with my reasons for rejecting the idea that utilitarianism, consequentialism, or really any abstract principle or system of ethics can be correct in a normative sense. (I think I would be called a moral noncognitivist, but my knowledge of the relevant literature is tissue-thin.) On a purely descriptive level, I agree with Kaj’s take on the “parliamentary model” of ethics: I feel (as I assume most humans do) a lot of distinct and often conflicting sentiments about what is right and wrong, good and bad, just and unjust. (I could say the same about non-ethical value judgements: beautiful vs. ugly, yummy vs. yucky, and so on.) I also have sentiments about what I want and don’t want that I regard as purely motivated by self-interest. It’s not always easy, of course, to mark the boundary between selfishly and morally motivated sentiments, but to the extent that I can, I try to disregard the former when deciding what I endorse as morally correct, even though selfishness sometimes (often, tbh) prevails over morality in guiding my actions.
On a prescriptive level, on the other hand, I think it would be incoherent for me to endorse any abstract ethical principle, except as a rule of thumb liable to admit any number of exceptions, because, in my experience, trying to deduce ethical judgements from first principles invariably leads to conclusions that feel wrong to me. And whereas I can honestly say something like “Many of my current beliefs are incorrect; I just don’t know which ones,” because I believe that there is an objective physical reality to which my descriptive beliefs could be compared, I don’t think there is any analogous objective moral reality against which my moral opinions could be held up and judged correct or incorrect. The best I can say is that, based on past experience, I anticipate that my future self is likely to regard my present self as morally misguided about some things.
Obviously, this isn’t much help if one is looking to encode human preferences in a way that would be legible to AI systems. I do think it’s useful, for that purpose, to study what moral intuitions humans tend to have, and how individuals resolve internal conflicts between them. So in that vein, it is worth noticing patterns like the resemblance between our intuitions about property rights and the act-omission distinction, and trying to figure out why we think that way.