It seems plausible that our capacity for moral judgment mirrors our capacity for belief formation in that it includes crude but efficient algorithms, analogous to what we call cognitive biases in the case of belief. But I don’t think it follows that we can make our moral judgments ‘more accurate’ by removing moral ‘biases’ in favor of some idealized moral formula. What our crude but efficient moral heuristics are approximating are evolutionarily advantageous strategies for our memes and genes. But I don’t really care about replicating the things that programmed me—I just care about what they programmed me to care about.
In belief formation there are likely biases that have evolutionary benefits too: it is easier to deceive others, for example, if you sincerely believe you will cooperate even when you are in a position to defect without retaliation. But we have an outside standard to check our beliefs against—experience. After many iterations of prediction and experiment, we learn which reasons for belief are reliable and which are not. Obviously, a good epistemology is a lot trickier than I’ve made it sound, but it seems like, in principle, we can make our beliefs more accurate by checking them against reality.
I can’t see an analogous standard for moral judgments. This wouldn’t be a big problem if our brains were cleanly divided into value-parts and belief-parts. We could then just fix the belief-parts and keep the crude-but-hey-that’s-how-evolution-made-us value-parts. But it seems like our values and beliefs are all mixed up in our cognitive soup. We need a sieve.
But I don’t really care about replicating the things that programmed me—I just care about what they programmed me to care about.
Tangential public advisory: I suspect that it is a bad cached pattern to focus on the abstraction where it is memes and genes that created you rather than, say, your ecological-developmental history, or your self two years ago, or various plausibly ideal futures you would like to bring about, &c. In the context of decision theory I’ll sometimes talk about an agent inheriting the decision policy of its creator process, which sometimes causes people to go “well, I don’t want what evolution wants, nyahhh”, which invariably makes me facepalm repeatedly in despair.