That’s why we need to decode the cognitive algorithms that generate our questions about value and morality. … So how can the Empathic Metaethicist answer Alex’s question? We don’t know the details yet. For example, we don’t have a completed cognitive neuroscience.
Assume you have complete knowledge of every detail of how the human brain works, and a detailed trace of the sequence of neurological events that leads people to ask moral questions. Then what?
My only guess is that you look this trace over using your current moral judgment, and conclude that changing certain things in the algorithm would make this brain's judgments better. But that is not an FAI-grade tool for defining morality (unless we have to go the uploads-driven way, in which case you just gradually and manually improve humans over a very long time).
I think you have hit the nail on the head. There may well be many scientifically interesting and useful reasons for investigating the fine details of the brain processes that eventuate in behaviours (or the uttering of words) which we interpret as moral, but it is far from obvious that this kind of knowledge will advance our understanding of morality.
More generally, there is plausibly a tension (at least on the surface) between two dominant themes on this site:
1) Naturalism: All knowledge, including knowledge of what's rational (or moral), is scientific. To learn what's rational (or moral), our only option is to study our native cognitive endowments.
2) Our/evolution’s imperfection: You can’t trust your untutored native cognitive endowment to make rational (or moral) judgements. Unless we make an effort not to, we make irrational judgements.
Yes, a completed cognitive neuroscience would certainly not be sufficient for defining the motivational system of an FAI.