This is a question about moral estimation. Simple questions of moral estimation can be resolved by observing people's reactions to situations they evolved to consider: saving vs. eating a human baby, for example. For more difficult questions, involving unusual or complicated situations, or situations with conflicting moral pressures, we simply lack any means of extracting information about their moral value. The only experimental apparatus we have is human reactions, and this apparatus has only so much resolution. The quality of theoretical analysis of observations made with this tool is also rather poor.
To move forward, we need better tools and better theory. Both could be obtained by improving humans: by making smarter humans who can consider more detailed situations and perform moral reasoning about them. This is not the best option, since we risk creating "improved" humans with slightly different preferences, so that moral observations obtained using the "improved" humans would be about their preference and not ours. Nonetheless, for some general questions, such as the value of copies, I expect that the answers given by such instruments would also be true of our own preference.
Another way, of course, is to just create an FAI, which will necessarily be able to perform moral estimation of arbitrary situations.