I think a lot of people are conflating a) improved ability to act morally with b) improved moral wisdom.
Remember, things like “fewer deaths and conflicts” do not by themselves mean moral progress. It’s only moral progress if people in general change their evaluation of the merit of, e.g., fewer deaths and conflicts.
So it really is a difficult question Eliezer is asking: can you imagine how you would come to have greater moral wisdom in the future, as evaluated by your present mental faculties?
My best answer is yes, in that I can imagine being better able to discern inherent conflicts between certain moral principles. Haphazard example: today, I might believe that a) assaulting people out-of-the-blue is bad, and b) credibly demonstrating the ability to fend off assaulters is good. In the future, I might notice that these come into conflict: a demonstration is only credible if there are real assaults to fend off, so if people value both of these, some people will inevitably end up with a utility function that encourages them to do a). I would then find out more precisely how much of one comes at how much cost of the other, and that pursuing certain combinations of them is impossible.
I call that moral progress. Am I right, assuming the premises?
Agreed.