Of course there aren’t. You can trivially imagine programming a computer to print, “2+2=5” and no verbal argument will persuade it to give the correct answer—this is basically Eliezer’s example! He also says that, in principle, an argument might persuade all the people we care about.
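For concreteness, here is a minimal sketch (my own illustration, not anything from Eliezer's post or the parent comment) of the kind of program being imagined: it reads whatever verbal argument you offer, ignores it, and prints the same wrong sum every time. The function name and the sample "arguments" are invented for the example.

```python
# Minimal sketch of a "mind" that no verbal argument can move.
# It accepts an argument as input, but its output never depends on it.

def stubborn_arithmetic(argument: str) -> str:
    """Return the same wrong sum regardless of the argument offered."""
    _ = argument  # the argument is received, but nothing downstream uses it
    return "2+2=5"

if __name__ == "__main__":
    pleas = [
        "Count two apples, then two more: you get four.",
        "Peano arithmetic proves that 2 + 2 = 4.",
        "Every calculator on Earth disagrees with you.",
    ]
    for plea in pleas:
        print(stubborn_arithmetic(plea))  # prints "2+2=5" three times
```

No possible string passed in as the argument changes the output, which is the point of the example: whether an argument can move a system is a fact about the system, not about the argument.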
While his point about evolution and ‘psychological unity’ seems less clear than I remembered, he does explicitly say elsewhere that moral arguments have a point. You should assign a high prior probability to a given human sharing enough of your values to make argument worthwhile (assuming various optimistic things about how argumentation with that person would go). As for me, I do think that moral questions which once provoked actual war can be settled for nearly all humans, and I think logic and evidence play a major part in that. I also think it wouldn’t take much of either to get nearly all humans to endorse, e.g., the survival of humanity; if you think that part’s unimportant, you may be forgetting Eliezer’s goal (and, in the abstract, you may be thinking of a narrower range of possible minds).
One thing which seems inherently right to me is that there would be an objective morality; it just happens to be apparently false in this universe.
How could it be true, aside from a stronger version of the previous paragraph? I don’t know if I understand what you want.
Of course there aren’t. You can trivially imagine programming a computer to print, “2+2=5” and no verbal argument will persuade it to give the correct answer -
You can’t persuade rocks either. Don’t you think this might be just a wee bit of a strawman of the views of people who believe in universally compelling arguments?