Do you think an AI reasoning about ethics would be capable of coming to your conclusions? And what “superintelligence policy” do you think it would recommend?
I’m pretty sure that FAI+CEV is supposed to prevent exactly this scenario, in which an AI is allowed to come to its own, non-foreordained conclusions.
FAI is supposed to come to whatever conclusions we would like it to come to (if we knew better, etc.). It’s not supposed to specify the whole of human value ahead of time; it’s supposed to ensure that the FAI extrapolates the right stuff.