I think the “safety” problems (let’s call them FAI for the moment) will be harder than AI itself, and the philosophical problems we would need to address to decide what we ought to do will be harder still. I see plenty of concern in LW and other futurist communities about AI “safety”, but approximately none about how to decide what the right thing to do actually is. “Preserving human values” may well be incoherent, and even if it is coherent, preserving humans may be incompatible with it.