I’m not aware of anyone trying to work on that problem (but I don’t follow academic philosophy, so for all I know there’s lots of relevant stuff even before my post).
It’s still at the top of my list of problems in moral philosophy.
The most natural other question of similar importance is how nice we should be to other humans, e.g. how we should prioritize actions that leave us better off and others worse off (whether those others are people different from us, people similar to us, governments that don’t represent their constituents well, etc.). Neither of those questions is a single simple question; they are big clouds of questions that feel core to the whole project of moral philosophy (though the AI one feels more like a single question, since so many of its aspects differ from what people normally think about).
(Obviously all of that is coming from a very consequentialist perspective, such that these questions involve a distinctive-to-consequentialists mix of axiology, decision theory, and understanding how moral intuitions relate to both.)
If anyone’s interested, I took a crack at writing down a good successor criterion.