At least one concern I’d have with worries about moral realism and AI alignment is that I think the case for moral realism is much weaker than is popularly believed in some circles. While a majority of analytic philosophers responding to the 2020 PhilPapers survey favor moral realism (62% favor realism, 26% favor antirealism), I take this more as evidence of problems in the field than as an especially good reason to think the “experts” are onto something.
More importantly, I don’t think moral realists have any good arguments for their position. Yet the view that moral realism isn’t merely false, but that it has very little going for it, doesn’t strike me as very popular. I am hoping that can change. If moral realism is as implausible as I think it is (or if it isn’t even intelligible), this should reduce our worries about it creating complications with AGI.