Excellent, thanks! I was pretty confident that some other iterations of something like these ideas must be out there. Will read and incorporate this (and get back to you in a couple days).
Actually, I had forgotten what ended up in the paper, but then I remembered, so I wanted to update my comment.
An early draft of this paper discussed deontology directly, but deontology comes in so many forms that for every argument I constructed, some version of deontological reasoning broke it. So I switched to discussing the question of moral facts independent of any particular ethical system. That said, the argument I make in the paper, that assuming moral realism is more dangerous than assuming moral antirealism or nihilism, closely parallels the concerns with deontology. Namely, if an AI assumes an ethical system can be captured by a set of rules, it will fail in the case where no set of rules can capture the best ethics for humans, so deontological AI poses a risk of false positives.
Hopefully the arguments about moral facts are still useful, and you might find the style of argumentation useful to your purposes.