I attribute this behavior in part to the desire to preserve the possibility of universal provably Friendly AI. I don’t think a moral anti-realist is likely to think an AGI can be friendly to both me and Aristotle. It might not even be possible for it to be friendly to both me and any other given person.
Well, that seems like the most dangerous instance of motivated cognition ever.
It seems like an issue that’s important to get right. Is there a test we could run to see whether it’s true?
Yes, but only once. ;)
Did you mean to link to this comment?
Thanks, fixed.