But if they are outcomes of reasoning and facts, then they can be changed by the presentation of better reasoning (...) I think you need to assume that your arbitrary mind has nothing in common with a human one, not even rationality.
Does that mean that, in your opinion, if we constructed an AI mind that uses a rational reasoning mechanism (such as Bayes), we wouldn’t need to worry, since we could persuade it to act morally correctly?
I’m not sure that is necessarily true, or even highly likely. But it is a possibility that is extensively discussed in non-LW philosophy, yet standardly ignored or bypassed on LW for some reason, as per my original comment. Is moral relativism really just obviously true?
Depends on how you define “moral relativism”. Kawomba thinks a particularly strong version is obviously true, but I think the LW consensus is that a weak version is.
I don’t think there is a consensus, just a belief in a consensus. EY seems unable or unwilling to clarify his position even when asked directly.