“Tom McCabe: speaking as someone who morally disapproves of murder, I’d like to see the AI reprogram everyone back, or cryosuspend them all indefinitely, or upload them into a sub-matrix where they can think they’re happily murdering each other without all the actual murder. Of course your hypothetical murder-lovers would call this immoral, but I’m not about to start taking the moral arguments of murder-lovers seriously.”
Beware shutting yourself into a self-justifying memetic loop. If you had been born in 1800, and just recently moved here via time travel, would you have refused to listen to all of our modern anti-slavery arguments, on the grounds that no moral argument by negro-lovers could be taken seriously?
“The AI would use the previous morality to select its actions: depending on the content of that morality it might or might not reverse the reprogramming.”
Do you mean would, or should? My question was what the AI should do, not what a human-constructed AI is likely to do.
It should be possible for an AI, upon perceiving any huge changes in renormalized human morality, to scrap its existing moral system and recalibrate from scratch, even if nobody actually codes an AI that way. Obviously, the previous morality will determine the AI’s very next action, but the interesting question is whether the important actions (the ones that directly affect people) map onto a new morality or the previous one.