… but relative to simply cooperating, it seems a clear win. Unless the superhappies have thought of it and planned a response.
Of course, the corollary for the real world would seem to be: those who think that most people would not converge if “extrapolated” by Eliezer’s CEV ought to exterminate the people they disagree with on moral questions before the AI is strong enough to stop them, unless Eliezer has programmed the AI to punish that sort of thing.
Hmm. That doesn’t seem so intuitively nice. I wonder if the difference between the scenarios is just quantitative (e.g. the amount of moral divergence), or qualitative (e.g. the babykillers are bad enough to justifiably be killed in the first place).