Only one superintelligent AMA (Artificial Moral Agent) is to be constructed, and it is to take control of the entire future light cone with whatever goal function is decided upon. Justification: a singleton is the likely default outcome for superintelligence, and stable co-existence of superintelligences, if achievable, would offer no inherent advantages for humans.
In the context of CEV, Eliezer apparently thinks that a singleton is desirable, not just likely.
I’m not convinced, but since Luke is going to critique CEV in any case, this aspect should be addressed.
ETA: I have been corrected—the quotation was not from Eliezer. Also, the quote doesn’t directly say that a singleton is a desirable outcome; it says that the assumption that we will be dealing with a singleton is a desirable feature of an FAI strategy.
I don’t know how much you meant to suggest otherwise, but just for context, the linked paper was written by Roko and me, not Eliezer, and doesn’t try to perfectly represent his opinions.
No, I didn’t realize that. Thanks for the correction, and sorry for the misattribution.
I have different justifications in mind, and yes, I will be explaining them in the book.