Of course, people can be crushed by impersonal markets as easily as they can by singletons. The case might be made that we would prefer a singleton because the task of controlling it would be less complex and error-prone.
A reasonable point, but I took Luke to be discussing the problems of designing a good singleton because a singleton seemed like the most likely outcome, not because he likes the singleton aesthetically or because a singleton would be easier to control.
In the context of CEV, Eliezer apparently thinks that a singleton is desirable, not just likely:

"Only one superintelligent AMA (Artificial Moral Agent) is to be constructed, and it is to take control of the entire future light cone with whatever goal function is decided upon. Justification: a singleton is the likely default outcome for superintelligence, and stable co-existence of superintelligences, if achievable, would offer no inherent advantages for humans."
I’m not convinced, but since Luke is going to critique CEV in any case, this aspect should be addressed.
ETA: I have been corrected—the quotation was not from Eliezer. Also, the quote doesn’t directly say that a singleton is a desirable outcome; it says that the assumption that we will be dealing with a singleton is a desirable feature of an FAI strategy.
I don’t know how much you meant to suggest otherwise, but just for context, the linked paper was written by Roko and me, not Eliezer, and doesn’t try to perfectly represent his opinions.
No, I didn’t realize that. Thx for the correction, and sorry for the misattribution.
I have different justifications in mind, and yes, I will be explaining them in the book.