It’s also worth noting that more than one person thinks a singleton is unlikely to arise and that alternative models are more probable. For example, Robin Hanson’s em scenario (“If Uploads Come First: The Crack of a Future Dawn”) seems fairly likely given that we have a decent Whole Brain Emulation Roadmap but nothing of the sort for synthetic AI, and people like Nick Szabo emphatically disagree that a single agent could outperform a market of agents.
Of course, people can be crushed by impersonal markets as easily as by singletons. A case might be made that we should prefer a singleton because the task of controlling it would be less complex and error-prone.
A reasonable point, but I took Luke to be discussing the problems of designing a good singleton because a singleton seemed like the most likely outcome, not because he prefers a singleton aesthetically or believes one would be easier to control.
In the context of CEV, Eliezer apparently thinks that a singleton is desirable, not just likely:

“Only one superintelligent AMA (Artificial Moral Agent) is to be constructed, and it is to take control of the entire future light cone with whatever goal function is decided upon. Justification: a singleton is the likely default outcome for superintelligence, and stable co-existence of superintelligences, if achievable, would offer no inherent advantages for humans.”
I’m not convinced, but since Luke is going to critique CEV in any case, this aspect should be addressed.
ETA: I stand corrected; the quotation was not from Eliezer. Also, the quote doesn’t directly say that a singleton is a desirable outcome; it says that the assumption that we will be dealing with a singleton is a desirable feature of an FAI strategy.

Indeed I shall.
I don’t know how much you meant to suggest otherwise, but just for context, the linked paper was written by Roko and me, not Eliezer, and doesn’t try to perfectly represent his opinions.
No, I didn’t realize that. Thanks for the correction, and sorry for the misattribution.
I have different justifications in mind, and yes, I will be explaining them in the book.
Yup, thanks.