A long time ago, a different person who also happens to be named “Eliezer Yudkowsky” said that, in the event of a clash between human beings and superintelligent AIs, he would side with the latter. The Yudkowsky we all know rejects this position, though it is not clear to me why.
Not clear why? Because he likes people and doesn't want everyone he knows (including himself), everyone he doesn't know, and any potential descendants of either to die? Doesn't that sound like the default position? Most people don't want their species to go extinct.