in the minds of people like Eliezer Yudkowsky or Paul Christiano, we’re more likely doomed than not
My impression of Paul is the opposite – he guesses “~15% on singularity by 2030 and ~40% on singularity by 2040”, and has said “quantitatively my risk of losing control of the universe through this channel [Eliezer’s list of lethalities] is more like 20% than 99.99%, and I think extinction is a bit less likely still”. (That said, I think he’d probably agree with all the reasons you stated under “I personally lean towards those latter views”.) Curious to know where you got the impression that Paul thinks we’re more likely doomed than not; I’d update more on his predictions than on nearly anyone else’s, including Eliezer’s.
My view of PC’s P(doom) came from (IIRC) Scott Alexander’s posts on Christiano vs Yudkowsky, where I remember a Christiano quote saying that although he imagines there will be multiple AIs competing, as opposed to a single one emerging through a singularity, this might be a worse outcome because it would be much harder to control. From that I concluded “Christiano thinks P(doom) > 50%”, which I realize is pretty sloppy reasoning. I will go back to those articles to check whether I misrepresented his views; for now I’ll remove his name from the post 👌🏻
You might be confusing “singularity” with “singleton” – that is, a single AI (or someone using AI) gaining control of the world?