This recent tweet claims that your current p(doom) is 50%.

In another post, you mentioned:

“[...] I give different numbers on different days. Sometimes that’s because I’ve considered new evidence, but normally it’s just because these numbers are just an imprecise quantification of my belief that changes from day to day.”
If the tweet is credible, I am curious whether this difference in p(doom) is due to the day-to-day fluctuation of your beliefs, or whether you have considered new evidence and your initial estimate that p(doom) < 20% is now outdated.
I clarified my views here because people kept misunderstanding or misquoting them.
The grandparent describes my probability that humans irreversibly lose control of AI systems, which I’m still guessing at 10-20%. I should probably think harder about this at some point and revise it; I have no idea which direction it will move.
I think the tweet you linked is referring to the probability that “humanity irreversibly messes up our future within 10 years of building human-level AI.” (It’s presented as the “probability of AI killing everyone,” which is not really right.)
I generally don’t know what people mean when they say p(doom). I think they probably imagine that the vast majority of existential risk from AI comes from loss of control, and that catastrophic loss of control necessarily leads to extinction, both of which seem hard to defend.
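To spell out why those two assumptions carry the weight (a sketch in illustrative notation; this decomposition is not from the original exchange): writing LoC for a catastrophic loss of control, the law of total probability gives

$$
P(\text{extinction}) = P(\text{LoC})\,P(\text{extinction} \mid \text{LoC}) + P(\neg\text{LoC})\,P(\text{extinction} \mid \neg\text{LoC}).
$$

Reading a “probability of AI killing everyone” as simply $P(\text{LoC})$ amounts to assuming both $P(\text{extinction} \mid \text{LoC}) \approx 1$ and $P(\neg\text{LoC})\,P(\text{extinction} \mid \neg\text{LoC}) \approx 0$, which are exactly the two claims that seem hard to defend.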