I liked this a lot, thanks for sharing.
Here’s one disagreement/uncertainty I have on some of it:
Both of the “What failure looks like” posts (yours and Paul’s) present failures that essentially seem like coordination, intelligence, and oversight failures. I think it’s very possible (maybe 30-46%+?) that pre-TAI AI systems will effectively solve the required coordination and intelligence issues.
For example, I could easily imagine worlds where an AI-enhanced epistemic environment makes low-risk solutions crystal clear to key decision-makers.
In general, the combination of AI plus epistemics, pre-TAI, seems very high-variance to me. It could go very positively, or very poorly.
This consideration isn’t enough to bring my p(doom) under 10%, but I’d probably be closer to 50% than you would be. (Right now, maybe 40% or so.)
That said, this really isn’t a big difference; it’s less than one order of magnitude.