My numbers for the last two questions were higher than I expected them to be; I have previously said things like “If Ajeya and Paul are right about timelines, we’re probably OK.” I still think that. But conditioning on AGI happening in 20 years is not the same as conditioning on Ajeya and Paul being right about timelines… To get more concrete: if I had Ajeya’s timelines, my overall p(doom) would be <50%. But instead I have much shorter timelines. When I condition on AGI happening in 2042, part of my probability mass goes to worlds where Ajeya was basically right, but part goes to “weird” worlds, e.g. worlds where there’s a scary AI accident in 2025 and AI gets successfully banned for 15 years until North Korea builds it. It’s hard to say how much doom there is in weird worlds like these, but it feels higher than in Ajeya’s median world, because the cost of making AI is lower, there’s more overhang, and takeoff speeds are faster.
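To write that decomposition out (a minimal sketch of the point, not anything precise; the $W_i$ are just placeholder labels for scenarios like the ones above):

$$p(\mathrm{doom} \mid \mathrm{AGI\ in\ 2042}) = \sum_i p(\mathrm{doom} \mid W_i)\; p(W_i \mid \mathrm{AGI\ in\ 2042})$$

where each $W_i$ is a scenario consistent with AGI arriving around 2042 (Ajeya basically right, the ban-then-North-Korea world, etc.). Even if the doom term for “Ajeya basically right” is below 50%, the weird-world scenarios get nonzero weight under the conditioning, and their higher doom terms pull the overall mixture up.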
(I should add that while I’m fairly confident that existential risk from AI is very high conditional on it arriving soon, I am very unconfident in my p(doom) conditional on AI arriving later.)
As I was reading this, I remembered that we had a conversation about your timelines, I think about a year ago. If I recall correctly, they were already short (~50% before 2030?). Have they dropped further since then?
Yep! Just a few years shorter.