I answered the following subquestion to help me answer the overall question: “How likely is it that the condition Rohin specifies will not be met by 2100?”
This could happen due to any of the following non-mutually exclusive reasons:
1. Global catastrophe before the condition is met that means people are no longer thinking about AI safety (e.g. human extinction or the end of civilization): 50%
2. Condition is met sometime after the timeframe (mostly, I’m imagining that AI progress is slower than I expect): 5%
3. AGI succeeds despite the condition not being met: 30%
4. There’s some huge paradigm shift that makes AI safety concerns irrelevant—maybe most people are convinced that we’ll never build AGI, or our focus shifts from AGI to some other technology: 10%
5. Some other reason: 20%
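As a rough sanity check on these numbers, here is a minimal sketch of how the five routes would combine into an overall probability, assuming (purely for illustration, and more strongly than the "non-mutually exclusive" framing above requires) that the routes are independent:

```python
# Illustrative only: treats the five routes above as independent, which is a
# stronger assumption than "non-mutually exclusive" and may not be intended.
routes = {
    "global catastrophe first": 0.50,
    "condition met after 2100": 0.05,
    "AGI built despite condition unmet": 0.30,
    "paradigm shift": 0.10,
    "some other reason": 0.20,
}

p_no_route = 1.0
for p in routes.values():
    p_no_route *= 1.0 - p       # chance this particular route does not happen

p_not_met = 1.0 - p_no_route    # chance at least one route happens
print(f"P(condition not met by 2100) ~= {p_not_met:.2f}")  # ~0.76
```

Under that reading the routes combine to roughly 0.76, somewhat above the 60% headline estimate below, which is consistent with the routes overlapping rather than being independent.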
I thought about this subquestion before reading the comments or looking at Rohin’s distribution. Based on that thinking, I estimated a 60% chance that the condition would not be met by 2100.
The biggest difference is that I estimate the risk of this kind of global catastrophe before the development of AGI and before 2100 to be much lower. I’m not sure exactly what, but 10% seems like the right ballpark. That said, this did cause me to update towards putting more probability on >2100.
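To make the size of that disagreement concrete, here is the same purely illustrative independence calculation, comparing the 50% global-catastrophe figure from the list with the ~10% ballpark suggested here; the other four numbers are kept from the list only for illustration:

```python
# Same purely illustrative independence assumption as above, comparing the
# 50% global-catastrophe figure with the ~10% ballpark suggested here.
other_routes = (0.05, 0.30, 0.10, 0.20)

for p_catastrophe in (0.50, 0.10):
    p_no_route = 1.0 - p_catastrophe
    for p in other_routes:
        p_no_route *= 1.0 - p
    print(f"catastrophe risk {p_catastrophe:.0%}: "
          f"P(not met by 2100) ~= {1.0 - p_no_route:.2f}")
# catastrophe risk 50%: ~0.76
# catastrophe risk 10%: ~0.57
```

Under this (again purely illustrative) reading, the catastrophe term alone accounts for most of the gap between the two overall estimates.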