Thanks! This reads as an incredibly sober and reasonable assessment. Like many others here, I am somewhat more worried that AGI is not far out, mostly because I don’t see any compelling reason for why developments would slow.
> The reasons I think AI x-risk is unlikely also argue against Our Glorious Future coming from AGI, so I expect that there is less to be gained by not slowing AI.
I think this is an important point that is often missed by people dismissive of AI. If transformative AI is actually far off, then there is not much to worry about, but also not much to gain. So to assess the risks of going ahead, the probability that matters is the probability that eventual powerful AI will in fact be safely controllable, not the total probability of x-risk from AI.
I also like your point about the opportunity costs of people working on AI, both in labs and, in response, on safety efforts. This really feels like an unfortunate dynamic and makes me personally quite sad to think about.