It just seems like there are a million things that could potentially go wrong.
Based on the five Maybes you suggested might happen, it sounds like you’re saying some AI doomers are overconfident because there are a million things that could potentially go right. But there doesn’t seem to be a good reason to expect any of those maybes to be likelihoods, and they seem more speculative (e.g. “consciousness comes online”) than the reasons well-informed AI doomers think there’s a good chance of doom this century.
I agree with you that we shouldn’t be too confident. But given how sharply capabilities research is accelerating—timelines on TAI are being updated down, not up—and in the absence of any obvious gating factor (e.g. the current costs of training LMs) that seems likely to slow things down much, if at all, the changeover from a world in which AI can’t doom us to one in which it can might happen faster than seems intuitively possible. Here’s a quote from Richard Ngo on the 80,000 Hours podcast that I think makes this point (episode link: https://80000hours.org/podcast/episodes/richard-ngo-large-language-models/#transcript):
“I think that a lot of other problems that we’ve faced as a species have been on human timeframes, so you just have a relatively long time to react and a relatively long time to build consensus. And even if you have a few smaller incidents, then things don’t accelerate out of control.
“I think the closest thing we’ve seen to real exponential progress that people have needed to wrap their heads around on a societal level has been COVID, where people just had a lot of difficulty grasping how rapidly the virus could ramp up and how rapidly people needed to respond in order to have meaningful precautions.
“And in AI, it feels like it’s not just one system that’s developing exponentially: you’ve got this whole underlying trend of things getting more and more powerful. So we should expect that people are just going to underestimate what’s happening, and the scale and scope of what’s happening, consistently — just because our brains are not built for visualising the actual effects of fast technological progress or anything near exponential growth in terms of the effects on the world.”
I’m not saying Richard is an “AI doomer”, but hopefully this helps explain why some researchers think there’s a good chance we’ll make AI that can ruin the future within the next 50 years.
PS I also have no qualifications on this.