Thanks for contributing your views. I think it’s really important for us to understand others’ views on these topics, as this helps us have sensible conversations, faster.
Most of your conclusions are premised on AGI being a difficult project from where we are now. I think this is the majority view outside of alignment circles and AGI labs (which are different from AI labs).
My main point is that our estimate of AGI difficulty should include very short timelines. We don’t know how hard AGI will turn out to be, but equally, we’ve never known how easy it might be.
After a couple of decades studying the human brain and mind, I’m afraid we’re quite close to AGI. It looks to me like the people who think most about how to build AGI tend to think it’s easier than those who don’t. This seems important. The most accurate prediction of heavier-than-air flight would’ve come from the Wright brothers (and I believe their estimate was far longer than it actually took them). As we get closer to it, I personally think I can see the route there, and that exactly zero breakthroughs are necessary. I could easily be wrong, but it seems like expertise in how minds work probably counts somewhat in making that estimate.
I think there’s an intuition that what goes on in our heads must be magical and amazing, because we’re unique. Thinking hard about what’s required to get from AI to us makes it seem less magical and amazing. Higher cognition operates on the same principles as lower cognition. And consciousness is quite beside the point (it’s a fascinating topic; I think what we know about brain function explains it rather well, but I’m resisting getting sidetracked by that because it’s almost completely irrelevant for alignment).
I’m always amazed by people saying “well sure, current AI is at human intelligence in most areas, and has progressed quickly, but it will take forever to do that last magical bit”.
I recognize that you have a wide confidence interval and take AGI seriously even if you currently think it’s far away and not guaranteed to be important.
I just question why you seem even modestly confident of that prediction.
Again, thanks for the post! You make many excellent points. I think all of these have been addressed elsewhere; fascinating discussions of most of them exist, mostly on LW.
I don’t believe that “current AI is at human intelligence in most areas”. I think it is superhuman in a few areas, within the human range in some areas, and subhuman in many areas, especially areas where the things you’re trying to do are not well-specified tasks.
I’m not sure how to weight people who think most about how to build AGI vs. more general AI researchers (whose median forecast is HLAI in 2059, with p(Doom) of 5-10%) vs. forecasters more generally. There’s a difference in how much people have thought about it, but also selection bias: most people who are skeptical of near-term AGI are unlikely to end up in alignment circles or at an AGI lab. The relevant reference class is not the Wright Brothers, since it’s only in hindsight that we know they were the ones who succeeded. One relevant reference class is the Society for the Encouragement of Aerial Locomotion by means of Heavier-than-Air Machines, founded in 1863, although I don’t know what their predictions were. It might also make sense to include many groups of futurists focusing on many potential technologies, rather than just the one technology that we know worked out.
I agree that there’s a heavy self-selection bias for those working in safety or AGI labs. So I’d say both of these factors are large, and how to balance them is unclear.
I agree that you can’t use the Wright Brothers as a reference class, because you don’t know in advance who’s going to succeed.
I do want to draw a distinction between AI researchers, who think about improving narrow ML systems, and AGI researchers. There are people who spend much more time thinking about how breakthroughs to next-level abilities might be achieved, and what a fully agentic, human-level AGI would be like. The line is fuzzy, but I’d say these two ends of a spectrum exist. I’d say the AGI researchers are more like the society for aerial locomotion. I assume that society had a much better prediction than the class of engineers who’d rarely thought about integrating their favorite technologies (sailmaking, bicycle design, internal combustion engine design) into flying machines.