I agree that there’s a heavy self-selection bias for those working in safety or AGI labs. So I’d say both of these factors are large, and how to balance them is unclear.
I agree that you can’t use the Wright Brothers as a reference class, because you don’t know in advance who’s going to succeed.
I do want to draw a distinction between AI researchers, who think about improving narrow ML systems, and AGI researchers. There are people who spend much more time thinking about how breakthroughs to next-level abilities might be achieved, and what a fully agentic, human-level AGI would be like. The line is fuzzy, but I’d say these two ends of a spectrum exist. I’d say the AGI researchers are more like the society for aerial locomotion. I assume that society made much better predictions than the class of engineers who’d rarely thought about integrating their favorite technologies (sailmaking, bicycle design, internal combustion engine design) into flying machines.