I’m with you in spirit, but in practice I feel the need to point out that timelines, takeoff speeds, and p(doom) really should be heavily correlated. I’m actually a bit surprised people’s views on them aren’t more correlated than they already are. (See this poll and this Metaculus question.)
Slower takeoff causes shorter timelines, via R&D acceleration. At the same time, slower takeoff correlates with longer timelines, because timelines are mainly a function of training requirements (a.k.a. AGI difficulty), and the higher the training requirements, the more gradually we’ll cross the distance from here to there.
And shorter timelines and faster takeoff both make p(doom) higher in a bunch of obvious ways—less time for the world to prepare, less time for the world to react, less time for failures/mistakes/bugs to be corrected, etc.
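To make that correlational claim concrete, here’s a quick toy simulation (purely illustrative; the variable names and numbers below are made up, not taken from the poll or the Metaculus question). A single latent “training requirements” factor drives both timelines and takeoff duration, and both feed into a crude p(doom) proxy via how much time the world gets to prepare:

```python
# Toy Monte Carlo sketch (made-up parameters) of how a shared latent
# "AGI difficulty" variable can induce correlations between timelines,
# takeoff speed, and p(doom).
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Latent training requirements / AGI difficulty (arbitrary units).
difficulty = rng.normal(loc=0.0, scale=1.0, size=n)

# Higher difficulty -> longer timelines and a more gradual (slower) takeoff.
timeline_years = np.maximum(1.0, 15 + 8 * difficulty + rng.normal(0, 3, size=n))
takeoff_years = np.maximum(0.5, 4 + 2 * difficulty + rng.normal(0, 1, size=n))

# Crude proxy: more calendar time and a more gradual takeoff both give the
# world more chance to prepare, react, and fix mistakes, lowering p(doom).
preparedness = 0.5 * timeline_years + 1.0 * takeoff_years
p_doom = 1 / (1 + np.exp(0.1 * (preparedness - 20)))  # arbitrary squashing

print("corr(timelines, takeoff):", np.corrcoef(timeline_years, takeoff_years)[0, 1])
print("corr(timelines, p_doom): ", np.corrcoef(timeline_years, p_doom)[0, 1])
print("corr(takeoff, p_doom):   ", np.corrcoef(takeoff_years, p_doom)[0, 1])
```

The point isn’t the specific numbers; it’s just that a common cause (training requirements) plus the “time to prepare” channel is enough to make views on all three quantities line up.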
A common generator of doominess is a cluster of views that are something like “AGI is an attractor state that, following current lines of research, you will by default fall into with relatively little warning”. And this view generates doominess about timelines, takeoff speed, difficulty of solving alignment, consequences of failing to solve alignment on the first try, and difficulty of coordinating around AI risk. But I’m not sure how it generates or why it should strongly correlate with other doomy views, like:
Pessimism that warning shots will produce any positive change in behavior at all, as distinct from whether the response to a warning shot would be sufficient to change the outcome
Extreme confidence that someone, somewhere will dump lots of resources into building AGI, even in the face of serious effort to prevent this
The belief that narrow AI basically doesn’t matter at all, strategically
High confidence that the cost of compute will continue to drop on or near trend
People seem to hold these beliefs in a way that’s not explained by the first list of doomy beliefs. It’s not just that coordinating around reducing AI risk is hard because AGI is a thing you can make suddenly and by accident; it’s that the relevant people and institutions are incapable of such coordination. It’s not just that narrow AI won’t have time to do anything important because of short timelines; it’s that the world works in a way that makes it nearly impossible to steer in any substantial way unless you are a superintelligence.
A view like “aligning things is difficult, including AI, institutions, and civilizations” can at least partially generate this second list of views, but overall the case for strong correlations seems iffy to me. (To be clear, I put substantial credence in the attractor state thing being true and I accept at least a weak version of “aligning things is hard”.)
Well said. I might quibble with some of the details, but I basically agree that the four views you list here should theoretically be only mildly correlated with timelines & takeoff views, and that we should try to test how strong the correlation is in practice to determine how much of a general doom-factor bias people have.
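If someone wanted to actually run that test, here’s roughly what it could look like; this is only a sketch, with simulated answers standing in for real survey responses and invented loadings and labels. The idea is to compute pairwise correlations among the doom-related views and see how much variance a single “general doom factor” (the first principal component of the correlation matrix) explains:

```python
# Hypothetical sketch: estimate how much a single "general doom factor"
# explains respondents' views. The data below are simulated stand-ins;
# real usage would load actual survey responses instead.
import numpy as np

rng = np.random.default_rng(1)
n_respondents = 500

# Simulate respondents whose views load (with noise) on one latent factor.
doom_factor = rng.normal(size=n_respondents)
views = np.column_stack([
    0.7 * doom_factor + rng.normal(scale=0.7, size=n_respondents),   # short timelines
    0.6 * doom_factor + rng.normal(scale=0.8, size=n_respondents),   # fast takeoff
    0.8 * doom_factor + rng.normal(scale=0.6, size=n_respondents),   # high p(doom)
    0.3 * doom_factor + rng.normal(scale=0.95, size=n_respondents),  # warning-shot pessimism
])

corr = np.corrcoef(views, rowvar=False)          # pairwise correlations
eigvals = np.linalg.eigvalsh(corr)               # ascending eigenvalues
share = eigvals[-1] / eigvals.sum()              # variance explained by 1st PC

print("pairwise correlations:\n", np.round(corr, 2))
print("variance explained by a single doom factor:", round(float(share), 2))
```

On real survey data, a large share for that first component would suggest people hold a general doominess factor beyond what the object-level arguments imply.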
Uh, unless I’m quite confused, you mean faster takeoff causes shorter timelines. Right?
No. I think it is indeed the usual wisdom that slower takeoff causes shorter timelines. I found Paul Christiano’s argument linked in the article pretty convincing.
Correct.