A common generator of doominess is a cluster of views that are something like “AGI is an attractor state that, following current lines of research, you will by default fall into with relatively little warning”. And this view generates doominess about timelines, takeoff speed, difficulty of solving alignment, consequences of failing to solve alignment on the first try, and difficulty of coordinating around AI risk. But I’m not sure how it generates or why it should strongly correlate with other doomy views, like:
Pessimism that warning shots will produce any positive change in behavior at all, separate from whether a response to a warning shot will be sufficient to change anything
Extreme confidence that someone, somewhere will dump lots of resources into building AGI, even in the face of serious effort to prevent this
The belief that narrow AI basically doesn’t matter at all, strategically
High confidence that the cost of compute will continue to drop on or near trend
People seem to hold these beliefs in a way that’s not explained by the first list of doomy beliefs. It’s not just that coordinating around reducing AI risk is hard because AGI is something you can build suddenly and by accident; it’s that the relevant people and institutions are incapable of such coordination. It’s not just that narrow AI won’t have time to do anything important because of short timelines; it’s that the world works in a way that makes it nearly impossible to steer in any substantial way unless you are a superintelligence.
A view like “aligning things is difficult, including AI, institutions, and civilizations” can at least partially generate this second list of views, but overall the case for strong correlations seems iffy to me. (To be clear, I put substantial credence in the attractor state thing being true and I accept at least a weak version of “aligning things is hard”.)
Well said. I might quibble with some of the details, but I basically agree that the four views you list here should theoretically be only mildly correlated with timelines & takeoff views, and that we should try to test how strong the correlation is in practice to determine how much of a general doom-factor bias people have.