I’m more sympathetic to this argument (which is a claim about what might happen in the future, as opposed to what is happening now, which is the version of the analogy I usually encounter, though possibly not on LessWrong). I still think the analogy breaks down, though in different ways:
There is a strong norm of openness in AI research (though that might be changing, and perhaps this was also the case with nuclear physics).
There is a strong anti-government / anti-military ethic in the AI research community. I’m not sure what the nuclear analog is, but I’m guessing it was neutral or pro-government/military.
Governments are staying a mile away from AGI; their interest in AI is in narrow AI’s applications. Narrow AI applications are diverse, and many can be built by a huge number of people. In contrast, nukes were a single technology, governments were interested in them, and only a few actors could plausibly build them. (This is relevant if you think a lot of narrow AI could be used to take over the world economically.)
OpenAI / DeepMind are not adversarial towards each other. In contrast, the US and Germany definitely were.
Assuming you agree that people are already pushing too hard for progress in AGI capability (relative to what’s ideal from a longtermist perspective), I think the current motivations for that are mostly things like money, prestige, scientific curiosity, wanting to make the world a better place (in a misguided / short-termist way), etc., and not so much wanting to take over the world or to defend against such attempts. This seems likely to persist in the near future, but my concern is that if AGI research gets sufficiently close to fruition, governments will inevitably get involved and start pushing it even harder due to national security considerations. (Recall that the Manhattan Project started several years before the detonation of the first nuke.) Your argument seems more about what’s happening now, and does not really address this concern.
you agree that people are already pushing too hard for progress in AGI capability (relative to what’s ideal from a longtermist perspective)
I’m uncertain, given the potential for AGI to be used to reduce other x-risks. (I don’t have strong opinions on how large other x-risks are and how much potential there is for AGI to differentially help.) But I’m happy to accept this as a premise.
Your argument seems more about what’s happening now, and does not really address this concern.
I think what’s happening now is a good guide to what will happen in the future, at least on short timelines. If AGI is more than 100 years away, then sure, a lot will change and current facts are relatively unimportant. If it’s less than 20 years away, then current facts seem very relevant. I usually focus on the shorter timelines.
Over min(20 years, time till AGI), I’d weakly predict that each individual trend I identified will continue (except perhaps openness, because that’s already changing).