Assuming you agree that people are already pushing too hard for progress in AGI capability (relative to what’s ideal from a longtermist perspective), I think the current motivations for that are mostly things like money, prestige, scientific curiosity, wanting to make the world a better place (in a misguided/short-termist way), etc., and not so much wanting to take over the world or to defend against such attempts. This seems likely to persist in the near future, but my concern is that if AGI research gets sufficiently close to fruition, governments will inevitably get involved and start pushing it even harder due to national security considerations. (Recall that the Manhattan Project started only about three years before the first nuke was detonated.) Your argument seems more about what’s happening now, and does not really address this concern.
you agree that people are already pushing too hard for progress in AGI capability (relative to what’s ideal from a longtermist perspective)
I’m uncertain, given the potential for AGI to be used to reduce other x-risks. (I don’t have strong opinions on how large other x-risks are and how much potential there is for AGI to differentially help.) But I’m happy to accept this as a premise.
Your argument seems more about what’s happening now, and does not really address this concern.
I think what’s happening now is a good guide to what will happen in the future, at least on short timelines. If AGI is >100 years away, then sure, a lot will change and current facts are relatively unimportant. If it’s <20 years away, then current facts seem very relevant. I usually focus on the shorter timelines.
For min(20 years, time till AGI), I’d weakly predict that each individual trend I identified will continue (except perhaps openness, because that’s already changing).