> you agree that people are already pushing too hard for progress in AGI capability (relative to what’s ideal from a longtermist perspective)
I’m uncertain, given the potential for AGI to be used to reduce other x-risks. (I don’t have strong opinions on how large other x-risks are and how much potential there is for AGI to differentially help.) But I’m happy to accept this as a premise.
> Your argument seems more about what’s happening now, and does not really address this concern.
I think what’s happening now is a good guide to what will happen in the future, at least on short timelines. If AGI is > 100 years away, then sure, a lot will change and current facts are relatively unimportant. If it’s < 20 years away, then current facts seem very relevant. I usually focus on the shorter timelines.
Over the next min(20 years, time till AGI), I’d weakly predict that each individual trend I identified will continue (except perhaps openness, which is already changing).