In terms of time-scales, I am pretty ignorant, but I personally will not be too surprised if the highest-risk period arrives in only a couple of years, nor if it arrives in more than thirty years.
That sounds like a good estimate of the uncertainty, but is it communicated well to those who decide to drop everything and work on AI safety?
I agree that many of those who decide to drop everything to work on AI safety expect AI sooner than that. (Though far from all of them.)
It seems to me, though, that even if AI is in fact coming fairly soon, e.g. in 5 years, dropping everything is probably still unhelpful for reducing AI risk in most cases, compared to continuing to have hobbies and not eating one's long-term deep interests, spiritual health, and ability to make new sense of things.
Am I missing what you’re saying?
I agree that the 5-30 year time frame is more like a marathon than a sprint, but the people you are talking about treat it like a sprint. That would make sense if there were a clear, low-uncertainty estimate of "we have to finish in 5 years, and we have 10 years' worth of work to do", in which case: better get cracking, everything else is on hold. But a more realistic estimate seems to be "the TAI timeline is somewhere between a few years and a few decades, and we have no clue how much work AI safety entails, or whether it is even an achievable goal. Worse, we cannot even estimate the effort required to figure out whether the goal is achievable, or even meaningful." In that latter case, it's a marathon of unknown length, and one has to pace oneself. I wonder if this message is intentionally downplayed to keep the sense of urgency going.