The most interesting unknown about the future is when Strong AI will be created. Our priors are insufficient to predict it, because the task is so unique.
I’m not sure this follows. The primary problems with predicting the rise of Strong AI apply to most other artificial existential risks as well.
Many of them may be predicted using the same logic. For example, we can estimate when nuclear weapons will next be used in war from the fact that they were first used in 1945. This gives a 75 per cent probability of use within the next 105 years.
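Figures of this shape can be reproduced with Gott's delta-t argument, which assumes we observe a process at a uniformly random point in its total duration. This is a minimal sketch of that argument, not necessarily the exact method behind the 75 per cent figure (the function name is mine):

```python
def p_event_within(t_past, horizon):
    """Gott's delta-t argument.

    If the observation moment is uniformly random within the process's
    total duration, the probability that the remaining duration is at
    most `horizon`, given `t_past` years elapsed so far, is
    P = horizon / (horizon + t_past).
    """
    return horizon / (horizon + t_past)

# A horizon of three times the elapsed time always yields 75 per cent:
# e.g. 35 years elapsed -> 105-year horizon.
print(p_event_within(35, 105))   # -> 0.75
print(p_event_within(70, 210))   # -> 0.75
```

Note that under this argument the 75 per cent level is always reached at a horizon of exactly three times the elapsed time, whatever the elapsed time is.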