Yes, an important question, though not one I wanted to tackle in this post!
In general, we seem to do better at predicting things when we use a model with moving parts and have the opportunity to calibrate our probabilities for many of those parts. If we built a model that made a negative prediction about the near-term prospects for a specific technology after we had calibrated many parts of the model on lots of available data, that should increase our confidence in that prediction about the technology's near-term prospects.
The most detailed model for predicting AI that I know of is The Uncertain Future (not surprisingly, an SI project), though unfortunately the current Version 1.0 isn't broken down into parts small enough to be easy to calibrate. For an overview of the motivations behind The Uncertain Future, see Changing the Frame of AI Futurism: From Storytelling to Heavy-Tailed, High-Dimensional Probability Distributions.