The examples make the point that it’s possible to be too pessimistic, and too confident in that pessimism. However, maybe we can figure out when we should be confidently pessimistic.
For example, we can be very confidently pessimistic about the prospects for squaring the circle or inventing perpetual motion. Here we have mathematical proofs of impossibility. I think we can be almost as confidently pessimistic about the near-term prospects for practical near-light-speed travel. Here we have a good understanding of the scope of the problem and of the capabilities of all practical sources of propulsion, and we can see that those capabilities are nowhere near enough.
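To put rough numbers on that gap, here is a back-of-the-envelope sketch. The craft mass, target speed, and propellant energy density are my own illustrative assumptions, not precise figures:

```python
# A rough check: relativistic kinetic energy needed to push a small craft
# to 0.9c, versus the chemical energy available per kilogram of propellant.
# All constants below are illustrative assumptions.

import math

C = 299_792_458.0          # speed of light, m/s
CRAFT_MASS_KG = 1_000.0    # assumed: a very small uncrewed probe
V_FRAC = 0.9               # assumed target: 90% of light speed
CHEM_J_PER_KG = 1.3e7      # ~13 MJ/kg, roughly hydrogen/oxygen combustion

gamma = 1.0 / math.sqrt(1.0 - V_FRAC**2)
kinetic_energy_j = (gamma - 1.0) * CRAFT_MASS_KG * C**2

# Even granting perfect conversion of chemical energy into kinetic energy
# (far more generous than the rocket equation allows), the required
# propellant mass dwarfs the craft by many orders of magnitude.
propellant_kg = kinetic_energy_j / CHEM_J_PER_KG

print(f"kinetic energy needed: {kinetic_energy_j:.2e} J")
print(f"ideal propellant mass: {propellant_kg:.2e} kg "
      f"({propellant_kg / CRAFT_MASS_KG:.0e}x the craft's mass)")
```

For a one-tonne probe this comes out to roughly nine billion times the craft's own mass in propellant, and that is before the rocket equation makes things exponentially worse. That is the sense in which the capabilities are nowhere near enough.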
Let’s not just leave it at “it’s possible to be too pessimistic.” How can we identify problems about which we can be confidently pessimistic?
Yes, an important question, though not one I wanted to tackle in this post!
In general, we seem to do better at predicting things when we use a model with moving parts, and we have the opportunity to calibrate our probabilities for many parts of the model. If we built a model that made a negative prediction about the near-term prospects for a specific technology, after we had calibrated many parts of the model on lots of available data, that should increase our confidence in that pessimistic prediction.
The most detailed model for predicting AI that I know of is The Uncertain Future (not surprisingly, an SI project), though unfortunately the current Version 1.0 isn't broken down into parts small enough to be easy to calibrate. For an overview of the motivations behind The Uncertain Future, see Changing the Frame of AI Futurism: From Storytelling to Heavy-Tailed, High-Dimensional Probability Distributions.
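To make the "model with moving parts" idea concrete, here is a toy sketch, not The Uncertain Future's actual model. Each part of the argument gets its own separately calibrated probability range (the component names and ranges below are hypothetical), and the overall prediction is computed from the parts rather than guessed at directly:

```python
# A toy illustration of a decomposed prediction: represent each part of
# the argument as a separately calibrated probability range, then combine
# them by Monte Carlo sampling instead of estimating the conjunction
# directly. Component names and ranges are hypothetical.

import random

# Hypothetical components of a prediction about some technology, each
# given as a (low, high) probability range calibrated against data.
COMPONENTS = {
    "required_subproblem_1_solved_soon": (0.05, 0.20),
    "required_subproblem_2_solved_soon": (0.10, 0.30),
    "enough_funding_and_effort":         (0.30, 0.60),
}

def sample_conjunction(components, n=100_000, seed=0):
    """Estimate the probability that *all* required components come through."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        p = 1.0
        for low, high in components.values():
            p *= rng.uniform(low, high)  # assumed: components are independent
        total += p
    return total / n

p_success = sample_conjunction(COMPONENTS)
print(f"P(near-term success) ~= {p_success:.3f}")
print(f"P(failure) ~= {1 - p_success:.3f}")  # the confident-pessimism estimate
```

The point of the decomposition is that each small range can be checked against available data on its own, which is much harder to do for the conjunction as a whole.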