One thing that has always bothered me is that people confuse predictions of technological capability with predictions of policy. In terms of the former, the predictions of moon colonies and such were quite accurate. We could indeed have had cities on the moon before 2000 if we wished. We had the technological capability. In terms of the latter, though, they were not accurate at all. Ultimately, space policy turned away from such high-cost endeavours. The problem is that the trajectory of technology is much easier to predict than the trajectory of policy. Technology usually follows a steady curve upward. Human policy, on the other hand, is chaotic and full of free variables.
In short, we could indeed have had moon colonies, flying cars, and jetpacks, but we chose not to, for a variety of (good) reasons.
The ‘bomber will always get through’ prediction was inaccurate because it underestimated technological progress: namely, the development of radar and, later, of ICBMs. Similarly, people dramatically failed to predict the rise of the internet, or of computing technology in general, due to (1) underestimating technological progress and (2) underestimating human creativity. In fact, failures of technological prediction tend more often to be failures of underestimation than of overestimation.
This is why it’s always good to draw a line between these two concepts, and I fear that in your post you have not done so. We could, in principle, have a FOOM scenario. Nothing we know about computing and intelligence suggests it is impossible. On the other hand, whether we will choose to create a FOOM-capable AI is a matter of human policy. It could be that we collectively agree to ban AI above a certain capability level (a sad and unenlightened decision, but it could happen). The point of all this is that trying to foresee public policy is a fool’s errand. It’s better to assume the worst and plan accordingly.