Yes, people say all sorts of unjustified stuff about AI as if their musings were true, out of excitement and carelessness. But the line of thought in the post is ultimately destructive because it sets low expectations for no good reason.
To use the scientific method just means to make falsifiable predictions. So any arbitrary hypothesis counts, no matter how outlandish, so long as it’s predictive. On the other hand, you don’t need to use science in order to reason, and since “human-level AI” is not available for experimental study, we can only reason about it. But it’s a pretty sure thing that such an AI will think that 1+1 equals 2…
There are no details here, e.g. about the methodologies used to produce futurological predictions of the “time until X”, or about the premises employed in reasoning about AI dispositions and capabilities. That means there’s no argument about the degree of reliability or usefulness that can be obtained when reasoning about AI, just the bare assertion “not even as good as the worst of social science”. Also, there’s no consideration of the power of intention: a lot of the important statements in LW’s AI futurology are about designing an AI to have desired properties.
I’m constructing a detailed analysis of all these points for my “How to Predict AI” paper.
And there are few details about methodologies, yes—because the vast majority of predictions have no methodologies. The quality of predictions is really, really low, and there are reasons to suspect that even when the methodologies are better, the predictions are still barely better than guesswork.
My stub was an unjustified snark, but the general sentiment behind it—that AI predictions (especially timeline predictions) are less reliable than social science results—is, as far as I can tell, true.