It’s a good paper overall, and I’m glad to see it’s been published—especially the Maes-Garreau material! (I wonder what Kevin Kelly made of our results? His reaction would’ve been neat to mention.)
But reading it all in one place, I think one part seems pretty weak: the criticism of the ‘expert’ predictions. It seems to me there ought to be more rigorous forms of assessment, and I wonder about possible explanations for the clumping at 20+ years: the full median-estimate graph seems to show a consistent expert trend post-1970s to put AI at x-2050 (I can’t read the dates because the graphs are so illegible, what the heck?), and also a large number of recent predictions. Perhaps there really is a forming expert consensus, the clump is due to the topic gaining a great deal of attention recently, and the non-expert predictions are just taking their cue from the experts (as one would hope!).
Hi
Re the graph quality: I’m REALLY sorry and I have to apologize to Stuart for the poor quality of the images; it’s kind of my fault… When I typeset the final version of the proceedings, the pages were in A5 format, but laid out on A4 paper. We sent it to the printing company and they ran it through some program that cropped the pages to A5. Alas, this program also terribly compressed the images, and I didn’t check the result carefully before letting them print it. So this is it… Once more, sorry about that.
The only thing I can do is to fix it in this electronic version—will be done asap.
Anyway, thanks Stuart for your great talk!
Best wishes
Jan Romportl
Well, at least it’s partially fixed… (Actually this reminds me that, as ElGalambo pointed out earlier, I should update the Wikipedia Maes-Garreau article.)
The original data can be found via: http://lesswrong.com/lw/e79/ai_timeline_prediction_data/
(much better to use that than to squint at the pictures!)
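For anyone who wants to actually re-run the numbers rather than squint, here is a minimal sketch in Python of the kind of check being discussed: per-group median lead times and the share of predictions sitting 20+ years out. The column names (year_published, prediction_year, is_expert) are hypothetical; adjust them to whatever the actual spreadsheet export uses.

```python
import csv
from statistics import median

def load_predictions(path):
    """Yield (year_published, prediction_year, is_expert) tuples.
    Column names are hypothetical; match them to the real CSV export."""
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            try:
                published = int(row["year_published"])
                predicted = int(row["prediction_year"])
            except (KeyError, ValueError):
                continue  # skip rows without two usable dates
            yield published, predicted, row.get("is_expert") == "yes"

def summarize(path):
    rows = list(load_predictions(path))
    for label, expert in (("experts", True), ("non-experts", False)):
        subset = [(pub, pred) for pub, pred, e in rows if e == expert]
        if not subset:
            continue
        # "Lead time" = predicted arrival of AI minus year prediction was made.
        leads = [pred - pub for pub, pred in subset]
        print(f"{label}: n={len(subset)}, "
              f"median lead={median(leads)} yrs, "
              f"median predicted date={median(p for _, p in subset)}")
        # The 'clumping' claim: what share of predictions sit 20+ years out?
        share = sum(l >= 20 for l in leads) / len(leads)
        print(f"  share with lead >= 20 yrs: {share:.2f}")

summarize("ai_predictions.csv")
```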
My subjective impressions: predictors very rarely quote or reference each other when making predictions. Many predictions seem to be purely individual guesses. I’ve seen no sign of an expert consensus, nor much sign of experts critiquing or commending each other’s work. I really feel that predicting AI has not been seen as something where anyone should listen to other people’s opinions. There are some exceptions (Kurzweil, for instance, seems famous enough that people are willing to quote his estimates, usually to claim he got it wrong), but too few.
They may not cite each other, but the influence can still be there as background reading, etc. I may not cite Legg when I say I think there’s a good chance of breakthroughs in the 2020s, but the influence is there (well, it was until I mentioned him just now). To give a real-world example: compiling http://www.gwern.net/2012%20election%20predictions I know that the forecasters were all reading each other’s blogs, Twitter feeds, etc., because in scouring their sites I saw enough cross-links and overlapping topics; but anyone who looked at just the relevant prediction pages or prediction CSVs would miss that completely and think the forecasters were deriving their similar predictions from independent models.
I think there are a lot of shared ideas and shared reading which are rarely explicitly cited in the same passage as a specific prediction, with the exception of really offensive estimates like Kurzweil’s self-promotion (have you been reading the reviews of his latest book? Everyone’s dragging out Hofstadter’s old “dog shit” quote, and one can’t help but feel he would not have been so explicit and crude if Kurzweil didn’t really rub him the wrong way). But I don’t know how one would test the consensus idea other than waiting and seeing whether expert predictions continue to cluster around 2040 even as we hit the 2020s and 2030s.
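Actually, one crude test short of waiting occurs to me: if experts are converging on a calendar date like 2040, the predicted year shouldn’t drift much as the publication date advances; if instead it’s a Maes-Garreau-style moving horizon, the predicted year should track the publication year roughly one-for-one. So regress predicted year on publication year: a slope near 0 suggests a genuine calendar-date consensus, a slope near 1 suggests everyone is just saying “twenty-odd years from now”. A rough sketch, reusing the hypothetical rows from the earlier snippet:

```python
from statistics import mean

def horizon_slope(pairs):
    """OLS slope of predicted year regressed on publication year.
    Slope near 0: predictions anchored to a fixed calendar date (consensus-like).
    Slope near 1: predictions track the forecaster's own present (moving horizon)."""
    xs, ys = zip(*pairs)
    mx, my = mean(xs), mean(ys)
    return (sum((x - mx) * (y - my) for x, y in pairs)
            / sum((x - mx) ** 2 for x in xs))

# E.g., on the expert subset from the sketch above:
# print(horizon_slope([(pub, pred) for pub, pred, e in rows if e]))
```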
I’m actually thinking that the “non-experts were no better than experts” bit is maybe a little misleading, as I remember seeing a lot of the non-experts basing their predictions on what the experts had been saying.
Really? That wasn’t my recollection. But you probably saw the data more than I did, so I’ll bear that in mind in future!
The link now points to the fixed proceedings (better image resolution). Sorry once again. Jan