I copied and pasted the “Time To AI” chart and did some simple graphic manipulations: making the vertical and horizontal axes equal, extending the X-axis, and drawing diagonal lines “down and to the right” to show which points predicted which dates. It was an even more interesting graphic that way!
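For anyone who wants to reproduce the transformation, here is a minimal matplotlib sketch of the idea (my reconstruction, not the original Paint file, and the points are made-up placeholders rather than the real “Time To AI” data): with equal axis scales, a 45-degree line drawn down and to the right from each point (year the prediction was made, predicted years until AI) hits the X-axis exactly at the implied arrival year, since year_made + years_until is constant along that diagonal.

```python
# Sketch of the "diagonal lines" reading of a time-to-AI chart.
# The `predictions` list is illustrative placeholder data only.
import matplotlib.pyplot as plt

predictions = [(1950, 50), (1970, 30), (1999, 25), (2005, 35)]

fig, ax = plt.subplots()
for year_made, years_until in predictions:
    ax.plot(year_made, years_until, "ko")
    # Diagonal down to y=0 lands at the implied arrival date.
    ax.plot([year_made, year_made + years_until], [years_until, 0],
            "k--", lw=0.5)

ax.set_xlabel("Year prediction was made")
ax.set_ylabel("Predicted years until AI")
ax.set_aspect("equal")  # equal scales make the diagonals exactly 45 degrees
plt.show()
```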
It looked like four or five Gaussians, representing four or five distinct theories, were on display. All the early predictions (I assume that first one is Turing himself) go with a sort of “robots by 2000” prediction scheme that seems consistent with The Jetsons and what might have happened without “the great stagnation”. All of the espousers of this theory published before the AI winter, and you can see a gap in predictions on the subject from about 1978 to about 1994. Predicting AGI arrival in 2006 was never trendy; it seems to have always been predicted earlier or later.
The region from 2015 through 2063 has either one or two groups betting on it: instead of being “Gaussian-ish”, it is strongly weighted towards the front end, suggesting perhaps a bimodal group that isn’t easy to split into two definite clusters. One hump sometimes predicts dates as late as the 2050s, but the main group really likes the 2020s and 2030s. The first person to express anything like this theory was an expert writing around 1979 (before the AI winter really set in, which is interesting), though I’m not sure who it was off the top of my head. There’s a massive horde expressing this general theory, but they seem to have come in waves: non-experts during the dotcom bubble (predicting early-ish), then a gap in the bubble’s aftermath, then a wave of experts predicting a bit later.
Like 2006, the year 2072 is not very trendy for AGI predictions. However, around 2080 to 2110 there seems to be a cluster that was led by three non-expert opinions expressed between 1999 and 2003 (i.e., the dotcom bubble aftermath). A few years later, five experts chimed in to affirm the theory. I don’t recognize the theory by name or rhetoric, but based on the sparse data my rough label for it might be “the singularity is late”.
The final coherent theory seems to be four people predicting “2200”; my guess here is just that it’s really far in the future and a nice round number. Two of the four were experts and two non-experts, and it looks like two predicted pre-bubble and two post-bubble.
For what it’s worth, eyeballing my re-worked “Time to AI” figure indicates a median of about 2035, and my last moderately thoughtful calculation gave a median arrival of AGI at about 2037, with later arrivals being more likely to be “better” and, in the meantime, prevention of major wars or arms races being potentially more important to work on than AGI issues. The proximity of these dates to the year 2038 (when 32-bit Unix timestamps overflow) is pure ironic gravy. Still, I have always sort of suspected that one chunk of probability mass should take the singularity seriously, because if it happens it will be enormously important, while another chunk should be methodologically mindful of the memetic similarities between the Y2K Bug and the Singularity (i.e., both are non-supernatural, computer-based eschatologies which, whatever their ultimate truth status, would naturally propagate in roughly similar ways before the fact was settled).
How many degrees of freedom does your “composition of N theories” theory have? I’m not inclined to guess, since I don’t know how you went about this. I just want to point out that 260 is not many data points; clustering is very likely going to give highly non-reproducible results unless you’re very careful.
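To make the degrees-of-freedom worry concrete: a mixture of N one-dimensional Gaussians has 3N − 1 free parameters (N means, N variances, and N − 1 independent weights), which is a lot of flexibility for 260 points. Here is a minimal sketch, using scikit-learn and made-up placeholder dates rather than the real dataset, of fitting mixtures with varying component counts and comparing BIC:

```python
# Sketch: how many "theories" (Gaussian components) does BIC prefer?
# The `dates` array is hypothetical placeholder data, not the real chart.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
dates = np.concatenate([
    rng.normal(2000, 8, size=120),   # hypothetical "robots by 2000" cluster
    rng.normal(2035, 12, size=100),  # hypothetical mid-century cluster
    rng.normal(2100, 15, size=40),   # hypothetical "singularity is late" cluster
]).reshape(-1, 1)                    # 260 points total

for n in range(1, 7):
    # A 1-D mixture with n components has 3n - 1 free parameters:
    # n means + n variances + (n - 1) independent weights.
    gmm = GaussianMixture(n_components=n, n_init=10, random_state=0)
    gmm.fit(dates)
    print(f"N={n}: params={3 * n - 1}, BIC={gmm.bic(dates):.1f}")
```

With a sample this small, re-running under different seeds or subsamples can easily flip which N the BIC prefers, which is exactly the non-reproducibility concern.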
I went about it by manipulating the starting image in Microsoft Paint: stretching, annotating, and generally manipulating it until the “biases” (like different scales for the vertical and horizontal axes) were gone and inferences that seemed “sorta justified” had been crudely visualized. Then I wrote text that put the imagery into words, attempting to functionally serialize the image: being verbally precise where my visualization seemed coherent, verbally ambiguous where it seemed fuzzy, and giving each “cluster” a paragraph.
Based on memory, I’d guess it was 90–250 minutes of pleasantly spent cognitive focus, depending on how you count? (Singularity stuff is just a hobby, not my day job, and I’m more of a fox than a hedgehog.) The image is hideous relative to “publication standards for a journal”, and an honest methods section would mostly just read “look at the data, find a reasonable and interesting story, and do your best to tell the story well”, so it would probably not be reproducible by people who didn’t have similar “epistemic tastes”.
Despite these limits, if anyone wants to PM me an email address (plus a link back to this comment to remind me what I said here), I can forward the re-processed image to you so you can see it in all its craptastic glory.