Why do you think the scale of the bias is unlikely to be more than a few decades?
Many expert physicists declared heavier-than-air flight impossible (e.g. Kelvin). Historical examples seem to exist of a key insight taking a discovery from “impossible” or distant to very near term, so might AI be similar? (In such a case, the likelihood of AI by year X may be higher than experts say.)
Because the differences between estimates made by people who should be highly selected for optimism (e.g. AGI researchers) and people who should be much less so (other AI researchers and, more importantly but more noisily, other people) are only a few decades.
According to this week’s Muehlhauser, as summarized by you:
What about the thousand-year estimates? Are those rare outliers?
Yeah, I’m just saying the median estimates probably don’t differ by that many decades. Thousand-year estimates are relatively common, but they don’t seem to be the median for any group that I know of.
I’m interested in your statement that “other people” have estimates that are only a few decades off from the optimistic ones. Though this is anecdotal and not very useful for this conversation, my impression is that a significant portion of informed but uninvolved people place a <50% chance on significant superintelligence occurring within the century. For context, I’m a LW reader and a member of that personality cluster, but none of the people I am exposed to are in it. Can you explain why your contacts make you feel differently?
How about human-level AI? Or AI that is above human intelligence but not called “a superintelligence”?
I feel like the general public is over-exposed to predictions of drastic, apocalyptic change, and phrasing is going to sway public opinion a lot, especially since they don’t have the same set of rigorous definitions to fall back on that a group of experts does.
Firstly, I only meant that ‘other’ people are probably only a few decades off from the predictions of AI people. Note that AI people are much less optimistic than AGI people or futurists, with 20% or so predicting it will arrive only after this century.
My contacts don’t make me feel differently. I was actually only talking about the different groups in the MIRI dataset pictured above (the graph with four groups shown earlier). Admittedly the ‘other’ group there is very small, so one can’t infer that much from it. I agree your contacts may be a better source of data, if you know their opinions in an unbiased way. I also doubt the non-AGI AI group is as strongly selected for optimism about eventual AGI from among humans as AGI people are from among AI people. And since the difference between AI people and AGI people is only a couple of decades at the median, I doubt the difference between AI researchers and other informed people is that much larger.
It may be that people who make public comments at all tend to be a lot more optimistic than those who do not, though the relatively small apparent differences between surveys and public statements suggest not.
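A minimal sketch of the comparison-of-medians argument in the last reply, in Python. The group names and every number here are made-up placeholders for illustration, not values from the MIRI dataset the conversation refers to:

```python
import statistics

# Hypothetical predicted years for human-level AI, by respondent group.
# These numbers are invented for illustration; the real figures are in
# the MIRI dataset discussed above.
predictions = {
    "AGI researchers": [2030, 2040, 2045, 2050, 2060],        # most selected for optimism
    "other AI researchers": [2045, 2055, 2065, 2070, 2090],   # less selected
    "other people": [2050, 2060, 2070, 2100, 2150],           # least selected, noisiest
}

# Compute the median predicted year for each group.
medians = {group: statistics.median(years) for group, years in predictions.items()}
for group, med in medians.items():
    print(f"{group}: median {med}")

# The argument: if selection for optimism were a large bias, the median of the
# most-selected group should sit far below the medians of the less-selected
# groups. A gap of only a couple of decades between groups suggests the bias
# is on the order of decades, not centuries.
gap = medians["other AI researchers"] - medians["AGI researchers"]
print(f"Median gap, AGI vs. other AI researchers: {gap} years")
```

If the printed gap were centuries rather than a couple of decades, the bound argued for above would not hold; the point is only that the gaps observed in the actual data are small.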