I’m interested in your statement that “other people” have estimates that are only a few decades off from optimistic trends. Although not very useful for this conversation, my impression is that a significant portion of informed but uninvolved people place a <50% chance of significant superintelligence occurring within the century. For context, I’m a LW reader and a member of that personality cluster, but none of the people I am exposed to are. Can you explain why your contacts make you feel differently?
How about human level AI? How about AI that is above human intelligence but not called “a superintelligence”?
I feel like the general public is over-exposed to predictions of drastic apocalyptic change and phrasing is going to sway public opinion a lot, especially since they don’t have the same set of rigorous definitions to fall back on that a group of experts does.
Firstly, I only meant that ‘other’ people are probably only a few decades off from the predictions of AI people. Note that AI people are much less optimistic than AGI people or futurists: around 20% of them predict AI will arrive after this century.
My contacts don’t make me feel differently. I was actually only talking about the different groups in the MIRI dataset pictured above (the graph with four groups shown earlier). Admittedly, the ‘other’ group there is very small, so one can’t infer that much from it. I agree your contacts may be a better source of data, if you know their opinions in an unbiased way. I also doubt that the non-AGI AI group is as strongly selected from among humans for optimism about eventual AGI as AGI people are from among AI people. And since the difference between AI people and AGI people is only a couple of decades at the median, I doubt the difference between AI researchers and other informed people is much larger.
It may be that people who make public comments at all tend to be a lot more optimistic than those who do not, though the relatively small apparent differences between surveys and public statements suggest not.