I wonder if the initial 67% in favor of x-risk was less a reflection of the audience’s opinion on AI specifically than a general application of the heuristic “<X fancy new technology> = scary, needs regulation.”
(That is, if you replaced AI with any other technology that general audiences are vaguely aware of but don’t have a strong opinion on, such as CRISPR or nanotech, would they default to about the same number?)
Also, I would guess that hearing two groups of roughly equally smart-sounding people debate a topic one has no strong opinion on tends to revise one’s initial opinion closer to “looks like there’s a lot of complicated disagreement so idk maybe it’s 50⁄50 lol,” regardless of the actual specifics of the arguments made.
That’s a good point, and it’s supported by the high share of the audience (92%) who said they were prepared to change their minds.