I’m confused about the game theory of this kind of protest. If protests don’t work, fine, no harm done either way. But if they do work, what’s to stop the “do publicize this!” crowd (accelerationists, open source AI people, etc) from protesting on their own? Also, I have no idea about the relative numbers, but what if they could protest with 10x the number of people?
I think the main thing stopping the accelerationists and open source enthusiasts from protesting with 10x as many people is that, whether for good reasons or not, there is much more opposition to AI progress and proliferation than support among the general public. (Admittedly this is probably less true in the Bay Area, but I would be surprised if it was even close to parity there and very surprised if it were 10x.)
Thanks, that’s very helpful context. In principle, I wouldn’t put too much stock in the specific numbers of a single poll, since results depend heavily on question wording and the like. But the trend in this poll is consistent enough across all the questions that I’d be surprised if they could be massaged to get the opposite results, let alone ones 10x in favor of the accelerationist side.
(That said, I didn’t like the long multi-paragraph questions further down in the poll. I felt like many were phrased to favor the cautiousness side somewhat, which biases the corresponding answers. Fortunately there were also plenty of short questions without this problem.)
I believe this has been replicated consistently across many polls. For the results to change, reality (in the sense of popular opinion) likely has to change, rather than polling techniques.
On the other hand, popular opinion changing isn’t that unlikely, as it’s not something that either voters or elites have thought much about, and (fortunately) opinion has not yet split along partisan lines.
One very common pattern: most people oppose a technology when it’s new and unfamiliar; then, once it’s been established for a while and no longer seems so strange, most people think it’s great.
Yeah, I’m afraid of this happening with AI even as the danger becomes clearer. It’s one reason we’re in a really important window for setting policy.
This seems to me like a second order correction which is unlikely to change the sign of the outcome.
In the case of issues where one “pulls sideways,” politically speaking, I also expect indirect effects to be comparatively unimportant. But in zero-ish-sum political conflicts, I’m more apprehensive about indirect effects and arguments.