I had a similar gut reaction. When I tried to run down my brain’s root causes for the view, this is what came out:
There are two kinds of problem you can encounter in politics. One type is where many people disagree with you on an issue. The other type is where almost everyone agrees with you on the issue, but most people are not paying attention to it.
Protests as a strategy are valuable in the second case, but worthless or counterproductive in the first case.
If you are being knifed in an alleyway, your best strategy is to make as much noise as possible. Your goal is to attract people to help you. You don’t really need to worry that your yelling might annoy people. There isn’t a meaningful risk that people will come by, see the situation, and then decide that they want to side with the guy knifing you. If lots of people start looking at the situation, you win. Your loss condition is ‘no-one pays attention’, not ‘people pay attention but take the opposite side’.
And if you are in an isomorphic sort of political situation, where someone is doing something that basically everyone agrees is genuinely outrageous but nobody is really paying attention to, protests are a valuable strategy. They will annoy people, but they will draw attention to this issue where you are uncontroversially right in a way that people will immediately notice.
But if you are in an argument where substantial numbers of people disagree with you, protests are a much less enticing strategy, and one that often seems to boil down to saying ‘lots of people disagree with me, but I’m louder and more annoying than them, so you should listen to me’.
And ‘AI development is a major danger’ is very much the ‘disagreement’ kind of issue at the moment. There is not broad social consensus that AI development is dangerous such that ‘get lots of people to look at what Meta is doing’ will lead to good outcomes.
I have no actual expertise in politics and don’t actually know this to be true, but it seems to be what my subconscious thinks on this issue.
I think that, in particular, protesting Meta releasing their models to the public is a lot less likely to go well than protesting, say, OpenAI developing their models. Releasing models to the public looks virtuous on its face, both to the general public and to many technologists. Protesting it will draw attention to that release specifically, and so will tend to paint the developers of more advanced models in a comparatively better light and their opponents in a comparatively worse one.
I agree with a lot of your assessment of the situation, but I disagree that there is all that much controversy about this issue in the broader public. There is a lot of controversy on LessWrong and in tech, but the public as a whole is in favor of slowing down and regulating AI development. (Other AI companies also think sharing weights is really irresponsible, and there are anti-competitive issues with Llama 2’s ToS, which is why it isn’t actually open source.) https://theaipi.org/poll-shows-overwhelming-concern-about-risks-from-ai-as-new-institute-launches-to-understand-public-opinion-and-advocate-for-responsible-ai-policies/
The public doesn’t understand the risks of sharing model weights, so getting media attention on this issue would be helpful.