I want to make a hopefully-sandboxed comment:

I don’t think of myself as someone who thinks in terms of cringe, much, but apparently I have this reaction. I don’t particularly endorse it, or any implications of it, but it’s there. Maybe it means I have some intuition that the thing is bad to do, or maybe it means I expect it to have some weird unexpected social effect. Maybe it will be mocked in a way that shows that it’s not actually a good symbolic social move. Maybe the intuition is something like: protests are the sort of thing that the weak side does, the side that will be mainly ignored, or perhaps mocked, and so making a protest puts stop-AI-ists in a weak social position. (I continue to not endorse any direct implication here, such as “this is bad to do for this reason”.) Why would someone in power and reasoning in terms of power, like LeCun, take the stop-AI-ists seriously, when they’ve basically publicly admitted to not having social power, i.e. to being losers? Someone in power can’t gain more power by cooperating with losers, and does not need to heed the demands of losers because they can’t threaten them in the arena of power. (I do endorse trying to be aware of this sort of dynamic. I hope to see some version of the protest that is good, and/or some version of updating on the results or non-results.)
[ETA: and to be extra clear, I definitely don’t endorse making decisions from within the frame of social symbolic moves and power dynamics and conflict. That sort of situation is something that we are to some extent always in, and sometimes forced to be in and sometimes want to be in, but that thinking-frame is never something that we are forced to or should want to restrict our thinking to.]
I had a similar gut reaction. When I tried to run down my brain’s root causes of the view, this is what it came out as:
There are two kinds of problem you can encounter in politics. One type is where many people disagree with you on an issue. The other type is where almost everyone agrees with you on the issue, but most people are not paying attention to it.
Protests as a strategy are valuable in the second case, but worthless or counterproductive in the first case.
If you are being knifed in an alleyway, your best strategy is to make as much noise as possible. Your goal is to attract people to help you. You don’t really need to worry that your yelling might annoy people. There isn’t a meaningful risk that people will come by, see the situation, and then decide that they want to side with the guy knifing you. If lots of people start looking at the situation, you win. Your loss condition is ‘no-one pays attention’, not ‘people pay attention but take the opposite side’.
And if you are in an isomorphic sort of political situation, where someone is doing something that basically everyone agrees is genuinely outrageous but nobody is really paying attention to, protests are a valuable strategy. They will annoy people, but they will draw attention to this issue where you are uncontroversially right in a way that people will immediately notice.
But if you are in an argument where substantial numbers of people disagree with you, protests are a much less enticing strategy, and one that often seems to boil down to saying ‘lots of people disagree with me, but I’m louder and more annoying than them, so you should listen to me’.
And ‘AI development is a major danger’ is very much the ‘disagreement’ kind of issue at the moment. There is not broad social consensus that AI development is dangerous such that ‘get lots of people to look at what Meta is doing’ will lead to good outcomes.
I have no actual expertise in politics and don’t actually know this to be true, but it seems to be what my subconscious thinks on this issue.
I think that, in particular, protesting Meta releasing their models to the public is a lot less likely to go well than protesting, say, OpenAI developing their models. Releasing models to the public seems virtuous on its face both to the general public and to many technologists. Protesting that is going to draw attention to that specifically, and so will tend to paint the developers of more advanced models in a comparatively better light and their opponents in a comparatively worse light.
A lot of things that are cringe are specifically that way because they violate a socially-maintained taboo. Overcoming the taboos against these sorts of actions and these sorts of topics is precisely what we should be doing.
The fact that it is cringey is exactly the reason I am going to participate.
What if there’s a taboo against being able to pick and choose which taboos to “overcome”?
This seems likely to be the case; otherwise, taboos like the one on incest would have disappeared long ago in the US, since there is easily a hard-core group, almost certainly >0.1% of the US population, that would be just as interested in ‘overcoming’ that taboo.
Particularly in the rationalist community it seems like protesting is seen as a very outgroup thing to do. But why should that be? Good on you for expanding your comfort zone—hope to see you there :)
I agree with your assessment of the situation a lot, but I disagree that there is all that much controversy about this issue in the broader public. There is a lot of controversy on LessWrong, and in tech, but the public as a whole is in favor of slowing down and regulating AI development. (Although other AI companies think sharing weights is really irresponsible, and there are anti-competitive issues with Llama 2’s ToS, which is why it isn’t actually open source.) https://theaipi.org/poll-shows-overwhelming-concern-about-risks-from-ai-as-new-institute-launches-to-understand-public-opinion-and-advocate-for-responsible-ai-policies/
The public doesn’t understand the risks of sharing model weights, so getting media attention on this issue will be helpful.
Seems very plausible to me.
Took me a while to figure out where the quoted line is in the post. Now I realize that you were the one cringing.
FWIW I plan to show up unless I have something unusually important to do that day.
I just tried to change it from being a quote to being in a box. But apparently you need a package to put a box around verbatim text in LaTeX. https://tex.stackexchange.com/questions/6260/how-to-draw-box-around-text-that-contains-a-verbatim-block

So feature suggestion: boxes. Or LaTeX packages.
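For reference, one common approach from that thread is the fancyvrb package, whose Verbatim environment accepts a frame option — a minimal sketch (tcolorbox or mdframed are alternatives that give more styling control):

```latex
\documentclass{article}
% fancyvrb provides a Verbatim environment that, unlike the built-in
% verbatim environment, accepts formatting options such as a frame
\usepackage{fancyvrb}

\begin{document}
% frame=single draws a box around the verbatim block
\begin{Verbatim}[frame=single]
some verbatim text, boxed
\end{Verbatim}
\end{document}
```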
I commend your introspection on this.