A somewhat messy braindump[1] of my thoughts on this style of approach:
I’m excited to see you interested in participating in the distributed network of humans trying to ensure AI goes well! I don’t currently think protests are a useful form of collective action: they barely work as petition, and I suspect they are heavily amplified by the toxoplasmosis of conflict. Synchronized, organized expression of demand works best when a large group of people is a dependency of the system, because a protest shows that those engaging in the collective action of showing up could also engage in other collective actions. But OpenAI doesn’t need you to like them, and in fact the power imbalance with OpenAI is already frighteningly intense. Reaching the sort of many-vs-few bargaining position that collective action seeks to create seems to me to mostly amplify a thoughtless mammal-conflict response in the OpenAI team.

Strategic collective action probably looks more like figuring out ways to get many, many programmers thinking about how to survive the superintelligence transition at a technical level. If you’re interested in that form of capability-selected collective action, I’d love to talk about it, and would be enthusiastic to upvote a post discussing it. I’d also be interested in discussing what sorts of interpolitical societal discussion[2] are valuable.

I’m a big fan of solidarity as a base on which to build, and though I disagree with the global left in some ways about how best to achieve solidarity, what part of the global left doesn’t? In particular, though, in this situation, I think it is important to impress upon those used to being in charge at the top that all-of-humanity solidarity, not just human class solidarity, is needed to prevent the rise of catastrophic AI. If you’d like to do political organizing for the task of preventing the rise of dangerous AI, I’d suggest reading up on how AI policy people are currently thinking about it.
It’s unusually important to avoid polarizing this issue, because right now both left and right are terrified of AI for basically the same reason, though of course they phrase it differently (“preserving our values and preventing extinction” on the right, as usual, vs. “preserving worker rights and autonomy” on the left, again as usual; the unusual thing is that this time, these refer to the same thing!).
[1] I was one of the main examples criticized in “No, you need to write clearer”. Oh well, I guess.

[2] E.g., memetics; though that framing dismisses how much people can think about things before contributing.