I think what you said is right, but there’s a more fundamental dynamic behind it. Parties are coalitions, and when you join a coalition, you get support from others in that coalition for your interests, in exchange for your support for theirs. When I said “use political means to try to prevent that”, that includes either building or joining a coalition to increase the political power behind your agenda, and it’s often much easier to join/ally with an existing party than to build a new coalition from scratch. This naturally causes your opposition to join/ally with the other party/coalition.
Applying some of this to the AI case: the activist stuff has already happened. However, the AI corporations (the equivalent of big oil in our climate story) haven’t reacted the way big oil did. At least in their public-facing messaging, they’ve recognized and embraced the concerns to a sizeable degree (see Google DeepMind, OpenAI, and to some degree Facebook).
Another way to look at it though, is that the AI companies have co-opted some of the people concerned with AI risk (those on the more optimistic end of the spectrum) and cowed the rest (the more pessimistic ones, who think humanity should stop or slow down AI development) into silence (or at least only talking quietly amongst themselves). The more pessimistic researchers/activists know that they don’t have nearly enough political power to win any kind of open conflict now, so they are biding their time, trying to better understand AI risk (in part to build a stronger public case for it), doing what they can around the edges, and looking for strategic openings. (The truth is probably somewhere between these two interpretations.)
And if it really is as neglected as you think, I may take up thinking about it again a bit more seriously.
Another way to look at it though, is that the AI companies have co-opted some of the people concerned with AI risk (those on the more optimistic end of the spectrum) and cowed the rest...
Huh, that’s an interesting point.
I’m not sure where I stand on the question of “should we be pulling the brakes now,” but I definitely think it would be good if we had the ability to pull the brakes should it become necessary. It hadn’t really occurred to me that those who think we should be pulling the brakes now would feel quasi-political pressure not to speak out. I assumed the reason there’s not much talk of that option is that it’s so clearly unrealistic at this point; but I’m all in favor of building the capacity to do so (modulo Caplan-style worries about this accidentally going too far and leading to totalitarianism), and it never really occurred to me that this would be a controversial opinion.
Sounds good to me. It looks like your background is in philosophy, and I never thought I’d be seriously thinking about politics myself, but comparative advantage can be counter-intuitive. BTW, please check out Problems in AI Alignment that philosophers could potentially contribute to, in case you haven’t come across it already.
It looks like your background is in philosophy
Yep!
check out Problems in AI Alignment that philosophers could potentially contribute to, in case you haven’t come across it already.
I had come across it before, but it was a while ago, so I took another look. I was already planning on working on some stuff in the vicinity of the “Normativity for AI / AI designers” and “Metaethical policing” bullets (namely the problem raised in these posts by gworley), but looking at it again, the other stuff under those bullets, as well as the metaphilosophy bullet, sounds quite interesting. I’m also planning on doing some work on moral uncertainty (which, in addition to its relevance to global priorities research, also has some relevance for AI; based on my cursory understanding, CIRL seems to incorporate the idea of moral uncertainty to some extent), and perhaps other GPI-style topics. AI strategy/governance work, including the topics in the OP, is also interesting, and I’m actually inclined to think it may be more important than technical AI safety (though not far more important). But three disparate areas, each calling for expertise outside philosophy (AI: compsci; GPR: econ etc.; strategy: international relations), feel like a bit too much, and I’m not certain which I should ultimately settle on (though I have a bit of time; I’m at the beginning of my PhD atm). I guess the relevant factors are mostly the standard ones: which do I find most motivating/fun to work on, which can I skill up in fastest/easiest, which is most important/tractable/neglected? And which ones leave a reasonable back-up plan/off-ramp in case high-risk paths like academia or an EA org don’t work out?
Forgot one other thing I intend to work on: I’ve seen several people (perhaps even you?) say that the case for AI risk needs to be made more carefully than it has been; that’s another project I may work on.