Another way to look at it, though, is that the AI companies have co-opted some of the people concerned with AI risk (those on the more optimistic end of the spectrum) and cowed the rest...
Huh, that’s an interesting point.
I’m not sure where I stand on the question of “should we be pulling the brakes now,” but I definitely think it would be good if we had the ability to pull the brakes should it become necessary. It hadn’t really occurred to me that those who think we should be pulling the brakes now would feel quasi-political pressure not to speak out. I assumed the reason there’s not much talk of that option is because it’s so clearly unrealistic at this point; but I’m all in favor of building the capacity to do so (modulo Caplan-style worries about this accidentally going too far and leading to totalitarianism), and it never really occurred to me that this would be a controversial opinion.
It looks like your background is in philosophy.
Yep!
Check out Problems in AI Alignment that philosophers could potentially contribute to, in case you haven’t come across it already.
I had come across it before, but it was a while ago, so I took another look. I was already planning on working on some stuff in the vicinity of the “Normativity for AI / AI designers” and “Metaethical policing” bullets (namely the problem raised in these posts by gworley), but looking at it again, the other stuff under those bullets, as well as the metaphilosophy bullet, sounds quite interesting. I’m also planning on doing some work on moral uncertainty (which, in addition to its relevance to global priorities research, also has some relevance for AI; based on my cursory understanding, CIRL seems to incorporate the idea of moral uncertainty to some extent), and perhaps other GPI-style topics. AI strategy/governance work, including the topics in the OP, is also interesting, and I’m actually inclined to think it may be more important than technical AI safety (though not far more important). But three disparate areas, each calling for expertise outside philosophy (AI: compsci; GPR: econ etc.; strategy: international relations), feels like too much, and I’m not certain which I should ultimately settle on (though I have a bit of time; I’m at the beginning of my PhD at the moment). I guess the relevant factors are mostly the standard ones: which do I find most motivating/fun to work on, which can I skill up in fastest/easiest, and which is most important/tractable/neglected? And which ones leave a reasonable back-up plan/off-ramp in case high-risk jobs like academia or an EA org don’t work out?
Forgot one other thing I intend to work on: I’ve seen several people (perhaps even you?) say that the case for AI risk needs to be made more carefully than it has been; that’s another project I may work on.