I appreciate the comment, though I think there’s a lack of specificity that makes it hard to figure out where we agree/disagree (or more generally what you believe).
If you want to engage further, here are some things I’d be excited to hear from you:
What are a few specific comms/advocacy opportunities you’re excited about / have funded?
What are a few specific comms/advocacy opportunities you view as net negative / have actively decided not to fund?
What are a few examples of hypothetical comms/advocacy opportunities you’ve been excited about?
What do you think about, e.g., Max Tegmark/FLI, Andrea Miotti/Control AI, The Future Society, the Center for AI Policy, Holly Elmore, PauseAI, and other specific individuals or groups engaging in AI comms or advocacy?
I think if you (and others at OP) are interested in receiving more critiques or overall feedback on your approach, one thing that would be helpful is writing up your current models/reasoning on comms/advocacy topics.
In the absence of this, people simply notice that OP doesn’t seem to be funding some of the main existing comms/advocacy efforts, but they don’t really know why, or what kinds of comms/advocacy efforts you’d be excited about.
(An extra-heavy “personal capacity” disclaimer for the following opinions.) Yeah, I hear you that OP doesn’t have as much public writing about our thinking here as would be ideal for this purpose, though I think the increasingly adversarial environment we’re finding ourselves in limits how transparent we can be without undermining our partners’ work (as we’ve written about previously).
The set of comms/advocacy efforts that I’m personally excited about is definitely larger than the set of comms/advocacy efforts that I think OP should fund, since 1) that’s a higher bar, and 2) sometimes OP isn’t the right funder for a specific project. That being said:
So far, OP has funded AI policy advocacy efforts by the Institute for Progress and Sam Hammond. I personally don’t have a very detailed sense of how these efforts have been going, but the theory of impact for these was that both grantees have strong track records in communicating policy ideas to key audiences and a solid understanding of the technical and governance problems that policy needs to solve.
I’m excited about the EU efforts of FLI and The Future Society. In the EU context, it seems like these orgs were complementary, where FLI was willing to take steps (including the pause letter) that sparked public conversation and gave policymakers context that made TFS’s policy conversations more productive (despite raising some controversy). I have much less context on their US work, but from what I know, I respect the policymaker outreach and convening work that they do and think they are net-positive.
I think CAIP is doing good work so far, though they have less of a track record. I value their thinking about the effectiveness of different policy options, and they seem to be learning and improving quickly.
I don’t know as much about Andrea and Control AI, but my main current take is that their anti-RSP advocacy should have leaned more heavily on “RSPs are insufficient,” which I agree with, rather than “RSPs are counterproductive safety-washing,” which I think could have disincentivized companies from the very positive move of developing an RSP (as you and I discussed privately a while ago). MAGIC is an interesting and important proposal worth developing further (though, as with many clever acronyms, I kind of wish it had been named differently).
I’m not sure what to think about Holly’s work and PauseAI. The open source protest where they gave out copies of a GovAI paper to Meta employees seemed good – that seems like the kind of thing that could start really productive thinking within Meta. Broadly building awareness of AI’s catastrophic potential seems really good, largely for the reasons Holly describes here. Specifically calling for a pause is more complicated, both in terms of how good the various policies that could be called a “pause” would actually be and in terms of the politics (the public seems pretty on board, but it might backfire specifically with the experts that policymakers will likely defer to, though it might also inspire productive discussion around narrower regulatory proposals). I also worry that this cluster of activists can sometimes overstate or simplify their claims.
Some broader thoughts about what kinds of advocacy would be useful or not useful:
The most important thing, imo, is that whatever advocacy you do, you do it well. This sounds obvious, but importantly differs from “find the most important/neglected/tractable kind of advocacy, and then do that as well as you personally can do it.” For example, I’d be really excited about people who have spent a long time in Congress-type DC world doing advocacy that looks like meeting with staffers; I’d be excited about people who might be really good at writing trying to start a successful blog and social media presence; I’d be excited about people with a strong track record in animal advocacy campaigns applying similar techniques to AI policy. Basically I think comparative advantage is really important, especially in cases where the risk of backfire/poisoning the well is high.
In all of these cases, I think it’s very important to make sure your claims are not just literally accurate but also don’t have misleading implications and are clear about your level of confidence and the strength of the evidence. I’m very, very nervous about getting short-term victories by making bad arguments. Even Congress, not known for its epistemic and scientific rigor, has gotten concerned that AI safety arguments aren’t as rigorous as they need to be (even though I take issue with most of the specific examples they provide).
Relatedly, I think some of the most useful “advocacy” looks a lot like research: if an idea is currently only legible to people who live and breathe AI alignment, then writing it up clearly and rigorously, so that academics, policymakers, and the public can interact with it, critique it, and/or become advocates for it themselves, is very valuable.
This is obviously not a novel take, but I think, other things equal, advocacy should try not to make enemies. It’s really valuable that the issue remain somewhat bipartisan and that we avoid further alienating the AI fairness and bias communities and the mainstream ML community. Unfortunately “other things equal” won’t always hold, and sometimes these goals come with steep tradeoffs, but I’d be excited about efforts to build these bridges, especially by people who come from, or have spent lots of time in, the community to which they’re bridge-building.