I’m pleased with this dialogue and glad I did it. Outreach to policymakers is an important & complicated topic. No single post will be able to explain all the nuances, but I think this post explains a lot, and I still think it’s a useful resource for people interested in engaging with policymakers.
A lot has changed since this dialogue, and I’ve also learned a lot since then. Here are a few examples:
I think it’s no longer as useful to emphasize “AI is a big deal for national/global security.” This is now pretty well-established.
Instead, I would encourage people to come up with clear explanations of specific threat models (especially misalignment risks) and concrete proposals (e.g., draft legislative language, memos with specific asks for specific agencies).
I’d like to see more people write about why AI requires different solutions compared to the “standard DC playbook for dealing with potentially dangerous emerging technologies.” As I understand it, the standard playbook is essentially: “If there is a new and dangerous technology, the US needs to make sure that we lead in its development and stay ahead of the curve. The main threats come from our adversaries being able to unlock such technologies faster than us, allowing them to surprise us with new threats.” To me, the main reason this playbook doesn’t work is misalignment risk. Regardless: if you think AI is special (for misalignment reasons or other reasons), I think writing up your take on “here’s what makes AI special and why it requires a deviation from the standard playbook” is valuable.
I think people trying to communicate with US policymakers should keep in mind that the US government is primarily concerned with US interests. This is perhaps obvious when stated like this, but I think a lot of comms fail to properly take this into account. As one might expect, this is especially true when foreign organizations try to talk about things from the POV of what would be best for “humanity” or “global society.” To be clear, I think there are many contexts where such analysis is useful. But on the margin, I’d like to see more people thinking from a “realist US” perspective. That means acknowledging that a lot of US stakeholders view national security and emerging technology issues through the lens of great power competition, maintaining US economic/security dominance, and ensuring that US values continue to shape the world. This doesn’t mean that the US would never enter into deals/agreements with other nations, but rather that the case for any deal/agreement will be evaluated primarily from the vantage point of US interests.
RE learning “DC culture”, I don’t think there’s any substitute for actually going to DC and talking to people. But I think books and case studies can help.
Recent books I’ve read: John Boehner’s autobiography (former Speaker of the House, Republican), Leon Panetta’s autobiography (former CIA Director and Secretary of Defense, Democrat), and The Case for Trump. RE case studies, I’ve become interested in international security agreements (like the Iran Nuclear Deal and the Chemical Weapons Convention).
I’m also interested in understanding decision-making around the 2008 financial crisis, 9/11, the COVID pandemic, and the recent TikTok ban. Many people on this forum believe there’s a non-trivial chance that AI produces a catastrophe or some other “big wakeup moment” for policymakers, and I think we need more people with an understanding of history/IR/security studies/DC decision-making.
I think some people are overconfident in their perceptions of “what Republicans think” and “what Democrats think.” There is a lot of within-party split on AI and even more broadly on issues like tech policy, foreign policy, and national security. For example, while Republicans are typically considered more “hawkish”, there are plenty of noteworthy counterexamples. Reagan championed negotiations on arms control agreements. See the “Only Nixon could go to China” effect (I haven’t looked into it much but it seems plausible). Trump has recently expressed that “China and the US can together solve problems in the world” and described Xi as “an amazing guy.”
I think a lot of comms has focused on arguments with high-context people, but on the margin I’d rather see more content oriented toward “reasonable newcomers.” A lot of the content I see is reactive: it’s very tempting to see something Totally Wrong on the internet and want to correct it. And of course, we need some of that to happen; there is value in fact-checking and debating with high-context folks (people who have been thinking about advanced AI for a while) who have different perspectives.
But on the margin, I’d be excited to see more content aimed at a “reasonable newcomer,” e.g., a national security expert who recently got assigned the task of “understanding what is going on with advanced AI.” To some extent, this will require addressing arguments from people who are Totally Wrong™. But I think the more basic and important thing is having good resources that walk them through what you believe, why you believe it’s true, and what implications it has. (See the point I made earlier about “what makes AI different and why we can’t just apply the standard playbook for dangerous emerging tech.”)
I’ll conclude by noting that I remain quite interested in topics like “how to communicate about AI accurately and effectively with policymakers”, “what are the best federal AI policy ideas”, and “what are the specific points about AI that are most important for policymakers to understand.”
If you’re interested in any of this, feel free to reach out!