Speaking to Congressional staffers about AI risk
In May and June of 2023, I (Akash) had about 50-70 meetings about AI risks with congressional staffers. I had been meaning to write a post reflecting on the experience and some of my takeaways, and I figured it could be a good topic for a LessWrong dialogue. I saw that hath had offered to do LW dialogues with folks, and I reached out.
In this dialogue, we discuss how I decided to chat with staffers, my initial observations in DC, some context about how Congressional offices work, what my meetings looked like, lessons I learned, and some miscellaneous takes about my experience.
Context
Hey! In your message, you mentioned a few topics that relate to your time in DC.
I figured we should start with your experience talking to congressional offices about AI risk. I’m quite interested in learning more; there don’t seem to be many public resources on what that kind of outreach looks like.
How’d that start? What made you want to do that?
In March of 2023, I started working on some AI governance projects at the Center for AI Safety. One of my projects involved helping CAIS respond to a Request for Comments about AI Accountability that was released by the NTIA.
As part of that work, I started thinking a lot about what a good regulatory framework for frontier AI would look like. For instance: if I could set up a licensing regime for frontier AI systems, what would it look like? Where in the US government would it be housed? What information would I want it to assess?
I began to wonder how actual policymakers would react to these ideas. I was also curious to know more about how policymakers were thinking about AI extinction risks and catastrophic risks.
I started asking other folks in AI Governance. The vast majority had not talked to congressional staffers (at all). A few had experience talking to staffers but had not talked to them about AI risk. A lot of people told me that they thought engagement with policymakers was really important but very neglected. And of course, there are downside risks, so you don’t want someone doing it poorly.
After consulting something like 10-20 AI governance folks, I asked CAIS if I could go to DC and start talking to congressional offices. The goals were to (a) raise awareness about AI risks, (b) get a better sense of how congressional offices were thinking about AI risks, (c) get a better sense of what kinds of AI-related priorities people at congressional offices had, and (d) get feedback on my NTIA request for comment ideas.
CAIS approved, and I went to DC in May-June 2023. And just to be clear, this wasn’t something CAIS told me to do– this was more of an “Akash thing” that CAIS was aware was happening.
Whoa, that’s really interesting. A couple random questions:
And of course, there are downside risks, so you don’t want someone doing it poorly.
How does one go about doing it non-poorly? How does one learn to interact with policymakers?
Also, what’s your background? Did you do policy stuff before this?
Yeah, great question. I’m not sure what the best way to learn is, but here are some things I tried:
Talk to people who have experience interacting with policymakers. Ask them what they say, what they found surprising, what mistakes they made, what downside risks they’ve noticed, etc.
Read books. I found Master of the Senate and Act of Congress to be especially helpful. I’m currently reading The Devil’s Chessboard to better understand the CIA & intelligence agencies, and I’m finding it informative so far.
Do roleplays with policymakers you already know and ask them for blunt feedback.
Get practice in lower-stakes meetings, and use those experiences to iterate.
I hadn’t done much policy stuff before this. In college, I wrote for the Harvard Political Review and was involved in the Institute of Politics, but that’s a lot more academic than the “real-world policy engagement” stuff.
Arriving in DC & initial observations
That all makes sense. What did you do once you arrived in DC?
I cold-emailed congressional offices, as well as some executive branch folks. I also reached out to some EAs in DC. I also continued working on the NTIA Request for Comment (which was due June 6).
The initial plan was to have a few meetings, assess how they were going, and then have a bunch more if I thought they were going reasonably well.
In total, I ended up having about 50-70 meetings with congressional staffers (as well as some with folks at think tanks and folks in executive branch agencies, but I’ll focus this post on the meetings with congressional staffers).
I take it they went reasonably well, then?
According to me, yes! One thing I’ll note is that these can be somewhat hard to assess– like, staffers are supposed to be kind to people, and they’re not going to say things like “I thought you were an idiot” or “you wasted my time” or “I now have a worse impression of AI safety.”
With that caveat in mind:
I was surprised at how open-minded staffers seemed. The Overton Window has shifted so much recently, but at the time, I didn’t really know if people would be like “ha! Extinction risks? That sounds like sci-fi.”
The dominant vibe was “AI is really important, and I’m a busy staffer with 100 priorities, so I haven’t had time to learn about it. I’m really really excited to talk to someone who can tell me stuff about AI– I’ve been desperate to get up to speed on stuff.”
Staffers expressed a lot of gratitude for the opportunity to meet someone who was willing to answer basic questions about AI (e.g., what is a large language model, and how is it different from other kinds of AI? How many companies are working on frontier AI?).
There were some “tangible” signals that things were going well. For instance, some staffers introduced me to other people they knew, some people sent me work that their offices were drafting, and a few people even introduced me to congresspeople (two in total).
Hierarchy of a Congressional office
That’s really interesting!
As a brief aside, could you paint me a picture of what a Congressional office looks like in terms of staffing hierarchy? Like, who did you typically speak to, and what relationship did they typically have to the congressperson?
Great question! So, my understanding is that a Congressional office typically has the following roles, listed from “most influential” to “least influential”:
Chief of Staff
Legislative Director
Legislative Assistant
Legislative Correspondent
Interns and fellows
There are some other roles too, but these tend to matter most from a legislative perspective.
Note that each office has its own vibe. Someone once told me “each Congressional office is its own start-up, and each Congressperson gets to run their office however they want.”
So in some offices, interns and fellows might actually have a lot of influence (e.g., if a Congressperson or a Legislative Director trusts the intern to be the subject matter expert on a specific topic). But in general, I think this hierarchy is pretty common.
I think I mostly spoke with people at the legislative assistant/legislative correspondent level. I also spoke with a few legislative directors.
Outreach to offices
Okay, that all makes sense. So, how did you end up going from a few meetings to 50-70?
I sent a mass email to tech policy staffers, and I was pretty impressed by the number who responded. The email was fairly short, mentioned that I was at CAIS, had 1-2 bullets about what CAIS does, and had a bullet point about the fact that I was working on an NTIA request for comment.
I think it was/is genuinely the case that Congressional staffers are extremely interested in AI content right now. Like, I don’t think I would’ve been able to have this many meetings if I was emailing people about other issues.
There was some sense that “AI is very hot right now, but no one really knows much about AI”. I think it’s unclear how long that will last (especially the “people don’t know much & offices haven’t made up their minds” part).
I would go as far as to say something like “I think this was an opportunity that the AIS community as a whole could’ve/should’ve taken advantage of more.” Like, Congressional staffers were (and I think are still) extremely interested in engaging with people about AI– it’s hard to imagine a better opportunity for folks in the AIS community to be able to come in and serve as advisors/advocates.
That makes sense.
How might one be able to start engaging with Congressional staffers? What should one do to get into that space/what orgs might be well-positioned to deploy people for this?
This is going to be a pretty vague answer, but I think it depends a lot on the person, their skills, and their policy objectives.
Also– and I mentioned this above but it’s important to reiterate– there are definitely risks from people doing this kind of work poorly. On the other hand, there’s a risk of being too “inaction-biased”, or something like that, and leaving a lot of value on the table.
This is genuinely hard and confusing. I mentioned earlier that I consulted 10-20 AI governance people. Most of them were like “this seems important and neglected, but idk man, seems confusing.” Several of them were like “yeah, I totally think you should do this, especially if you employ XYZ tactics.” One fairly prominent AI governance person told me explicitly that they did not want me to do this. I found it hard to balance this conflicting feedback.
I also think a lot of my advice would depend on what exactly someone wants to say– like, for example:
What is their pitch going to be? If a meeting starts and the staffer says “so, what did you want to talk about?”, what’s the initial response?
Are they the kind of person that is good at asking questions and being curious about other people’s worldviews?
Are they going to sound alarmist?
Do they know a lot of facts about AI? When they don’t know something, are they going to be able to recognize that and hedge appropriately?
With all that in mind, if someone reading this was interested in engaging with Congressional staffers (or having someone in their org do this), and they valued my opinions, I recommend they reach out to me through LW. I’d be able to provide better advice with more context.
A typical meeting
Yeah, that all makes sense. Could you walk me through a typical meeting you might have had? Like, how would you first get in contact with a staffer, where would you meet them, what did the actual conversation look like, how would you follow up or otherwise figure out whether it was helpful?
Meeting Logistics
Would get in contact via email
Would usually meet them at the congressional office (there are basically 4 main buildings in DC that have all the congressional offices) or via Zoom
How meetings went
The conversation would typically start by me asking if they had any questions about AI or wanted me to share stuff I was working on. Usually, they wanted me to start.
I would start by introducing myself and CAIS. Once the CAIS statement came out, I would reference the CAIS statement. I’d tell them that I was focusing on global security risks from advanced AI. I’d also tell them that I was working on an NTIA response, and I’d tell them about some of the high-level ideas I was considering.
Then, I’d pause to see if they had any questions.
Often, they’d either ask more about extinction risks, or they’d ask miscellaneous questions about AI (e.g., do you have any thoughts on how to handle deepfakes?), or they’d bring up some high-level questions about regulation (e.g., how do we regulate without stifling innovation? How do we regulate without losing to China?)
In some of the best meetings, I’d hear about some of the AI-related stuff that the office was working on. Most offices don’t have the capacity/interest to take the lead on AI stuff. About 10% of offices were like “yeah, my congressperson is super interested in this, and we’re thinking about introducing legislation or being a core part of someone else’s legislation.”
Lots of people asked me if I had draft legislation. Apparently, if you have regulatory ideas, people want to see that you have a (short) version of it written up like a bill.
Following-up
I sent a follow-up to everyone I met with once the NTIA request for comment response was finished. When I had an especially good meeting (e.g., a staffer expressed strong interest in AI risks or told me that they wanted to send me something they were working on), I would send a personalized follow-up. I think the most tangible sign of helpfulness came from instances when people continued to send me questions/thoughts, introduced me to colleagues, or wanted to work with me on proposals. (To be clear, this happened in a minority of cases, but I think it’s where the majority of the impact came from).
Staffer attitudes toward AI risk
Lots of people asked me if I had draft legislation.
What kind of issues were they looking for legislation on? Any bits of legislation that you suggested?
Staffers often wanted to know if I had draft legislation to describe the licensing regime that I was writing about in my NTIA response (I didn’t have draft legislation but later contributed to drafting legislation when I was helping Thomas get the Center for AI Policy off the ground).
Ah, okay. More generally, what kinds of priors did people have on AI risk? Do you think you typically caused a significant change in how they approached the topic?
It seemed like most people didn’t have strong priors about AI risk. I was expecting people’s priors to be more skeptical (like “what? the end of the world? Really?”). But I think a lot of people were like “yeah, I can totally see how AI could cause global security risks” or even “yeah, I’m actually quite worried about SkyNet-like AIs, and I’m glad other people are working on that.”
Often, people seemed genuinely concerned about extinction risks from AI but also didn’t have any plans to work on it. I was reminded that it’s actually a very EA thing to be like “X is an existential risk” --> “Therefore I should seriously consider working on X.” A lot of people were just like “I’m glad someone else is thinking about this [but I’m not going to and I don’t expect my congressperson will].”
In terms of my effect– I think I mostly just got them to think about it more and raised it in their internal “AI policy priorities” list. I think people forget that staffers have like 100 things on their priority list, so merely exposing and re-exposing them to these ideas can be helpful.
I also met a few staffers who seemed to care a lot about AI risks and who seemed like strong allies in the AI policy space. I’m still in touch with several, and I introduced a bunch of them to Thomas when he was starting the Center for AI Policy. If there’s a bill that I want to see passed, I think I have a much better understanding of which specific people I’d try to get in touch with. It seems likely to me that most of the impact of the entire DC trip was in figuring out who the allied staffers are & forming initial relationships with them.
One final thing is that I typically didn’t emphasize loss of control/superintelligence/recursive self-improvement. I didn’t hide it, but I included it in a longer list of threat models, and it was rarely the main thing I was trying to convey. If I was doing this over again, I would probably emphasize these threat models more.
It seems likely to me that most of the impact of the entire DC trip was in figuring out who the allied staffers are & forming initial relationships with them.
Ah, okay! Any traits that were good predictors of whether staffers would be sympathetic to the cause? E.g. specific regions, political leaning, other policies.
Any traits that were good predictors of whether staffers would be sympathetic to the cause?
Not really. The sample size is pretty small. Like, in total, there were probably ~4 staffers who I would put in the “cares a lot about extinction risk & in a position where they could be pretty helpful in moving forward legislation”. 1 Republican and 3 Democrats.
Ah, gotcha. Did the discussions you were having (before the CAIS statement came out) affect anything (phrasing, outreach, etc.) regarding the statement?
The discussions didn’t affect the statement; the statement was written before I left for DC. (Fun fact: I was one of the people involved in drafting the CAIS statement. It is a bit weird to think that contributing to a good sentence was like >100X more impactful than many other things I do, but hey, sometimes it works out that way).
(Fun fact: I was one of the people involved in drafting the CAIS statement. It is a bit weird to think that contributing to a good sentence was like >100X more impactful than many other things I do, but hey, sometimes it works out that way).
Damn. Weird world we live in. Well done on that, by the way.
Thanks! It has definitely been weird to see how much the statement mattered.
I think it was also pretty humbling– when I first heard about the statement (at the time we were calling it an open letter), I remember being all doomy and being like “meh, what is an open letter going to do? We already had the FLI Pause letter.”
This was a useful reminder that sometimes you might not be able to predict the impact of something in advance. In hindsight, it’s pretty clear (at least to me) that the CAIS statement was useful & the theory-of-change is very solid. But at the time, it didn’t feel like a backchained master plan. It felt like it was just one project on a list of like 20 projects, and it had a kind-of-fuzzy theory-of-change, and it was just another bet that seemed worth taking.
Lessons Learned
Anything you would do differently if you were going about this process again? What were the main things that surprised you/that you feel like you’ve learned from this?
I think I would’ve written up a doc that explained my reasoning, documented the people I consulted with, documented the upside and downside risks I was aware of, and sent it out to some EAs. I think some rumors spread that this was done in a rather unilateralist way. This has been tricky & has made me sad. I don’t think the way I did this was actually unilateralist, but I think it would’ve been even better to avoid misunderstandings by having my reasoning in writing. Thomas did this a bunch with CAIP and served as a good model for something like “how to take actions under uncertainty while doing so with reasoning transparency and high coordination”.
I also think I would’ve come with draft legislation (assuming the organization I was with was comfortable with that). It seems like people take you more seriously if you have draft legislation.
I also would’ve written a much shorter NTIA response– we ended up writing a paper that was like 20+ pages. I would’ve optimized much more for having shorter materials.
Ah, speaking of which, I would’ve come with a printed-out 1-pager that explained what CAIS is & summarized the regulatory ideas in the NTIA response. I ended up doing this halfway through, and I would’ve done this sooner.
Also, I would’ve come with business cards. People seem to love business cards!
Yeah, that all makes sense, though I definitely wouldn’t have guessed it in advance.
Final Takes
I believe I’ve run out of questions to ask: do you have anything else you want to say? Feel free to ramble.
Here are a few miscellaneous takes:
My experience in DC made me think that the Overton Window is extremely wide. Congress does not have cached takes on AI policy, and it seems like a lot of people genuinely want to learn. It’s unclear how long this will last (e.g., maybe AI risk ends up getting polarized), but we seem to be in a period of unusually high open-mindedness & curiosity.
However, it’s also very hard to get Congress to do anything. Like, for rather boring reasons, not many bills get passed. There are a bunch of steps in the process where bills can die. This is even more true when things need to be bipartisan (which currently they do, since we have a Democratic Senate and a Republican House). This mostly updates me toward “wow, the status quo usually results in nothing happening, and there’s a lot of work that will need to be done to get any kind of meaningful legislation.” With that in mind, I do think we’re in a pretty unique situation with AI safety (there aren’t that many things that actually pose extinction risks and all sorts of other catastrophic risks; also there aren’t that many things that become a priority for the Senate Majority Leader, inspire international summits with world leaders, or become the focus of entire Executive Orders).
Many people are overestimating the amount of “inside game” in DC, especially when it comes to congressional engagement. There are a few sort-of-secret things happening, but for the most part, I don’t think anyone has the ball.
I want to see more coordination around specific policy visions. For a while, you were in the Cool Kids Club simply for caring about xrisk. I think the Overton Window has moved a bunch, and we’re at the point where it’s no longer enough to “care about xrisk.” What matters is what specific policies people support & are willing to advocate for.
With that in mind, I also think coordination within the broader AI risk community involves tradeoffs. Poorly-implemented coordination can lead to nothing getting done because you never reach consensus (and the resulting inaction currently favors the leading labs and unregulated scaling). Too little coordination can lead to a lack of coalition-building and unnecessary conflicts. I think I’ve moved from “coordination is good” to “coordination is good when done properly, but it actually takes skill, tact, and effort to do coordination well.”
I generally think more people should be writing their views up publicly. It is hard to coordinate when I don’t know what people believe. I think the community should be less willing to praise people who have not come out with any particular stances.
I learned a lot about the DC AI safety community (by “AI safety community”, I’m referring mostly to folks who are doing AI safety work motivated by a desire to avert xrisk or societal-scale catastrophes. Some identify as EAs/longtermists, but many don’t)
TLDR: It’s complicated. I think the top 10% of thinkers were quite talented and pursuing reasonable theories of change. On the other hand, there were also a lot of people who claim to be interested in AI policy but who don’t have a basic understanding of various AI safety threat models. There are also (genuine and reasonable) fears that socially-inept and politically-incompetent newcomers might enter the space in ways that threaten or weaken existing efforts.
On balance, I felt like the dominant culture was too dismissive of new policy efforts. I hope this changes as the AI policy conversation continues to move forward and attract new groups of people. I would be excited about a community that had a reaction more like “ah, new people are interested! Let’s give you some tips/pointers, point out specific experiences we’ve had, and discuss concrete models of downside risks.” The status quo often felt less concrete and (in my opinion) overly protectionist toward new efforts.

I found that the culture made it harder for me to think clearly or perform advocacy, especially what I might call “high-directness advocacy” (where you are, e.g., trying predominantly to convey your internal world-state to people, as opposed to trying predominantly to convey a set of beliefs that will land well with your audience). I think there are serious debates to be had about how “direct” various advocacy efforts should be (and I think some DC folks would actually lose some of their influence/“seriousness points” if they were fully direct), but I was still surprised at the magnitude of the effect– the degree to which the culture seemed to disincentivize me and my peers from being direct. I believe this culture has slowed down new policy efforts considerably, and continues to threaten/weaken/stymie new policy efforts in ways that I think are bad for the world. As with many things, I think the high-level concerns are correct, but there are issues in exactly how they are applied/implemented.
It’s also difficult to evaluate the track record of various people/plans. This is partly because some information is secret, partly because things like “we have good relationships with important stakeholders” are useful instrumental steps but don’t necessarily translate into impact, and partly because a lot of theories of change are hits-based and take time to yield direct impact (e.g., if someone has developed good relationships with X, maybe at some point X will become extremely relevant for AI regulation, but maybe there’s only a 1-10% chance of that happening). With that said, I think coordination would be easier if people were more explicit about what they believe, more explicit about the specific policy goals they are hoping to achieve, and more explicit about their legible wins (and losses). In the absence of this, we run the risk of giving too much power and too many resources to people who “play the game” and develop influence, but don’t end up using that influence to achieve meaningful change. (See also this Dominic Cummings podcast.)
Relatedly, this comment from Oliver Habryka resonates with me a lot. I have found that I often think more clearly when I get some distance from “mainstream EAs.” There are a lot of antibodies and subtle cultural pressures that can prevent me from thinking about certain ideas and can atrophy my ability to take directed action in the world. (Of course, I don’t think the solution is “never interact with EAs”– but I do think that people are likely underestimating the negative effects the community has on thinking well & achieving difficult things. I certainly was.)
For people interested in donating, I currently recommend the Center for AI Policy ( especially insofar as Thomas Larsen continues to be highly involved in its strategic direction). I have some strategic/tactical disagreements with Thomas, but I think he is an extremely intelligent and talented person, and I think he is one of the best newcomers in the AI policy space to support (COI: Thomas is one of my friends and I was involved in helping the Center for AI Policy during its early stages).
If you want to talk to me, feel free to reach out on LessWrong. I enjoy talking to people who are working in AI policy. I’m also open to being pitched on impactful things that I could be doing or others I know could be doing.
Wow, okay. Thanks for doing this dialogue!
Before I start criticizing, I want to make it clear that I’m grateful for your work and could not do better myself. I certainly did try; in fact, I was one of the first in DC in 2018, but I could not do well, since I was one of the many “socially-inept” people who are in fact a serious problem in DC. (For the record: if you want to do AI policy, do not move to DC. First visit and have the people there judge your personality/charisma; the standards might be far higher or far lower than you expect, and they are now much better at testing people for fit than when I started 6 years ago.)
I’m also grateful to see you put your work out there for review on Lesswrong, rather than staying quiet. I think the decision to attempt to be open vs. closed about AI policy work is vastly more complicated than most people in AI policy believe.
Your post is fantastic, especially the reflections.
You never mentioned the words “committee” or “chair” in this post. Everything in Congress, other than elections and constituent calls, revolves around the congressional committees and particularly their chairs. Is your model that congressional committees just aren’t important at all relative to party leadership in each chamber? If the balance of power has shifted that far by now, I wouldn’t know.

Either way, Congress is very much the kind of place where 20% of the members have >80% of the power. The ones in the bottom 50% are the easiest to talk to: their staffers exist to look important, talk to as many people as possible per day, and make them feel heard, and their offices are focused on maintaining the appearance of being capable of substantially influencing legislation, in order to mitigate the risk that their voters and their professional network find out that they are in the bottom 50%. Over the centuries, Congress has become incredibly sophisticated at constructing mazes and leading people around. The committee system is the first step to cutting through that and getting to where the bills are actually negotiated and written (primarily by lobbyists with de-facto personal ties to the offices of the chairs of the relevant committees, and maybe the deputy chairs).

Unless I’m wrong about this: e.g. maybe a much larger share of policymaking power has accrued to the party leadership, who are even harder to meet with, or maybe the lobbyists from the big five tech companies are the main hotspot for tech-related policymaking in general (including AI), and they meet with whoever they want, making the committee structure not very relevant to AI policy.

It would have been great to hear more about the people you met at think tanks and the executive branch.
When it comes to foreign policy, which might be pretty important, a helpful way to look at it is that congress and other parliamentary bodies act as a wall between domestic elites and the foreign policymaking institutions, with intelligence agencies as the main holders of real power. Obviously, these are people, and backdoor deals and revolving-door employment are everywhere, so even this wall is fuzzy. But it is much more robust than, say, domestic policy (e.g. farm bills), where congress basically acts as the conduit between elites and policy (e.g. most of the actual lawmaking work on Capitol Hill is done by lobbyists, not staffers). Intelligence agencies can easily bribe or infiltrate parliaments, and parliaments cannot easily bribe or infiltrate intelligence agencies. Authoritarian countries like China, on the other hand, don’t have real parliaments, and the strongman leader must mitigate the creep of rich domestic elites seeking policymaking influence (in reality it’s much more complex, e.g. hybrid regimes, redirecting domestic elites to focus on local/provincial governments instead of the central/national government, etc.). Your book might help with this, but it’s important to note that books about intelligence agencies are products that need to optimize for entertainment in order to sell copies; books must be recommended by personal connections, and even then you never know. I might read and trust a book recommended to me by someone like Jason Matheny.
They want you to propose solutions; they get annoyed when people come to them with a new issue they know nothing about and expect them to be the ones to think of solutions. They want you to do the work of writing up the final product and then hand it to them. If they have any issue with it, they’ll rewrite parts of it or throw it in the recycling bin.
I’ve heard this characterized as “goldfish memory”. It’s important to note that many of the other 100 things on their priority list also have people trying to “expose and re-expose” them to ideas, and many staffers are hired for skill at pretending that they’re listening. I think you were correct to evaluate your work building relationships as more useful than this.
I disagree that the Overton window in DC, or even Congress, is as wide as your impression. This is both for the reasons stated above, and because it seems very likely (>95%) that military-adjacent people in both the US and China are actively pursuing AI for things like economic growth/stabilization, military applications like EW and nuclear-armed cruise missiles, or for the data processing required for modern information warfare. I agree that we seem to be in a period of unusually high open-mindedness and curiosity.
I think that DC is a very Moloch-infested place, resulting in an intense and pervasive culture of nihilism- a near-universal belief that Moloch is inevitable. Prolonged exposure to that environment (several years), where everyone around you thinks this way, and will permanently mark you as low-social-status if you ever reveal you are one of those people with hope for the world, likely (>90%) has intense psychological effects on the AI Safety people in DC.
Likewise, the best people will know the risks associated with having important conversations near smartphones in a world where people use AI for data science, but they don’t know you well enough to know whether you yourself will proceed to have important conversations about them near smartphones. They can’t heart-to-heart with you about the problem, because that would turn that conversation into an important one, and it would be near a smartphone.
internally screaming
If you ever decide to write a doc properly explaining the situation with AI Safety to policymakers, Scott Alexander’s Superintelligence FAQ is held in high esteem; you could probably read it, think about how/why it was good at giving laymen a fair chance to understand the situation, and write a much shorter one-pager yourself that’s optimized for your particular audience. I convinced both of my ~60-year-old parents to take AI safety seriously by asking them to read the AI chapter in Toby Ord’s The Precipice, so you might consider that instead.
Thanks for all of this! Here’s a response to your point about committees.
I agree that the committee process is extremely important. It’s especially important if you’re trying to push forward specific legislation.
For people who aren’t familiar with committees or why they’re important, here’s a quick summary of my current understanding (there may be a few mistakes):
When a bill gets introduced in the House or the Senate, it gets sent to a committee. The decision is made by the Speaker of the House or the presiding officer in the Senate. In practice, however, they often defer to a non-partisan “parliamentarian” who specializes in figuring out which committee would be most appropriate. My impression is that this process is actually pretty legitimate and non-partisan in most cases(?).
It takes some degree of skill to be able to predict which committee(s) a bill is most likely to be referred to. Some bills are obvious (an agriculture bill will go to an agriculture committee). In my opinion, artificial intelligence bills are often harder to predict. There is obviously no “AI committee”, and AI can be argued to affect multiple areas. With all that in mind, I think it’s not too hard to narrow things down to ~1-3 likely committees in the House and ~1-3 likely committees in the Senate.
The most influential person in the committee is the committee chair. The committee chair is the highest-ranking member from the majority party (so in the House, all the committee chairs are currently Republicans; in the Senate, all the committee chairs are currently Democrats).
A bill cannot be brought to the House floor or the Senate floor (cannot be properly debated or voted on) until it has gone through committee. The committee is responsible for finalizing the text of the bill and then voting on whether or not they want the bill to advance to the chamber (House or Senate).
The committee chair typically has a lot of influence over the committee. The committee chair determines which bills get discussed in committee, for how long, etc. Also, committee chairs usually have a lot of “soft power”– members of Congress want to be in good standing with committee chairs. This means that committee chairs often have the ability to prevent certain legislation from getting out of committee.
If you’re trying to get legislation passed, it’s ideal to have the committee chair think favorably of that piece of legislation.
It’s also important to have at least one person on the committee who is willing to “champion” the bill. This means they view the bill as a priority and are willing to say “hey, committee, I really think we should be talking about bill X.” A lot of bills die in committee because they were simply never prioritized.
If the committee chair brings the bill to a vote, and the majority of committee members vote in favor of the bill moving to the chamber, the bill can be discussed in the full chamber. Party leadership (Speaker of the House, Senate Majority Leader, etc.) typically plays the most influential role in deciding which bills get discussed or voted on in the chambers.
Sometimes, bills get referred to multiple committees. This generally seems like “bad news” from the perspective of getting the bill passed, because it means that the bill has to get out of multiple committees. (Any single committee could essentially prevent the bill from being discussed in the chamber).
(If any readers are familiar with the committee process, please feel free to add more info or correct me if I’ve said anything inaccurate.)
Can you please explain what this means?
??? WTF do people “in AI governance” do?
Quick answer:
A lot of AI governance folks primarily do research. They rarely engage with policymakers directly, and they spend much of their time reading and writing papers.
This was even more true before the release of GPT-4 and the recent wave of interest in AI policy. Before GPT-4, many people believed “you will look weird/crazy if you talk to policymakers about AI extinction risk.” It’s unclear to me how true this was (in a genuine “I am confused about this & don’t think I have good models of this” way). Regardless, there has been an update toward talking to policymakers about AI risk now that AI risk is a bit more mainstream.
My own opinion is that, even after this update toward policymaker engagement, the community as a whole is still probably overinvested in research and underinvested in policymaker engagement/outreach. (Of course, the two can be complementary, and the best outreach will often be done by people who have good models of what needs to be done & can present high-quality answers to the questions that policymakers have).
Among the people who do outreach/policymaker engagement, my impression is that there has been more focus on the executive branch (and less on Congress/congressional staffers). The main advantage is that the executive branch can get things done more quickly than Congress. The main disadvantage is that Congress is often required (or highly desired) to make “big things” happen (e.g., setting up a new agency or a licensing regime).
My prediction is that the AI safety community will overestimate the difficulty of policymaker engagement/outreach.
I think that the AI safety community has quickly and accurately taken social awkwardness and nerdiness into account, and factored that out of the equation. However, they will still overestimate the difficulty of policymaker outreach, on the basis that policymaker outreach requires substantially above-average sociability and personal charisma.
Even among the many non-nerd extroverts in the AI safety community, who have above-average or well-above-average social skills (e.g. ~80th or 90th percentile), doing well in policy requires an extreme combination of traits that produce intense charismatic competence, such as the traits required for a sense of humor near the level of a successful professional comedian (e.g. ~99th or 99.9th percentile). This is because the policy environment, like the world of corporate executives, selects for charismatic extremity.
Because people who are introspective or think about science at all are very rarely far above the 90th percentile for charisma, even if only the obvious natural extroverts are taken into account, the AI safety community will overestimate the difficulty of policymaker outreach.
I don’t think they will underestimate the value of policymaker outreach (in fact I predict they are overestimating the value, due to American interests in using AI for information warfare pushing AI decision-making toward inaccessible and inflexible parts of natsec agencies). But I do anticipate them underestimating the feasibility of policymaker outreach.
I’m not sure I understand the direction of reasoning here. Overestimating the difficulty would mean that it will actually be easier than they think, which would be true if they expected a requirement of high charisma but the requirement were actually absent, or would be true if the people who ended up doing it were of higher charisma than the ones making the estimate. Or did you mean underestimating the difficulty?
I should have made it more clear at the beginning.
AI governance successfully filters out the nerdy people
They see that they’re still having a hard time finding their way to the policymakers with influence (e.g. what Akash was doing, meeting people in order to meet more people through them).
They conclude that the odds of success are something like ~30% (or some other number).
I think that they would be off by something like 10 percentage points, so it would actually be ~40%, because factoring out the nerds still leaves you with people at the 90th percentile of charisma when you need people at the 99th percentile. They might be able to procure those people.
This is because I predict that people at the 99th percentile of Charisma are underrepresented in AI safety, even if you only look at the non-nerds.
That makes sense and sounds sensible, at least pre-ChatGPT.
Modern congressional staffers are the product of Goodhart’s law; ~50-100 years ago, they were the ones who ran Congress de facto, so all the businessmen and voters wanted to talk to them, so the policymaking ended up moving elsewhere. Just like what happened with congressmen themselves ~100-150 years ago. Congressional staffers today primarily take constituent calls from voters and make interest groups think they’re being listened to. Akash’s accomplishments came from wading through that bullshit, meeting people through people until he managed to find some gems.
Most policymaking today is called in from outside, with lobbyists having the domain-expertise needed to write the bills, and senior congressional staffers (like the legislative directors and legislative assistants here) overseeing the process, usually without getting very picky about the details.
It’s not like congressmembers have no power, but they’re just one part of what’s called an “iron triangle”: the congressional lawmakers, the executive-branch bureaucracies (e.g. FDA, CDC, DoD, NSA), and the private-sector companies (e.g. Walmart, Lockheed, Microsoft, Comcast), with the lobbyists circulating around the three, negotiating and cutting deals between them. It’s incredibly corrupt and always has been, but not all-crushingly corrupt like African governments. It’s like the Military Industrial Complex, except that’s actually a bad example because Congress is increasingly out of the loop de facto on foreign policy (most structures are idiosyncratic, because the fundamental building block is people who are thinking of ways to negotiate backdoor deals).
People in the executive branch/bureaucracies like the DoD have more power over interesting things like foreign policy; Congress is more powerful for things that have been entrenched for decades like farming policy. Think tank people have no power, but they’re much less stupid, have domain expertise, and are often called up to help write bills instead of lobbyists.
I don’t know how AI policy is made in Congress; I jumped ship from domestic AI policy to foreign AI policy 3.5 years ago in order to focus more on the incentives from the US-China angle. Akash is the one to ask about where AI policymaking happens in Congress, as he was the one actually there, deep in the maze (maybe via DM, since he didn’t describe it in this post).
I strongly recommend people talk to John Wentworth about AI policy, even if he doesn’t know much at first; after looking at Wentworth’s OpenAI dialogue, he’s currently my top predicted candidate for “person who starts spending 2 hours a week thinking about AI policy instead of technical alignment, and thinks up galaxy-brained solutions that break the stalemates that vexed the AI policy people for years”.
Most don’t do policy at all. Many do research. Since you’re incredulous, here are some examples of great AI governance research (which don’t synergize much with talking to policymakers):
Towards best practices in AGI safety and governance
Verifying Rules on Large-Scale Neural Network Training via Compute Monitoring
Survey on intermediate goals in AI governance
I mean, those are all decent projects, but I would call zero of them “great”. Like, the whole appeal of governance as an approach to AI safety is that it’s (supposed to be) bottlenecked mainly on execution, not on research. None of the projects you list sound like they’re addressing an actual rate-limiting step to useful AI governance.
(I disagree. Indeed, until recently governance people had very few policy asks for government.)
(Also note that lots of “governance” research is ultimately aimed at helping labs improve their own safety. Central example: Structured access.)
Did that change because people finally finished doing enough basic strategy research to know what policies to ask for?
It didn’t seem like that to me. Instead, my impression was that it was largely triggered by ChatGPT and GPT4 making the topic more salient, and AI safety feeling more inside the Overton window. So there were suddenly a bunch of government people asking for concrete policy suggestions.
Yeah, that’s Luke Muehlhauser’s claim; see the first paragraph of the linked piece.
I mostly agree with him. I wasn’t doing AI governance years ago but my impression is they didn’t have many/good policy asks. I’d be interested in counterevidence — like pre-2022 (collections of) good policy asks.
Anecdotally, I think I know one AI safety person who was doing influence-seeking-in-government and was on a good track but quit (to do research) because they weren’t able to leverage their influence because the AI governance community didn’t really have asks for (the US federal) government.
My own model differs a bit from Zach’s. It seems to me like most of the publicly-available policy proposals have not gotten much more concrete. It feels a lot more like people were motivated to share existing thoughts, as opposed to people having new thoughts or having more concrete thoughts.
Luke’s list, for example, is more of a “list of high-level ideas” than a “list of concrete policy proposals.” It has things like “licensing” and “information security requirements”– it’s not an actual bill or set of requirements. (And to be clear, I still like Luke’s post and it’s clear that he wasn’t trying to be super concrete).
I’d be excited for people to take policy ideas and concretize them further.
Aside: When I say “concrete” in this context, I don’t quite mean “people on LW would think this is specific.” I mean “this is closer to bill text, text of a section of an executive order, text of an amendment to a bill, text of an international treaty, etc.”
I think there are a lot of reasons why we haven’t seen much “concrete policy stuff”. Here are a few:
This work is just very difficult– it’s much easier to hide behind vagueness when you’re writing an academic-style paper than when you’re writing a concrete policy proposal.
This work requires people to express themselves with more certainty/concreteness than academic-style research. In a paper, you can avoid giving concrete recommendations, or you can give a recommendation and then immediately mention 3-5 crucial considerations that could change the calculus. In bills, you basically just say “here is what’s going to happen” and do much less “and here are the assumptions that go into this and a bunch of ways this could be wrong.”
This work forces people to engage with questions that are less “intellectually interesting” to many people (e.g., which government agency should be tasked with X, how exactly are we going to operationalize Y?)
This work just has a different “vibe” from the more LW-style research and the more academic-style research. Insofar as LW readers are selected for (and reinforced for) liking a certain “kind” of thinking/writing, this “kind” of thinking/writing is different from the concrete policy vibe in a bunch of hard-to-articulate ways.
This work often has the potential to be more consequential than academic-style research. There are clear downsides of developing [and advocating for] concrete policies that are bad. Without any gatekeeping, you might have a bunch of newbies writing flawed bills. With excessive gatekeeping, you might create a culture that disincentivizes intelligent people from writing good bills. (And my own subjective impression is that the community erred too far on the latter side, but I think reasonable people could disagree here).
For people interested in developing the kinds of proposals I’m talking about, I’d be happy to chat. I’m aware of a couple of groups doing the kind of policy thinking that I would consider “concrete”, and it’s quite plausible that we’ll see more groups shift toward this over time.
Curated. I liked that this post had a lot of object-level detail about a process that is usually opaque to outsiders, and that the “Lessons Learned” section was also grounded enough that someone reading this post might actually be able to skip “learning from experience”, at least for a few possible issues that might come up if one tried to do this sort of thing.
It’s great to see this being publicly posted!
Would you recommend “The Devil’s Chessboard”? It seems intriguing, yet it makes substantial claims with scant evidence.
In my opinion, intelligence information often leads to exaggerated stories unless it is anchored in public information, leaked documents, and numerous high-quality sources.
I’d be very interested to see that longer threat model list!
If memory serves me well, I was informed by Hendrycks’ overview of catastrophic risks. I don’t think it’s a perfect categorization, but I think it does a good job laying out some risks that feel “less speculative” (e.g., malicious use, race dynamics as a risk factor that could cause all sorts of threats) while including those that have been painted as “more speculative” (e.g., rogue AIs).
I’ve updated toward the importance of explaining & emphasizing risks from sudden improvements in AI capabilities, AIs that can automate AI research, and intelligence explosions. I also think there’s more appetite for that now than there used to be.
This hit me like a breath of fresh air. “Antibodies” yes. Makes me feel less alone in my world-space