Quick answer:
A lot of AI governance folks primarily do research. They rarely engage with policymakers directly, and they spend much of their time reading and writing papers.
This was even more true before the release of GPT-4 and the recent wave of interest in AI policy. Before GPT-4, many people believed “you will look weird/crazy if you talk to policymakers about AI extinction risk.” It’s unclear to me how true this was (in a genuine “I am confused about this & don’t think I have good models of this” way). Regardless, there has been an update toward talking to policymakers about AI risk now that AI risk is a bit more mainstream.
My own opinion is that, even after this update toward policymaker engagement, the community as a whole is still probably overinvested in research and underinvested in policymaker engagement/outreach. (Of course, the two can be complementary, and the best outreach will often be done by people who have good models of what needs to be done & can present high-quality answers to the questions that policymakers have.)
Among the people who do outreach/policymaker engagement, my impression is that there has been more focus on the executive branch (and less on Congress/congressional staffers). The main advantage is that the executive branch can get things done more quickly than Congress. The main disadvantage is that Congress is often required (or highly desired) to make “big things” happen (e.g., setting up a new agency or a licensing regime).
My prediction is that the AI safety community will overestimate the difficulty of policymaker engagement/outreach.
I think that the AI safety community has quickly and accurately taken social awkwardness and nerdiness into account and factored them out of the equation. However, it will still overestimate the difficulty of policymaker outreach, on the basis that such outreach requires substantially above-average sociability and personal charisma.
Even among the many non-nerd extroverts in the AI safety community, who have above-average or well-above-average social skills (e.g. ~80th or 90th percentile), doing well in policy requires an extreme combination of traits that produce intense charismatic competence, such as the traits required for a sense of humor near the level of a successful professional comedian (e.g. ~99th or 99.9th percentile). This is because the policy environment, like the environment around corporate executives, selects for charismatic extremity.
Because people who are introspective or who think about science at all are very rarely far above the 90th percentile for charisma, even counting only the obvious natural extroverts, the AI safety community will overestimate the difficulty of policymaker outreach.
I don’t think they will underestimate the value of policymaker outreach (in fact, I predict they are overestimating the value, because American interest in using AI for information warfare pushes AI decisionmaking toward inaccessible and inflexible parts of natsec agencies). But I do anticipate that they will underestimate the feasibility of policymaker outreach.
I’m not sure I understand the direction of reasoning here. Overestimating the difficulty would mean that it will actually be easier than they think, which would be true if they expected a requirement of high charisma but the requirement were actually absent, or would be true if the people who ended up doing it were of higher charisma than the ones making the estimate. Or did you mean underestimating the difficulty?
I should have made it more clear at the beginning:

1. AI governance successfully filters out the nerdy people.
2. They see that they’re still having a hard time finding their way to the policymakers with influence (e.g. what Akash was doing, meeting people in order to meet more people through them).
3. They conclude that the odds of success are something like ~30% (or any other number).
I think that they would be off by something like 10 percentage points, so it would actually be ~40%: factoring out the nerds still leaves you with people at the 90th percentile of charisma, when you need people at the 99th percentile. But they might be able to procure those people. This is because I predict that people at the 99th percentile of charisma are underrepresented in AI safety, even if you only look at the non-nerds.
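As a toy illustration of the percentile arithmetic above (the thresholds are illustrative numbers from this discussion, not data about the AI safety community): by the definition of percentiles, only about 1 in 10 people who clear the 90th percentile also clear the 99th, so filtering out the nerds still leaves a pool that is mostly below the required bar.

```python
# Toy model: what fraction of a 90th-percentile-filtered pool clears
# the 99th percentile? This follows directly from the definition of
# percentiles and assumes the pool is a random draw from the general
# population (an illustrative assumption, not a claim about AI safety).

p_above_99 = 0.01  # fraction of the general population above the 99th percentile
p_above_90 = 0.10  # fraction above the 90th percentile

# P(above 99th | above 90th) = P(above 99th) / P(above 90th)
frac = p_above_99 / p_above_90
print(f"{frac:.0%} of the 90th-percentile pool clears the 99th percentile")
# → 10% of the 90th-percentile pool clears the 99th percentile
```

If the community's non-nerds are selected for traits anticorrelated with extreme charisma, as the parent comment predicts, the real fraction would be even lower than this baseline.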
That focus on the executive branch makes sense and sounds sensible, at least pre-ChatGPT.