The community as a whole is still probably overinvested in research and underinvested in policymaker engagement and outreach.
My prediction is that the AI safety community will overestimate the difficulty of policymaker engagement/outreach.
I think that the AI safety community has quickly and accurately taken social awkwardness and nerdiness into account, and factored that out of the equation. However, it will still overestimate the difficulty of policymaker outreach, because such outreach requires substantially above-average sociability and personal charisma.
Even among the many non-nerd extroverts in the AI safety community, who have above-average or well-above-average social skills (e.g. ~80th or 90th percentile), doing well in policy requires an extreme combination of traits that produce intense charismatic competence, such as the traits required for a sense of humor near the level of a successful professional comedian (e.g. ~99th or 99.9th percentile). This is because the policy environment, like the environment around corporate executives, selects for charismatic extremity.
Because people who are introspective, or who think about science at all, are very rarely far above the 90th percentile for charisma, even counting only the obvious natural extroverts, the AI safety community will overestimate the difficulty of policymaker outreach.
I’m not sure I understand the direction of reasoning here. Overestimating the difficulty would mean that it will actually be easier than they think, which would be true if they expected a requirement of high charisma but the requirement were actually absent, or would be true if the people who ended up doing it were of higher charisma than the ones making the estimate. Or did you mean underestimating the difficulty?
I should have made this clearer at the beginning. Here is the chain of reasoning I expect:
1. AI governance successfully filters out the nerdy people.
2. They see that they're still having a hard time finding their way to the policymakers with influence (e.g. what Akash was doing: meeting people in order to meet more people through them).
3. They conclude that the odds of success are something like ~30% (or any other number).
I think they would be off by something like 10 percentage points, so the real odds would be ~40%, because factoring out the nerds still leaves you with people at the 90th percentile of charisma when you need people at the 99th percentile. They might be able to recruit those people.
This is because I predict that people at the 99th percentile of charisma are underrepresented in AI safety, even among the non-nerds.
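The underrepresentation claim can be made concrete with a toy model. All numbers here are illustrative assumptions, not measurements: suppose charisma is normally distributed in the general population, and the non-nerd extroverts in AI safety are drawn from a distribution centered around the population's ~85th percentile with a somewhat narrower spread. The fraction of that subgroup clearing the population's 99th percentile can then be computed directly:

```python
from math import erf, sqrt

def normal_cdf(x, mu=0.0, sigma=1.0):
    """Normal CDF via the error function."""
    return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

def inverse_cdf(p, lo=-10.0, hi=10.0, tol=1e-9):
    """Invert the standard normal CDF by bisection."""
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if normal_cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

# Assumed (not measured) parameters for the non-nerd AI safety
# subgroup: centered on the population's ~85th charisma percentile,
# with a slightly narrower spread than the general population.
subgroup_mu = inverse_cdf(0.85)   # ~1.04 population SDs
subgroup_sigma = 0.8
threshold = inverse_cdf(0.99)     # ~2.33 population SDs

fraction_above = 1.0 - normal_cdf(threshold, subgroup_mu, subgroup_sigma)
print(f"{fraction_above:.1%} of the subgroup clears the 99th percentile")
```

Under these assumed parameters, only on the order of 5% of even the extroverted subgroup clears the 99th-percentile bar, which is the direction of the underrepresentation claim; the exact figure moves with the assumed mean and spread.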
I don’t think they will underestimate the value of policymaker outreach (in fact I predict they are overestimating its value, because American interest in using AI for information warfare is pushing AI decisionmaking toward inaccessible and inflexible parts of the natsec agencies). But I do anticipate that they will underestimate its feasibility.