Here are some AI governance/policy thoughts that I’ve found myself articulating at least 3 times over the last month or so:
I think people interested in AI governance/policy should divide their projects into “things that could be useful in the current Overton Window” and “things that would require a moderate or major Overton Window shift to be useful.” I think sometimes people end up not thinking concretely about which world they’re aiming for, and this makes their work less valuable.
If you’re aiming for the current Overton Window, you need to be brutally honest about what you can actually achieve. There are many barriers to implementing sensible-seeming ideas. You need access to stakeholders who can do something. You should try to fail quickly. If your idea requires buy-in from XYZ folks, and X isn’t interested, that’s worth figuring out ASAP.
If you’re aiming for something outside the current Overton Window, you often have a lot of room to be imaginative. I think it’s very easy to underestimate how much Overton Window shifts can change what’s possible. If policymakers get considerably more concerned about AI risks, a lot of things will be “on the table”. People say that AI safety folks were unprepared for the ChatGPT surge; if you think there will be 1-2 more surges of interest, it might be worth explicitly preparing ideas that could be considered in those surges.
I think it’s pretty essential to be in regular touch with policymakers/staffers if your main theory of change (TOC) is to get things done in the current Overton Window.
A common failure mode for “research types” is to write a 20+ page paper and then ask “ok cool, which policymakers might be interested?” I think usually a better strategy is to get in touch with your target audience much earlier in the process. Present the 1-2 page version of your idea and see if/where the nuance is useful. (To be clear, this applies if your TOC involves directly influencing policy. It doesn’t apply if your main TOC is to improve everyone’s understanding of X topic or improve your own understanding of Y topic.)
On the margin, I think more people who are new to AI governance/policy should be focusing on “things that would require a moderate or major Overton Window shift to be useful.” I think there’s more low-hanging fruit there that people can contribute to without necessarily having the kinds of networks/access you often need to figure out what to do in the current Overton Window.
I think people tend to underestimate how quickly they could become a world expert in a specific area. This is especially true if you’re applying it to the intersection of two areas. For example, it’s very hard to become a world expert in international governance. But it’s relatively easier to become a world expert in the intersection of “international governance” and “AI safety”. There will be people who know more about international governance than you and people who know more about AI safety than you, but you might become one of the people who has thought the most rigorously about the intersection of the two topics.
I think I agree with much-to-all of this. One further amplification I’d make about the last point: the culture of DC policymaking is one where people are expected to be quick studies and it’s OK to be new to a topic; talent is funged from topic to topic in response to changing priorities much more than you’d expect. Your LessWrong-informed outside view of how much you need to know about a topic before you can start commenting on policy ideas is probably wrong.
(Yes, I know, someone is about to say “but what if you are WRONG about the big idea given weird corner case X or second-order effects Y?” Look, reversed stupidity is not wisdom, but sometimes you can just quickly identify stupid-across-almost-all-possible-worlds ideas and convince people not to do them, rather than having to advocate for an explicit good-idea alternative.)
I think how delicately you treat your personal Overton Window should also depend on your timelines.