How major governments can help with the most important century
Link post
I’ve been writing about tangible things we can do today to help the most important century go well. Previously, I wrote about helpful messages to spread; how to help via full-time work; and how major AI companies can help.
What about major governments1 - what can they be doing today to help?
I think governments could play crucial roles in the future. For example, see my discussion of standards and monitoring.
However, I’m honestly nervous about most possible ways that governments could get involved in AI development and regulation today.
I think we still know very little about what key future situations will look like, which is why my discussion of AI companies (previous piece) emphasizes doing things that have limited downsides and are useful in a wide variety of possible futures.
I think governments are “stickier” than companies—I think they have a much harder time getting rid of processes, rules, etc. that no longer make sense. So in many ways I’d rather see them keep their options open for the future by not committing to specific regulations, processes, projects, etc. now.
I worry that governments, at least as they stand today, are far too oriented toward the competition frame (“we have to develop powerful AI systems before other countries do”) and not receptive enough to the caution frame (“We should worry that AI systems could be dangerous to everyone at once, and consider cooperating internationally to reduce risk”). (This concern also applies to companies, but see footnote.2)
The “competition” frame vs. the “caution” frame
In a previous piece, I talked about two contrasting frames for how to make the best of the most important century:
The caution frame. This frame emphasizes that a furious race to develop powerful AI could end up making everyone worse off. This could be via: (a) AI forming dangerous goals of its own and defeating humanity entirely; (b) humans racing to gain power and resources and “lock in” their values.
Ideally, everyone with the potential to build sufficiently powerful AI would be able to pour energy into building something safe (not misaligned), and into carefully planning out (and negotiating with others on) how to roll it out, without a rush or a race. With this in mind, perhaps we should be doing things like:
Working to improve trust and cooperation between major world powers. Perhaps via AI-centric versions of Pugwash (an international conference aimed at reducing the risk of military conflict), perhaps by pushing back against hawkish foreign relations moves.
Discouraging governments and investors from shoveling money into AI research, encouraging AI labs to thoroughly consider the implications of their research before publishing it or scaling it up, working toward standards and monitoring, etc. Slowing things down in this manner could buy more time to do research on avoiding misaligned AI, more time to build trust and cooperation mechanisms, and more time to generally gain strategic clarity.
The “competition” frame. This frame focuses less on how the transition to a radically different future happens, and more on who’s making the key decisions as it happens.
If something like PASTA is developed primarily (or first) in country X, then the government of country X could be making a lot of crucial decisions about whether and how to regulate a potential explosion of new technologies.
In addition, the people and organizations leading the way on AI and other technology advancement at that time could be especially influential in such decisions.
This means it could matter enormously “who leads the way on transformative AI”—which country or countries, which people or organizations.
Some people feel that we can make confident statements today about which specific countries, and/or which people and organizations, we should hope lead the way on transformative AI. These people might advocate for actions like:
Increasing the odds that the first PASTA systems are built in countries that are e.g. less authoritarian, which could mean e.g. pushing for more investment and attention to AI development in these countries.
Supporting and trying to speed up AI labs run by people who are likely to make wise decisions (about things like how to engage with governments, what AI systems to publish and deploy vs. keep secret, etc.).
Tension between the two frames. People who take the “caution” frame and people who take the “competition” frame often favor very different, even contradictory actions. Actions that look important to people in one frame often look actively harmful to people in the other.
For example, people in the “competition” frame often favor moving forward as fast as possible on developing more powerful AI systems; for people in the “caution” frame, haste is one of the main things to avoid. People in the “competition” frame often favor adversarial foreign relations, while people in the “caution” frame often want foreign relations to be more cooperative.
That said, this dichotomy is a simplification. Many people—including myself—resonate with both frames. But I have a general fear that the “competition” frame is going to be overrated by default for a number of reasons, as I discuss here.
Because of these concerns, I don’t have a ton of tangible suggestions for governments as of now. But here are a few.
My first suggestion is to avoid premature actions, including ramping up research on how to make AI systems more capable.
My next suggestion is to build up the right sort of personnel and expertise for challenging future decisions.
Today, my impression is that there are relatively few people in government who are seriously considering the highest-stakes risks and thoughtfully balancing both “caution” and “competition” considerations (see directly above). I think it would be great if that changed.
Governments can invest in efforts to educate their personnel about these issues, and can try to hire key personnel who are already on the knowledgeable and thoughtful side about them (while also watching out for some of the pitfalls of spreading messages about AI).
Another suggestion is to generally avoid putting terrible people in power. Voters can help with this!
My top non-“meta” suggestion for a given government is to invest in intelligence on the state of AI capabilities in other countries. If other countries are getting close to deploying dangerous AI systems, this could be essential to know; if they aren’t, that could be essential to know as well, in order to avoid premature and paranoid racing to deploy powerful AI.
A few other things that seem worth doing and relatively low-downside:
Fund alignment research (ideally alignment research targeted at the most crucial challenges) via agencies like the National Science Foundation and DARPA. These agencies have huge budgets (the two of them combined spend over $10 billion per year), and have major impacts on research communities.
Keep options open for future monitoring and regulation (see this Slow Boring piece for an example).
Build relationships with leading AI researchers and organizations, so that future crises can be handled relatively smoothly.
Encourage and amplify investments in information security. My impression is that governments are often better than companies at highly advanced information security (preventing cyber-theft even by determined, well-resourced opponents). They could help with, and even enforce, strong security at key AI companies.
Footnotes
I’m centrally thinking of the US, but other governments with lots of geopolitical sway and/or major AI projects in their jurisdiction could have similar impacts. ↩
When discussing recommendations for companies, I imagine companies that are already dedicated to AI, and I imagine individuals at those companies who can have a large impact on the decisions they make.
By contrast, when discussing recommendations for governments, a lot of what I’m thinking is: “Attempts to promote productive actions on AI will raise the profile of AI relative to other issues the government could be focused on; furthermore, it’s much harder for even a very influential individual to predict how their actions will affect what a government ultimately does, compared to a company.” ↩