Issues with uneven AI resource distribution
Uneven resource distribution:
The uneven distribution of the resources needed to produce and use AI in a state-based system is a long-term challenge to developing international AI policy and raises international security risks.
Resources include the skills, knowledge, compute, industry, people, education and other factors of production used to build or develop AI systems, as well as access to, and the ability to use, AI systems themselves.
This argument is based on a pathway toward AGI. That is, while it will focus on the endpoint, where an AGI is created, issues around resource distribution and relative power shifts within the international system caused by AI are likely to arise well before the development of AGI. The reason for focussing on the endpoint is the assumption that it would create an event horizon where the state that develops AGI achieves runaway power over its rivals economically, culturally and militarily. But many points before this could be equally valid depending on circumstances within the international system.
This uneven distribution of resources poses a twofold problem:
There is a need for agreement on the distribution of AI resources. However, a wider diffusion of AI resources could increase the risk of misuse or AGI ruin, which could in turn lead to a reduction in diffusion.
A lack of diffusion could increase conventional security risks. For example, it would make sense for some nations to create a powerful first-strike capability to guard against, or dissuade, anyone who achieves AGI-like capabilities, with the aim of preventing them from gaining a runaway advantage.
The desire to access the economic and military benefits of AI will drive competition between states. Even if the benefits of AI development were evenly distributed, the places holding the greatest share of AI resources would accrue disproportionate power over other geographies, particularly as AI moves toward the level of general intelligence.
The resources needed to develop and use AI cannot be evenly distributed in a zero-sum international system with layers of economic and security competition mixed with technical and research disparities. Attempting to evenly distribute resources may even have a negative impact on the development of AI if it makes research and innovation less effective.
International security and political economy competition:
There are some basic tenets to the concept of international security, drawing from the wider field of international relations.
Briefly:
The international system is anarchic with no supranational governance.
There is competition between states.
States analyse their security within the system of anarchy and the balance of power between states.
Given its anarchic nature, the international system is often seen as being adversarial by nature.
The analysis of the balance of power by states feeds into the way they develop military capabilities. These will depend on a state's industrial and technological capabilities, as well as those of its allies. States are also in economic competition with each other. Each state has its own national system of innovation, in competition with others, and therefore competencies, specialisms and comparative advantages are unevenly spread. The dual-use nature of AI will see competition for its development across both the economic and security domains. Under a state-based system, individual states will seek to maintain a state of readiness based on the perceived threats posed by other actors. Even with international cooperation around the development of AI, many states may at the very least want to attain a latent capacity and industrial-skills base to develop AI systems for their own security; a related concept is nuclear latency. These factors are a driver of AI innovation and competition, as well as of AI proliferation through commercial means.
The development and diffusion of technologies like AI have become more salient topics in recent years through a focus on great power competition. This changes the relationship between state and private power: for example, banning the export of computer chips may make for good security policy but conflicts with economic interests. It is extremely difficult to imagine a situation where a nuclear power agrees to share all the resources and know-how needed to build a nuclear weapon with a state it regards as an adversary. Understanding why AI resource distribution would be different from existing international security concerns, such as nuclear weapons, under the current adversarial structure of the global system should be a priority.
Geography of AI innovation:
Research and innovation are often place-based, with geographical concentrations of connected businesses and organisations known as clusters. This is true of AI, where top-end research is clustered in a small number of places. These clusters of expertise are not evenly spread across the world, nor are the benefits they produce. This is particularly noticeable at the high end of the field: while the number of people with AI skills continues to grow globally, the most advanced research takes place within a select few universities and companies. It is possible that the development of ever more powerful AI systems would increase the relative power of these clusters. One caveat to this argument is that the nature of science and innovation could radically change into something more distributed and decentralised. Factors that favour this include the increased use of open source, distributed teams and platforms such as GitHub, and decentralised organisations and infrastructure, such as decentralised autonomous organisations and decentralised compute. However, states will have an incentive to limit how this takes place in a way that skews the benefits toward them as geographic entities. For example, they could enforce legal mechanisms over the ownership of research, ban exports and limit access to people and skills. Therefore, there is a good chance that the concentration of AI researchers and research clusters will not spread more evenly over time, even as the adoption and diffusion of AI increases.
Security and information hazards:
The closer actors get to the goal of AGI, the less willing they may be to share information about the development of AI, lest it create an economic or security risk. In addition, there are information hazards around sharing AI progress.
Bostrom defines an information hazard as: “a risk that arises from the dissemination or the potential dissemination of (true) information that may cause harm or enable some agent to cause harm.” The act of sharing information can constitute a security threat, hence the need for secrecy around certain forms of government information. Sharing AI resources may enable some agents to cause harm or increase risk to other participants in the international system. Therefore, states nearing the capacity for AGI will have an imperative to reduce the amount of information they are willing to share with agents they deem likely to cause harm. This is likely to happen just at the point at which the need to share information, to lessen any negative impacts of AGI, also increases.
First strike to restrain AGI:
If state A achieves AGI and an adversary, state B, believes it will give state A an advantage which it will not be able to overcome, then it is logical for state B to prevent this outcome or be subject to a world order dominated by state A.
If state B has no chance of achieving AGI, then the closer state A gets to achieving AGI, the more it makes sense for state B to build up its military capacity. This would allow state B to strike state A, or to attempt to offset state A's AGI advantage through non-AGI means.
Conversely, state A may need to hide the development of AGI from state B, to prevent state B from striking pre-emptively if it fears state A has developed AGI, or even just AI that gives it an advantage state B cannot catch up with. This reduced transparency may reduce the overall capacity to prevent negative outcomes of AGI.
Without pre-agreed mechanisms to share AI resources, discontinuous and rapid advances could shorten any window of time that states could use to assess the relative security implications of AI advances. It could also make differential technology development impossible.
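As a rough illustration of this first-strike logic, the sketch below compares the expected value to state B of striking versus waiting as its belief that state A will reach AGI rises. The function and all parameter values are illustrative assumptions of mine, not estimates drawn from any source.

```python
# Toy expected-value sketch of state B's pre-emptive strike incentive.
# All parameters are illustrative assumptions, not estimates.

def strike_is_preferred(p_agi: float,          # B's belief that A reaches AGI first
                        v_dominated: float,    # B's payoff under an A-dominated order
                        v_status_quo: float,   # B's payoff if A never reaches AGI
                        v_strike: float,       # B's payoff after a costly, successful strike
                        p_strike_success: float) -> bool:
    """Return True if striking has a higher expected value for B than waiting."""
    ev_wait = p_agi * v_dominated + (1 - p_agi) * v_status_quo
    ev_strike = p_strike_success * v_strike + (1 - p_strike_success) * v_dominated
    return ev_strike > ev_wait

# Example: as B's belief that A will reach AGI rises, striking starts to dominate.
for p in (0.1, 0.5, 0.9):
    print(p, strike_is_preferred(p_agi=p, v_dominated=0.0, v_status_quo=1.0,
                                 v_strike=0.6, p_strike_success=0.7))
```

With these made-up numbers, striking only becomes preferable once state B is fairly confident state A will reach AGI; the point is simply that the incentive grows as that belief grows, not that any particular threshold is realistic.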
Summary:
AI development in a state-based system is a zero-sum game. Therefore, cooperation without a fair distribution of AI resources is likely to result in one party owning, or at least controlling access to, AI resources within a certain geographical boundary—likely a state, given the chance that the current state-based system continues for the foreseeable future. I see this mostly continuing to be the case if 1) the global system is competitive by nature, 2) there remains a security dilemma, and 3) unbridgeable differences between states in the international system persist (the Washington vs Beijing consensus, etc.).
AI resources tend to be geographically clustered and therefore unevenly distributed. Along a path toward AGI, those with access to these resources are likely to accumulate the most power within the international system.
Working toward AGI without a pre-agreed distribution of AI resources will be a destabilising event for a global order consisting of a state-based system of governance. The only counterpoint I can think of is along the lines of the Waltz argument that all states should have nuclear weapons because it would reduce the risk of their being used, but this defaults back to increasing risks around information hazards and the misuse of AI.
States could respond with increased conventional weapons systems or powerful AI systems to compensate for their lack of AGI. Given the potential power of AGI, it would make sense for this to be a first-strike capability. This could increase non-AGI risks.
Posted for the defunct Future Fund Worldview Prize. Crossposted from: https://temporal.substack.com/