Musings about the AGI strategic landscape
In my current model, the economy after AGI is not explosive. There is some acceleration and added risk, but existential risk is not hugely multiplied. I thought it would be interesting to map out my current thoughts based on these assumptions. I'm exploring the scenario where AI alignment looks achievable but might still lead to bad outcomes.
Threats
Threats are normally caused by asymmetry. Attackers can upgrade their attack infrastructure relatively easily because it is concentrated; defensive infrastructure takes longer to upgrade because it is spread out geographically and across many different security levels. The current mishmash of humans and computers is already moderately insecure. If bad actors gain an advantage in AGI, that insecurity opens up a number of possible threats.
National Security Threats
If rogue states or terrorists gain an advantage, they could potentially exploit the current technological and human makeup of the military services. One example of what might be possible with an AGI asymmetry is sending fake orders to military units, which could have devastating consequences.
National security agencies will try to prevent such things if at all possible.
Political Threats
Manipulating the populace into electing bad leaders or supporting bad policies (such as excessive military reductions) could have a variety of political consequences.
Takeover by national security
Another possible threat is a national security agency gaining the lead in AGI and using it to subvert its own country. This kind of silent coup would be very hard to detect, because the logic of the situation suggests that national security agencies would take very similar actions, whether malign or benevolent, until AGI is created. The more covert an operation and the less oversight it has, the greater the potential for a national-security-based project to go rogue.
Likely response from security agencies
If the security agencies think they are in this kind of world, they will likely do what they can to come out ahead in the asymmetry, so that they can build up defensive capabilities before the offensive ramp-up.
This might involve trying to slow others down, or limiting public knowledge about AGI, in order to maintain their edge.
You would also hope they would invest massively in defensive non-AGI technologies and spread them throughout the world. If malicious actors could exploit India's or Pakistan's military or political institutions, the results would still be disastrous.