So whether an arms race is good or not basically depends on whether the “good guys” are going to win (and remain good guys).
Quick thought — it’s not an apples-to-apples comparison, but it might be worth investigating which fields hegemony works well in, and which fields checks and balances work well in:
https://en.wikipedia.org/wiki/Hegemony
https://en.wikipedia.org/wiki/Separation_of_powers
There’s also the question with AGI of what we’re more scared of — one country or organization dominating the world, or an early pioneer in AGI doing a lot of damage by accident?
#2 scares me more than #1. You need to create exactly one resource-commandeering positive feedback loop without an off switch to destroy the world, among other things.
While it may sound counterintuitive, I think you want to increase both hegemony and balance of power at the same time. Basically, a more powerful state can help solve lots of coordination problems, but to accept the risks of greater state power you want the state to be more structurally aligned with larger and larger populations of people.
https://www.amazon.com/Narrow-Corridor-States-Societies-Liberty-ebook/dp/B07MCRLV2K
Obviously states are more aligned with their own populations than with everyone, but I think the expansion of the U.S. security umbrella has been good for reducing the number of possible security dilemmas between states, and accordingly people are better off than they would otherwise be with more independent military forces (higher defense spending, higher war risk, etc.). There is some degree of specialization within NATO, which makes it harder for states to go to war as individuals and also makes their contribution to the alliance more vital. The more this happens at a given resource level, the more powerful the alliance will be in absolute terms, and the more power will be internally balanced against unilateral actions that conflict with some state’s interests. At some point, though, veto power and reduced redundancy could undermine the strength of the alliance.
For technological risks, racing increases risk in the short run between the competitors but will tend to reduce the number of competitors. In the long run, agreeing not to race while other technologies progress increases the amount of low-hanging fruit and expands the scope of competition to more possible competitors. If you think resource-commandeering positive feedback loops are not super close, there might be a degree of racing you would want earlier, to establish front-runners to win and to deter potential market entrants from expanding the competition during a high-risk, low-hanging-fruit period. You might be able to do better yet if the near-term leading competitors can reach agreement not to race, and then team up to defeat or buy out new entrants. The leaders obviously can’t hold everything completely still and expect to remain leaders though, and businesses should deliver measurable tech progress if they want to avoid anti-monopoly regulation.
Anyway, basically preventing races isn’t as simple as choosing not to race. Even if your goal is just to minimize risk, you either have to credibly commit a larger and larger number of actors not to defect over time as technology and know-how diffuses, or you should want more aligned competitors to win and to cooperate to slow the risky aspects of racing.
Apologies if this wasn’t clear from the post; it was intended as a minor update to one I wrote several years ago, and I didn’t expect to see it get copied over to LessWrong, haha.