In a singleton situation, those with power are more able to do what they wish and to make the world how they think it should be. In a multipolar situation there is less room for maneuver and more incentive to put all resources toward outcompeting others to ensure survival or dominance, leaving far less potential surplus value for other humans.
We have lived in a multipolar world for a long time and have put tonnes of resources into things that are not survival or dominance. What do you think will change?
To me it seems that drawing all power to one entity robs everyone else of autonomy and self-determination. There have been arguments that this autonomy is important for happiness. So if I had huge amounts of power, I would seek a way to distribute power to give value to other humans. This puts us in a bit of a double bind: either we stay in a multipolar world where coordination is hard but people have autonomy, or we aim for a singleton world where many people might feel miserable and powerless, but well fed and safe.
We have lived in a multipolar world where human alignment is a critical key to power; therefore, in the most competitive systems, some humans got a lot of what they wanted. In a future with better AI, humans won’t be doing as much of the problem solving, so keeping humans happy, motivating them, etc. might become less relevant to keeping a strategic advantage. Why Nations Fail has a lot of examples of this sort of idea.
It’s also true that states aren’t unified rational actors, so this sort of analysis is more of a coarse-grained description of what happens over time: in the long run, the most competitive systems win, but in the short run smaller coalition dynamics might prevent larger states from exploiting their position of advantage to the maximal degree.
As for happiness, autonomy doesn’t require having all options, just some options. The US is simultaneously very strong while also affording its citizens a lot of autonomy. The US was less likely to respect the autonomy of other countries during the Cold War, when it perceived existential risks from Communism. Centralized power can be compatible with increased autonomy, but you want the centralized power to sit in a system that is less likely to abuse power (though all systems abuse power to some degree).
> We have lived in a multipolar world where human alignment is a critical key to power; therefore, in the most competitive systems, some humans got a lot of what they wanted. In a future with better AI, humans won’t be doing as much of the problem solving, so keeping humans happy, motivating them, etc. might become less relevant to keeping a strategic advantage.
Then you have an alignment problem: AIs should be making decisions consistent with human values. If AIs are making the world a worse place just by their existence, then SOMETHING HAS GONE VERY WRONG.
> in the long run, the most competitive systems win,
That is a truism. In evolutionary history, though, competitive does not necessarily mean the biggest or even the smartest.
> As for happiness, autonomy doesn’t require having all options, just some options. The US is simultaneously very strong while also affording its citizens a lot of autonomy. The US was less likely to respect the autonomy of other countries during the Cold War, when it perceived existential risks from Communism. Centralized power can be compatible with increased autonomy, but you want the centralized power to sit in a system that is less likely to abuse power (though all systems abuse power to some degree).
I may be wrong, but I expect a system that needs to maintain power without an external threat to be a lot less forgiving of autonomy. It seems that every action that might increase the chance of the sovereign losing its DSA (decisive strategic advantage) would have to be forbidden and cracked down upon. In a multipolar situation you don’t rock the boat too much, because your country, which you like, might lose to another.
Also, with a sovereign I see no chance of fixing any abuse. In a multipolar situation (especially if we go the merge-with-AI route), future people can choose to support less abusive power structures.