We have lived in a multi-polar world where human alignment is a critical key to power; therefore, in the most competitive systems, some humans got a lot of what they wanted. In the future, with better AI, humans won't be doing as much of the problem solving, so keeping humans happy, motivated, etc. might become less relevant to keeping a strategic advantage. Why Nations Fail has many examples of this sort of dynamic.
It's also true that states aren't unified rational actors, so this sort of analysis is more of a coarse-grained description of what happens over time: in the long run, the most competitive systems win, but in the short run, smaller coalition dynamics might prevent larger states from exploiting their position of advantage to the maximal degree.
As for happiness, autonomy doesn't require having all options, just some options. The US is simultaneously very strong while also affording its citizens a lot of autonomy. The US was less likely to respect the autonomy of other countries during the Cold War, when it perceived existential risks from Communism: centralized power can be compatible with increased autonomy, but you want the centralized power to sit in a system that is less likely to abuse power (though all systems abuse power to some degree).
> We have lived in a multi-polar world where human alignment is a critical key to power; therefore, in the most competitive systems, some humans got a lot of what they wanted. In the future, with better AI, humans won't be doing as much of the problem solving, so keeping humans happy, motivated, etc. might become less relevant to keeping a strategic advantage.
Then you have an alignment problem. AIs should be making decisions consistent with human values. If AIs are making the world a worse place just by their existence, then SOMETHING HAS GONE VERY WRONG.
> in the long run, the most competitive systems win
That is a truism. In evolutionary history, though, "competitive" does not necessarily mean the biggest or even the smartest.
> As for happiness, autonomy doesn't require having all options, just some options. The US is simultaneously very strong while also affording its citizens a lot of autonomy. The US was less likely to respect the autonomy of other countries during the Cold War, when it perceived existential risks from Communism: centralized power can be compatible with increased autonomy, but you want the centralized power to sit in a system that is less likely to abuse power (though all systems abuse power to some degree).
I may be wrong, but I expect a system that needs to maintain power without an external threat to be a lot less forgiving of autonomy. It seems that every action that might increase the chance of the sovereign losing its decisive strategic advantage (DSA) would have to be forbidden and cracked down upon. In a multipolar situation, you don't rock the boat too much, because your country, which you like, might lose to another.
Also, with a sovereign I see no chance of fixing any abuse. In a multi-polar situation (especially if we go the merge-with-AI route), future people can choose to support less abusive power structures.