I also think there is a genuine alternative in which power never concentrates to such an extreme degree.
IMO, a crux here is that I predict extreme concentration of power as the default outcome if we ever build superintelligence, because coordination bottlenecks are easily solvable for AIs (with the possible exception of acausal trade), and because superhuman taste makes human tacit knowledge basically irrelevant.
More generally, I expect dictatorship by AIs to be the default mode of government, because I expect the mass of humanity to be both easily persuadable of arbitrary things in the long term (via technologies like BCIs) and economically irrelevant, while the robots of a future society will have effectively unified preferences (since coordination and trade are easy for them).
In the long run, this means value alignment is necessary if humans are to survive under superintelligences in the new era. Unlike others, though, I think the pivotal period does not require value-aligned AIs: instruction-following can suffice as an intermediate state that resolves a lot of x-risk issues. And while certain claims may hold in the limit, much of the relevant dynamics during the pivotal period will be far from the limiting cases, so we have a lot of influence over which limiting behavior gets picked.