The realist in me says that tyrannical souls and tyrannical governments seem likely to become the default state of governance, because the forces that currently sustain democracy and liberty will disappear with the rise of advanced AI. We should therefore start planning now to value-align the future AIs we build, the people who control those AIs, and any future AIs that come to control governments.

More generally, with the rise of AI I expect value alignment to be much more of a generator of outcomes in the 21st century than most other forces, including people selfishly doing things that produce positive externalities as a side effect, and this is not just about the classical AI alignment problem.