Democratization: Where possible, AI Benefits decisionmakers should create, consult with, or defer to democratic governance mechanisms.
Are we talking about decision-making in a pre- or post-superhuman-AI setting? In a pre-ASI setting, it is reasonable for the people building AI systems to defer somewhat to democratic governance mechanisms, where their demands are well considered and sensible. (At least some democratic leaders may lack enough technical understanding of AI that their requests could be impossible, nonsensical, or dangerous.)
In a post-ASI setting, you have an AI capable of tracking every neuron firing in every human brain. It knows exactly what everyone wants. Any decision made by a democratic process will be purely entropic compared to the AI's. Just because democracy is better than dictatorship, monarchy, etc. doesn't mean we can attach positive affect to democracy and keep it around in the face of far better systems, like a benevolent superintelligence running everything.
Modesty: AI benefactors should be epistemically modest, meaning that they should be very careful when predicting how plans will change or interact with complex systems (e.g., the world economy).
Again, pre-ASI, this is sensible. I would expect an ASI to be very well calibrated: it will not need to be hard-coded with modesty; it can work out how modest to be by itself.
Thanks for asking for clarification on these. Yes, this is in general concerning pre-AGI systems.
[Medium confidence]: I generally agree that democracy has substantive value, not procedural value. However, I think there are very good reasons to have a skeptical prior towards any nondemocratic post-ASI order.
[Lower confidence]: I therefore suspect it's desirable to have a nontrivial period of time during which AGI will exist but humans will still retain governance authority over it. My view may vary depending on what we know about the AGI and its alignment/safety.