Equality: Benefits are distributed fairly and broadly.[2]
This sounds, at best, like a consequence of the fact that human utility functions are sublinear in resources.
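As a toy illustration of that sublinearity point (the utility function and numbers here are purely hypothetical): with any concave utility function, such as log, a fixed pool of resources produces more total utility when spread evenly than when concentrated, so "distribute broadly" falls out of the math.

```python
import math

def total_utility(allocations):
    """Total utility across people, assuming each person's
    utility is logarithmic (i.e., sublinear) in resources."""
    return sum(math.log(a) for a in allocations)

# A fixed pool of 100 resource units, split two ways among four people:
equal = total_utility([25, 25, 25, 25])      # everyone gets 25
concentrated = total_utility([97, 1, 1, 1])  # one person gets almost all

assert equal > concentrated  # the equal split wins under sublinear utility
```

The same comparison comes out the other way if utility were linear in resources, which is why the equality desideratum reads as a consequence of sublinearity rather than an independent principle.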
Democratization: Where possible, AI Benefits decisionmakers should create, consult with, or defer to democratic governance mechanisms.
Are we talking about decision-making in a pre- or post-superhuman-AI setting? In a pre-ASI setting, it is reasonable for the people building AI systems to defer somewhat to democratic governance mechanisms, where their demands are well considered and sensible. (At least some democratic leaders may be sufficiently lacking in technical understanding of AI for their requests to be impossible, nonsensical, or dangerous.)
In a post-ASI setting, you have an AI capable of tracking every neuron firing in every human brain. It knows exactly what everyone wants. Any decisions made by democratic processes will be purely entropic compared to the AI's. Just because democracy is better than dictatorship, monarchy, etc. doesn't mean we can attach positive affect to democracy and keep it around in the face of far better systems, like a benevolent superintelligence running everything.
Modesty: AI benefactors should be epistemically modest, meaning that they should be very careful when predicting how plans will change or interact with complex systems (e.g., the world economy).
Again, pre-ASI, this is sensible. I would expect an ASI to be very well calibrated. It will not need to be hard-coded with modesty; it can work out how modest to be by itself.
Thanks for asking for clarification on these. Yes, this is in general concerning pre-AGI systems.
[Medium confidence]: I generally agree that democracy has substantive value, not procedural value. However, I think there are very good reasons to have a skeptical prior towards any nondemocratic post-ASI order.
[Lower confidence]: I therefore suspect it's desirable to have a nontrivial period of time during which AGI exists but humans still retain governance authority over it. My view may vary depending on what we know about the AGI and its alignment/safety.
I appreciate the comments here and elsewhere :-)
I'm not sure that I agree that sublinear utility is the best justification for this, although I do agree that it is an important one. Other reasons I think this is important include:
Many AI researchers/organizations have endorsed the Asilomar Principles, which include desiderata like “AI technologies should benefit and empower as many people as possible.” To gain trust, such organizations should plan to follow through on such statements unless they discover and announce compelling reasons not to.
People place psychological value on equality.
For global stability reasons, and to reduce the risk of adverse governmental action, giving some benefits to rich people/countries seems prudent notwithstanding the fact that you can “purchase” QALYs more cheaply elsewhere.
If you really want to reduce the risk of adverse government reaction, don't target the most needy and vulnerable; target the swing voter.
See the 5th post, where I talk about possibly delegating to governments, which would have a similar (or even stronger) such effect.
I think this illuminates two possible cruxes that could explain any disagreement here:
(1) One's level of comfort with having some AI Benefactor implement QALY maximization instead of a less controversial program of Benefits
(2) Whether and how strategic considerations should be addressed via Benefits planning
On (1), while on an object level I like QALY maximization, having a very large and powerful AI Benefactor unilaterally implement that as the global order seems suboptimal to me.
On (2), I generally think strategic considerations should be addressed elsewhere for classic gains-from-specialization reasons, but thinking about how certain Benefits plans will be perceived and received globally, including by powerful actors, is an important aspect of legitimacy that can't be fully segregated.