Equality: Benefits are distributed fairly and broadly.[2]
This sounds, at best, like a consequence of the fact that human utility functions are sublinear in resources.
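Spelled out, the sublinearity argument runs roughly as follows (a sketch that assumes a concave utility function $u$ of resources, which is one reading of the claim rather than anything stated here): for a fixed budget $B$ divided among $n$ people, Jensen’s inequality gives

$$\sum_{i=1}^{n} u(x_i) \;\le\; n\, u\!\left(\tfrac{1}{n}\sum_{i=1}^{n} x_i\right),$$

so total utility is maximized by the equal allocation $x_i = B/n$. For instance, with $u(x) = \sqrt{x}$ and $B = 100$ split between two people, $\sqrt{50} + \sqrt{50} \approx 14.1$ exceeds $\sqrt{90} + \sqrt{10} \approx 12.6$.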
I’m not sure I agree that this is the best justification for the desideratum, although I do agree it is an important one. Other reasons I think it is important include:
- Many AI researchers/organizations have endorsed the Asilomar Principles, which include desiderata like “AI technologies should benefit and empower as many people as possible.” To gain trust, such organizations should plan to follow through on such statements unless they discover and announce compelling reasons not to.
- People place psychological value on equality.
- For global stability reasons, and to reduce the risk of adverse governmental action, giving some benefits to rich people/countries seems prudent notwithstanding the fact that you can “purchase” QALYs more cheaply elsewhere.
If you really want to reduce the risk of adverse government reaction, don’t target the most needy and vulnerable; target the swing voter.
See the 5th post, where I talk about possibly delegating to governments, which would have a similar (or even stronger) effect.
I think this illuminates two possible cruxes that could explain any disagreement here:
1. One’s level of comfort with having some AI Benefactor implement QALY maximization instead of a less controversial program of Benefits
2. Whether and how strategic considerations should be addressed via Benefits planning
On (1), while on an object level I like QALY maximization, having a very large and powerful AI Benefactor unilaterally implement that as the global order seems suboptimal to me.
On (2), I generally think strategic considerations should be addressed elsewhere for classic gains-from-specialization reasons, but thinking about how certain Benefits plans will be perceived and received globally, including by powerful actors, is an important aspect of legitimacy that can’t be fully segregated.
I appreciate the comments here and elsewhere :-)