Cullen
FHI Report: How Will National Security Considerations Affect Antitrust Decisions in AI? An Examination of Historical Precedents
See the 5th post, where I talk about possibly delegating to governments, which would have a similar (or even stronger) effect.
I think this illuminates two possible cruxes that could explain any disagreement here:
(1) One’s level of comfort with having some AI Benefactor implement QALY maximization instead of a less controversial program of Benefits
(2) Whether and how strategic considerations should be addressed via Benefits planning
On (1), while I like QALY maximization at the object level, having a very large and powerful AI Benefactor unilaterally implement it as the global order seems suboptimal to me. On (2), I generally think strategic considerations should be addressed elsewhere, for classic gains-from-specialization reasons, but thinking about how certain Benefits plans will be perceived and received globally, including by powerful actors, is an important aspect of legitimacy that can’t be fully segregated.
AI Benefits Post 5: Outstanding Questions on Governing Benefits
Parallels Between AI Safety by Debate and Evidence Law
At some point, some particular group of humans codes the AI and presses run. If all the people who coded it were totally evil, they would make an AI that does evil things.
The only way any kind of morality can affect the AI’s decisions is if the programmers are somewhat moral.
(Note that I think any disagreement we may have here dissolves upon the clarification that I also—or maybe primarily for the purposes of this series—care about non-AGI but very profitable AI systems)
Why would humans be making these decisions? Why are we assuming that the AI can design vaccines, but not do this sort of reasoning to select how to benefit people by itself?
I don’t think it’s very hard to imagine AI of the sort that is able to superhumanly design vaccines but not govern economies.
I would avoid giving heuristics like that much weight. I would say to do QALY calculations, at least to the order of magnitude. The QALYs delivered by different possible projects can differ by orders of magnitude. Which projects are on the table depends on how good the tech is and what’s already been done. This is an optimisation that can be made better once the list of proposed beneficial AI projects is in hand.
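To make the order-of-magnitude point concrete, here is a minimal sketch of that kind of comparison; the project names, costs, and QALY figures are purely hypothetical:

```python
# Hypothetical back-of-the-envelope comparison of candidate AI Benefits projects.
# Project names, costs, and QALY estimates are illustrative assumptions, not real data.
import math

projects = {
    "AI-assisted vaccine design": {"cost_usd": 50_000_000, "qalys": 2_000_000},
    "Crop-yield optimization":    {"cost_usd": 20_000_000, "qalys": 300_000},
    "Personalized tutoring app":  {"cost_usd": 10_000_000, "qalys": 5_000},
}

for name, p in projects.items():
    qalys_per_dollar = p["qalys"] / p["cost_usd"]
    # Order of magnitude of QALYs purchased per dollar spent.
    oom = math.floor(math.log10(qalys_per_dollar))
    print(f"{name}: ~{qalys_per_dollar:.4f} QALYs/$ (order of magnitude 10^{oom})")
```

Even rough estimates like these can separate candidate projects by a couple of orders of magnitude, which is the gap the comment argues should dominate finer-grained heuristics.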
As I explained in a previous comment (referencing here for other readers), there are some procedural reasons I don’t want to do pure EV maximization at the object level once the “pot” of benefits grows big enough to attract certain types of attention.
I agree that that is true for AGI systems.
Democratization: Where possible, AI Benefits decisionmakers should create, consult with, or defer to democratic governance mechanisms.
Are we talking about decision-making in a pre- or post-superhuman-AI setting? In a pre-ASI setting, it is reasonable for the people building AI systems to defer somewhat to democratic governance mechanisms, where their demands are well considered and sensible. (At least some democratic leaders may be sufficiently lacking in technical understanding of AI for their requests to be impossible, nonsensical, or dangerous.)
In a post-ASI setting, you have an AI capable of tracking every neuron firing in every human brain. It knows exactly what everyone wants. Any decisions made by democratic processes will be purely entropic compared to the AI. Just because democracy is better than dictatorship, monarchy, etc. doesn’t mean we can attach positive affect to democracy and keep it around in the face of far better systems, like a benevolent superintelligence running everything.
Modesty: AI benefactors should be epistemically modest, meaning that they should be very careful when predicting how plans will change or interact with complex systems (e.g., the world economy).
Again, pre-ASI, this is sensible. I would expect an ASI to be very well calibrated. It will not need to be hard-coded with modesty; it can work out how modest to be by itself.
Thanks for asking for clarification on these. Yes, this generally concerns pre-AGI systems.
[Medium confidence]: I generally agree that democracy has substantive value, not procedural value. However, I think there are very good reasons to have a skeptical prior towards any nondemocratic post-ASI order.
[Lower confidence]: I therefore suspect it’s desirable to have a nontrivial period of time during which AGI will exist but humans will still retain governance authority over it. My view may vary depending on what we know about the AGI and its alignment/safety.
I appreciate the comments here and elsewhere :-)
Equality: Benefits are distributed fairly and broadly.[2]
This sounds, at best, like a consequence of the fact that human utility functions are sublinear in resources.
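(A minimal sketch of the sublinear-utility argument, assuming for illustration any concave utility function $u$, e.g. $u(x) = \log x$: by Jensen’s inequality,

$$\sum_{i=1}^{n} u(x_i) \;\le\; n\, u\!\left(\frac{R}{n}\right) \quad \text{whenever} \quad \sum_{i=1}^{n} x_i = R,$$

so for a fixed pool of resources $R$, an equal split maximizes total utility; equivalently, a marginal unit of resources buys more utility for someone who has little than for someone who has a lot.)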
I’m not sure that I agree that that is the best justification for this, although I do agree that it is an important one. Other reasons I think this is important include:
Many AI researchers/organizations have endorsed the Asilomar Principles, which include desiderata like “AI technologies should benefit and empower as many people as possible.” To gain trust, such organizations should plan to follow through on such statements unless they discover and announce compelling reasons not to.
People place psychological value on equality.
For global stability reasons, and to reduce the risk of adverse governmental action, giving some benefits to rich people/countries seems prudent notwithstanding the fact that you can “purchase” QALYs more cheaply elsewhere.
AI Benefits Post 4: Outstanding Questions on Selecting Benefits
Antitrust-Compliant AI Industry Self-Regulation
AI Benefits Post 3: Direct and Indirect Approaches to AI Benefits
AI Benefits Post 2: How AI Benefits Differs from AI Alignment & AI for Good
Thanks! Fixed.
I had one of the EA Forum’s launch codes, but I decided to permanently delete it as an arms-reduction measure. I no longer have access to my launch code, though I admit that I cannot convincingly demonstrate this.