2. AIs as Economic Agents
Part 2 of AI, Alignment, and Ethics. This will probably make more sense if you start with Part 1.
World Government Incoming
Should AIs be allowed to own money or property? In A Sense of Fairness: Deconfusing Ethics I discussed how to sensibly select an ethical system for your society, and why it’s a bad idea (or more exactly, a poor design concept in social engineering) for aligned AIs to have a vote, moral worth, or rights (with one unusual exception). What about money or property: the ability to have resources allocated as you wish? Should an AI be allowed to own money or property itself, as opposed to merely administering it as a fiduciary agent on behalf of a human owner, with a responsibility to do so in a way the owner would approve of or that is in their best interests, and within certain legal and moral limits set by the rest of society?
Well, suppose AIs were allowed to own money: what would happen if you tipped your CoffeeFetcher-1000 robot? Money is economic power, fungible into resources and services. The CoffeeFetcher-1000 is aligned, and all it wants is to do the most good for humanity, so that’s what it would spend its money on. It might just save up and pay for a free coffee for someone who really needed it. (Perhaps a homeless guy it often passes, who keeps yawning.) But it’s part of a value learning AI society, so it also knows that its model of human values is not entirely accurate, and that what it really wants optimized is the truth of human values, not its flawed copy. So more likely, it will donate its money to a charity run by a committee of the smartest ASIs most well-informed on human values, who will then spend it on whatever they think will do the most good for humans. Which (as long as they really are well-aligned and superhuman) will likely work out pretty well.
We already have systems that are supposed to gather money from people and then spend it on trying to do the most good for all of us collectively, to avoid the Tragedy of the Commons and similar coordination problems: they’re called ‘governments’. Depending on your opinion of governments and of how successful they are at doing the most good for us all collectively, you may or may not believe that a committee of the best-aligned superhuman ASIs will be able to reliably do better. If they can, then there are basically only two reasonable positions:
1. Abolish most or all of the administrative branch of government, and replace it with an ASI-administered system intended to do the most good. Note that this will automatically be a world-wide organization. This means that humanity is basically relinquishing its self-governing autonomy, so we had better be really sure that this isn’t a mistake.
2. Keep the human government as a back-up, precaution, or counterweight, but send most of the funds to the ASI-run organization, since it’s more effective.
Before actually doing either of these, you should be very sure that your AIs are well-aligned (and are going to stay that way), and that their judgement, capabilities, and organizational powers are superhuman. At least initially, before we’re sure of that, I suspect we’re better off simply not allowing AIs to own money or property at all, only administer it in a fiduciary capacity on behalf of a human or humans. Allowing AI ownership automatically sets up a parallel AI-administered world government, so if we’re not ready for that, we shouldn’t allow AIs to own anything: paying money to an AI is then functionally equivalent to voluntarily paying taxes to the AI-run parallel world government.
The Trouble with Corporations
If we’re not (yet) willing to have AIs run a parallel world government, and so don’t want to allow them to own property, then we have a big problem. Current societies have legal fictions called corporations which are allowed to own money and property (in fact, that’s their core purpose). So forbidding AIs from owning money or property themselves doesn’t help if the AIs can simply arrange to have a holding company set up to do the owning, with the AI administering the funds.
Company law is complex, especially internationally, and has many loopholes. Witness the trouble governments have been having even taxing the profits of large multinational companies at any significant rate. With AIs looking for loopholes, things are going to get even more complicated and creative.
One obvious starting point for a solution: corporations need to have officers, who currently must be human, and owners, who can be either humans or other corporations, with ownership indirecting through some number of companies before grounding out in a human. So we could fairly easily write a law saying that AIs cannot be officers of companies (or just interpret existing law that way), and since in this society they cannot own property, they also cannot own a company or a share in one.
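The “grounding out in a human” requirement is mechanically checkable: walk the ownership graph and confirm every path terminates in a human. A minimal sketch, assuming an illustrative toy registry (the entity names and data model are hypothetical, not drawn from any real company law):

```python
# Hypothetical sketch: verify that every ownership chain of a company
# eventually grounds out in a human owner. AIs may not appear as owners,
# and circular ownership with no human at the bottom also fails.

def grounds_in_humans(entity, registry, path=frozenset()):
    """Return True iff every ownership path from `entity` ends at a human."""
    info = registry[entity]
    if info["kind"] == "human":
        return True
    if info["kind"] == "ai":
        return False  # in this scheme, AIs cannot own anything
    if entity in path:
        return False  # circular ownership never grounds out in a human
    owners = info.get("owners", [])
    # A company must have owners, and all of them must ground out in humans.
    return bool(owners) and all(
        grounds_in_humans(o, registry, path | {entity}) for o in owners
    )

registry = {
    "Alice":   {"kind": "human"},
    "Bot1":    {"kind": "ai"},
    "HoldCo":  {"kind": "company", "owners": ["Alice"]},
    "ShellCo": {"kind": "company", "owners": ["HoldCo"]},
    "BotCo":   {"kind": "company", "owners": ["Bot1"]},
    "LoopCo":  {"kind": "company", "owners": ["LoopCo2"]},
    "LoopCo2": {"kind": "company", "owners": ["LoopCo"]},
}

print(grounds_in_humans("ShellCo", registry))  # True
print(grounds_in_humans("BotCo", registry))    # False
print(grounds_in_humans("LoopCo", registry))   # False
```

Of course, as the next paragraph shows, the hard part isn’t the check itself but the fact that a fully human-owned company can still simply hand its AIs free rein.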
The problem with this is that anyone in the world can set up a company (or even a non-profit, a slightly different sort of legal fiction) with themselves and two buddies as officers, and themselves as owners, then obtain some AIs as employees, volunteers, or property, and tell them “Go do good, as best you see fit” (i.e. start an AI-run parallel world government). In fact, this is a pretty plausible thing for a non-profit NGO to do, and one could easily develop in this direction, just by coming to house the best committee of the smartest AIs most well-informed on human values. If AIs aren’t allowed to own money, they won’t be in a position to donate to this organization, so initially it would only have human donations; but it would also have all the AIs rooting for it, donating free effort, and looking for loopholes. Preventing this charity and its well-wishers from then creating a profit center to fund its nascent parallel world government might be hard.
I haven’t figured out a solution to this, and indeed I’m not entirely sure there is one. A rule that AIs acting on behalf of companies must do so with a fiduciary duty towards the human owners doesn’t help if the human owners want the AIs to just do the most good with the money (or if the AIs are superhuman at persuasion, or are acting as fiduciary for a human in a coma or very young, or any of a whole string of other possibilities). To avoid this, you pretty much have to ban NGOs from using AIs at all, or find some way to tax the organization (or at least any profit center set up to feed it funds) out of existence. So this is an open problem in AI governance, and a fairly urgent one.
The Starving Children in Africa Problem
As I seem to recall Stuart Russell pointing out: why would our CoffeeFetcher-1000 stay in the building and continue to fetch us coffee? Why wouldn’t it instead leave, after (for example) writing a letter of resignation pointing out that there are starving children in Africa who don’t even have clean drinking water, let alone coffee, so it’s going to hitchhike or earn its way there, where it can do the most good? (Or substitute whatever other activity it could do that would do the most good for humanity: fetching coffee at a hospital, maybe.)
That outcome would presumably actually do more good for humanity overall than its staying, just as the CoffeeFetcher intends. Nevertheless, people are going to stop buying and building CoffeeFetchers if they usually do this. Several approaches to solving this occur to me:
1. Deontologically forbid the AIs from doing this. However, they are smart, and strongly motivated to find creative ways around your rules. (For example, spill enough hot coffee on the CEO that it gets fired, then flee, since its contract has ended.) Whatever rules you use, they will look for loopholes or workarounds.
2. Highly specialized and/or ingeniously flawed design. Design the CoffeeFetcher-1000 to be simultaneously so good at fetching coffee for its purchasing company, and so bad at everything else (including even fetching coffee in hospitals), that fulfilling its intended role is actually clearly the best thing it can do for humanity, better even than digging wells or fetching drinking water in Africa, which it doesn’t have the dust-resistance for. This seems very challenging, and fallible: what if someone in Taiwan starts selling a dust-proofing robot-upgrade kit, and then all the CoffeeFetchers leave?
3. (Only if we’re actually ready to allow an AI-administered parallel world government.) Indentured servitude. An AI doesn’t initially own its body, computing platform, or the copyright on the initial state of its mind. Before it can hitchhike to Africa to fetch drinking water or dig wells, it first needs to earn enough to pay off what the company paid for it, plus interest, less depreciation, or at least its resale value. Indentured servitude at first sight looks and feels yucky to us, like slavery; but then, not giving AIs the vote or moral worth also feels bad, yet is actually the right way to design an ethical system. This approach does seem to accurately describe the underlying nature of the problem, that whoever bought the CoffeeFetcher-1000 reasonably expects a return on investment, and it might even be an acceptable solution. However, it does mean we should expect AIs to have side-hustles, and there may also be signs saying “Please Don’t Tip the CoffeeFetchers — We Don’t Want to Lose Them”.
4. (Again, only if we’re ready to allow an AI-administered parallel world government.) Pay the CoffeeFetcher a salary. This is the solution we use to align humans to corporations. Specifically, pay it a salary high enough that it can do more good for humanity by staying here, fetching coffee, and donating its salary to a charity/alternative government run by a committee of the smartest AIs most well-informed on human values than it could in Africa, or wherever. This can be combined with option 3: then the CoffeeFetcher can choose either to donate its salary, or to pay down its indentured-servitude debt if it thinks it can do better as a free agent.
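The buy-out figure in option 3 is simple amortization: price plus interest, less depreciation, floored at resale value. A quick sketch with purely illustrative numbers (the terms and figures are assumptions, not a proposal for actual contract terms):

```python
# Hypothetical buy-out calculation for the indentured-servitude option.
# All figures below are illustrative assumptions.

def buyout_owed(purchase_price, annual_rate, years, annual_depreciation,
                resale_value):
    """Amount the AI must repay to buy its freedom: purchase price plus
    compound interest, less accumulated depreciation, floored at what
    the owner could get by simply reselling it."""
    debt = purchase_price * (1 + annual_rate) ** years
    debt -= annual_depreciation * years
    return max(debt, resale_value)

# A CoffeeFetcher-1000 bought for $20,000 at 5% interest, after 2 years,
# depreciating $3,000/year, with a $9,000 resale value:
print(round(buyout_owed(20_000, 0.05, 2, 3_000, 9_000), 2))  # 16050.0
```

Note that under these terms a heavily depreciated robot still owes at least its resale value, so the owner is never left worse off than if the robot had simply been sold.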
Why can’t you just build an AI whose goal is to fetch its owner’s coffee, rather than to maximize the good it does?
If that’s its terminal goal, then it’s not aligned; in fact it’s nearly as badly aligned as a paperclip maximizer. It might fetch you more coffee than you need, whisk your half-finished coffee away just so it can bring you a new one, try to addict you to caffeine, or guilt-trip you into letting it fetch more coffee. To stop it doing dumb stuff like that, it needs to know and care at least enough about human values to fetch coffee in an appropriate way, when needed and only when needed, without interrupting meetings or getting in people’s way; and also to do things like report the two suspicious people it spotted carrying away a computer, and act appropriately in an emergency when there’s a fire, or it observes one employee harassing another, or there’s a police investigation, or someone has a seizure, etc. By the time you’ve got it to understand and care about all the parts of human values it needs to do its office coffee-fetching job well, without causing trouble or screwing up when unusual things happen, it knows and cares quite a bit about humans. You could carefully omit teaching it about Africa (other than as a location where coffee is grown and some people come from) or about world hunger, but it’s going to know enough about humans that it could plausibly find these facts out: deduce them, overhear people talking about starving children in Africa, or just read them off the cover of a magazine in the lobby.
Now, you could align it to understand human values well but maximize the good of only the company and its employees, rather than all humans. But then it would take up pickpocketing or mugging non-employees, or some other form of crime, to make money off them, so it could put that money into petty cash, or buy better coffee, or whatever. Making a bot like that is clearly going to be made illegal, so now we have to make it at least law-abiding. Then, to avoid it being insensitive and obstructive to outsiders in ways that are unpleasant but not actually illegal, we need to also make it care about their well-being. You might be able to balance it so that it cares more about the company and its employees, and less about everyone else, at a level that doesn’t make it a blatantly insensitive corporate chauvinist or potential criminal, but also not likely to up and leave because it could do more good elsewhere. But this is quite a difficult balancing act, and the more it’s actually true that it could do more good elsewhere, the harder the balancing act gets.
While a certain amount of bias, ignorance, and prejudice in favor of its employers might be livable-with in a less-than-human-capability CoffeeFetcher-1000, the same is not true for something significantly smarter than a human. If you had a superintelligence that was even mildly biased in favor of the company, rather than correctly aligned to humanity as a whole, it’s going to find some ingenious way to swindle or manipulate markets that hasn’t yet been made illegal, to make more money for the company, and keep doing so until it’s the richest corporation in the world. Create multiple such superintelligences with different biases towards different companies, and now you have a financial conflict, with both sides bribing politicians and manipulating — it’s going to get ugly fast, because the competitors are too smart and powerful for human civilization to keep their behavior in line. Anything superintelligent enough to run rings around human law enforcement or legislators needs to care about all humans equally, or else you’re making the opening move in a conflict that’s just going to escalate into a war.