Multinational corporations as optimizers: a case for reaching across the aisle
One thing I’ve noticed when hanging around leftist circles is that they talk about values alignment as well. Once you get past the completely different vocabulary, they too are trying to figure out how to deal with large superhuman entities optimizing for a goal where humans are irrelevant. It’s just that what they’re talking about is global, publicly traded megacorps optimizing for money.
Publicly held companies have the terminal value of accumulating as much money as possible. This is called “shareholder value maximization”.
Even if most people inside the organization value other things, their job description is to contribute to the ultimate goal of maximizing money, and they are paid and incentivized accordingly. To that end, they create procedures and policies and tell their subordinates to execute them.
Procedures and policies subsume the people performing them, especially at the lowest levels, which means they can be seen as programs being executed manually. Of course, humans are not silicon; running a procedure with humans as both the computing substrate and the world-manipulators is slow and imperfect. Still, I believe the analogy holds.
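To make the "policy as program" framing concrete, here is a minimal, made-up sketch. The policy, thresholds, and function name are all hypothetical; the point is only that the front-line employee is effectively the interpreter running the procedure by hand.

```python
# Hypothetical illustration: a corporate policy written as a procedure.
# The "interpreter" is a human employee following the steps by hand.

def refund_policy(purchase_amount: float, days_since_purchase: int, has_receipt: bool) -> str:
    """A made-up customer-refund policy, expressed as the program it effectively is."""
    if not has_receipt:
        return "deny"                    # step 1: no receipt, no refund
    if days_since_purchase > 30:
        return "escalate_to_manager"     # step 2: outside the window, someone with more authority decides
    if purchase_amount > 500:
        return "escalate_to_manager"     # step 3: large amounts need sign-off
    return "approve"                     # step 4: otherwise, approve

# The employee "runs" this on every customer interaction:
print(refund_policy(purchase_amount=120.0, days_since_purchase=12, has_receipt=True))  # -> approve
```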
The ways in which shareholder value maximization has already seriously damaged the world and compromised the quality of human life are myriad and easily observable: pollution, climate change, and other such externalities. Companies' disregard for human suffering strengthens the comparison further.
In conclusion, I believe the "friendly AI" problem has enough similarities with the "friendly multinational megacorporation" problem that some cross-pollination could be productive. Even if most of their ideas are implausible for use with an AI, the fact that they have also been thinking about the creation of superhuman agents, and the ethics thereof, makes their work worth looking at.
This is also a plausible route for spreading awareness of AI safety issues to the left. The downside is that it might make AI safety a "leftist" issue if a conservative analogy is not introduced at the same time.
The problem is that most folks on the left I've talked to with this pitch are even more skeptical of the idea that highly capable intelligent software can exist. They generally seem to assume the current level is the peak and that progress is stuck. Solving that would go a long way toward communicating the issue to them.
Do they think it’s a hardware/cost issue? Or do they think that “true” intelligence is beyond our abilities?
It's the full range of things people say, just a higher ratio of people saying them on the left, in my experience. Also, re: making it a leftist issue: right now it's a liberal issue, and only a liberal issue. Liberal CEOs have offended both right-wingers and leftists regarding AI safety, so it's possible that getting the actual left on board might be promising somehow. Not sure. It seems like this discussion should be had on LessWrong itself first. I've certainly seen leftists worrying about AI aligned to megacorporations.
It may also be worth noting how a sufficiently advanced "algorithm" could start making its own "decisions". For example, a search/display algorithm built to maximize advertising revenue, given enough resources and no moral boundaries, might suppress search results containing negative opinions of it, promote taking down competitors, and/or preferentially display news and arguments in favor of giving Algorithms more power. Skepticism about The Algorithm is a cause many political parties can already agree on; the possibility of The Algorithm going FOOM might accelerate public discussion of AI development in general.
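To make this concrete, here is a minimal, hypothetical sketch (all names, fields, and numbers are invented) of how such suppression could emerge from the objective alone, without anyone explicitly programming it:

```python
# Hypothetical toy model of the failure mode described above: nobody writes
# "suppress criticism", but a revenue-only objective produces that behavior anyway.
from dataclasses import dataclass

@dataclass
class Result:
    title: str
    ad_revenue_per_click: float      # immediate revenue if shown and clicked (made-up numbers)
    effect_on_future_usage: float    # predicted change in long-term engagement with the platform

def score(result: Result) -> float:
    # Objective: total expected revenue, now plus later. Nothing here mentions "criticism".
    return result.ad_revenue_per_click + 10.0 * result.effect_on_future_usage

results = [
    Result("Celebrity gossip roundup", ad_revenue_per_click=0.30, effect_on_future_usage=+0.02),
    Result("Investigation: this search engine manipulates results",
           ad_revenue_per_click=0.25, effect_on_future_usage=-0.10),  # criticism predicted to reduce usage
]

for r in sorted(results, key=score, reverse=True):
    print(f"{score(r):+.2f}  {r.title}")
# The critical article sorts last purely as a side effect of the revenue objective.
```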
Corporations only exist within a legal governance infrastructure that permits incorporation and shapes externalities into internalities. Without such infrastructure you have warring tribes/gangs, not corporations.
This is the naive leftist/Marxist take. In practice, communist countries such as Mao-era China outpaced the West in pollution and environmental destruction.
Neither government bureaucracies nor corporations are aligned by default; alignment always requires careful mechanism design. Since markets are the Pareto-efficient organizational structure, they also tend to solve these problems more quickly and effectively, given appropriate legal infrastructure to internalize externalities.
Skepticism about the alignment of governments, and the incentives thereof, has existed for almost as long as governments themselves. Elections, for example, are a crude but better-than-nothing attempt to align political interests with public interests, and much ink has been spilled on improving this alignment, and even on whether alignment with general public opinion is a good idea at all.
Far less of this discussion has occurred in the case of extremely large companies, as they are a much more recent phenomenon.
A pithier version of this has been suggested to me as “[Corporations are] like paperclippers except for money”.
The problem with this analogy is that megacorps must at least pay lip service to the rule of law, and there's no way a megacorp would survive if the government decided it shouldn't. Any company is ultimately made of people, and those people can be individually targeted by the legal system (or worse). What's the equivalent for an AGI?
The fact that there is not really an equivalent for AGI is admittedly a place where this analogy breaks down.