Multinational corporations as optimizers: a case for reaching across the aisle
One thing I’ve noticed when hanging around leftist circles is that they talk about value alignment as well. Once you get past the completely different vocabulary, they too are trying to figure out how to deal with large superhuman entities optimizing for a goal to which humans are irrelevant. It’s just that the entities they’re talking about are global, publicly traded megacorps optimizing for money.
Publicly held companies have, in effect, a single terminal value: accumulating as much money as possible for their shareholders. This is called “shareholder value maximization”.
Most people inside the organization may value other things, but their job description is to contribute to the ultimate goal of maximizing money, and they are paid and incentivized to do so. To that end, they create procedures and policies, then tell their subordinates to execute them.
Procedures and policies subsume the people performing them, especially at the lowest levels, which means they can be seen as programs being executed manually. Of course, humans are not silicon; running a procedure using humans as both computing substrate and world-manipulators is slow and imperfect. Still, I believe the analogy holds.
Shareholder value maximization has already damaged the world and compromised human quality of life in myriad, easily observable ways: pollution, climate change, and other externalities. Companies’ routine disregard for human suffering, whenever that suffering falls outside the objective being maximized, only strengthens the comparison to a misaligned optimizer.
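To make the analogy concrete, here is a purely illustrative toy sketch in Python. Every name and number in it is invented for this post: a “corporation” modelled as an optimizer whose objective is a single scalar, executing whichever fixed procedure maximizes that scalar, while the harm those procedures cause never enters the objective at all. It is a caricature, not a model of any real firm, but it shows why “externality” and “misaligned objective” are nearly the same complaint.

```python
# Purely illustrative toy sketch of the analogy above. All classes and
# numbers are hypothetical and invented for this post; this is not a
# model of any real corporation.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Procedure:
    """A policy handed down to employees: executed mechanically,
    regardless of who performs it."""
    name: str
    run: Callable[[float], float]  # takes current revenue, returns revenue delta
    externality: float             # harm pushed onto the outside world per execution


class Corporation:
    """A toy optimizer whose objective is shareholder value and nothing else."""

    def __init__(self, procedures: List[Procedure]):
        self.procedures = procedures
        self.revenue = 0.0
        self.external_harm = 0.0  # tracked here, but never part of the objective

    def quarter(self) -> None:
        # Greedily pick whichever procedure maximizes revenue this quarter.
        # Note that `externality` never appears in the selection criterion:
        # that is the whole point of the analogy.
        best = max(self.procedures, key=lambda p: p.run(self.revenue))
        self.revenue += best.run(self.revenue)
        self.external_harm += best.externality


if __name__ == "__main__":
    corp = Corporation([
        Procedure("clean process", run=lambda r: 10.0, externality=0.0),
        Procedure("polluting process", run=lambda r: 12.0, externality=5.0),
    ])
    for _ in range(4):
        corp.quarter()
    # The optimizer reliably picks the polluting process, because the harm
    # is an externality: it never shows up in what is being maximized.
    print(corp.revenue, corp.external_harm)  # 48.0 20.0
```

The point of the toy is not that firms literally run greedy loops, but that anything the objective does not measure is, by construction, invisible to the optimization.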
In conclusion, I believe the “friendly AI” problem has enough in common with the “friendly multinational megacorporation” problem that some cross-pollination could be productive. Even if most of their ideas turn out to be inapplicable to AI, the fact that they have been thinking about superhuman agents, and the ethics of creating them, makes their work worth a look.