Large corporations are not really very like AIs at all. An Artificial Intelligence is an intelligence with a single utility function, whereas a company is a group of intelligences with many complex utility functions. I remain unconvinced that aggregating intelligences and applying the same terms is valid—it is, roughly speaking, like trying to apply chromodynamics to atoms and molecules. Maximising shareholder value is also not a simple problem to solve (if it were, the stock market would be a lot simpler!), especially since “shareholder value” is a very vague concept. In reality, large corporations almost never seek to maximise shareholder value (that is, in theory one might, but I can’t actually imagine such a firm). The relevant terms to look up are “satisficing” and “principal-agent problem”.
This rather spoils the idea of firms being intelligent—the term does not appear applicable (which is, I think, Eliezer’s point).
I’d say “artificial” is probably the wrong word for describing the intelligence demonstrated by corporations. A corporation’s decision calculations are constructed out of human beings, but only a very small part of the process is actually explicitly designed by human beings.
“Gestalt” intelligence is probably a better way to describe it. Like an ant-hill. Human brains are to the corporation what neurons are to the human brain.
I doubt one could say with any confidence that they are universally “smarter” or “dumber” than individual humans. What they are is different. They usually trade speed and flexibility of calculation for broader reach of influence and information gathering. This is better for some purposes. Worse for others.
Corporations do not have a utility function, or rather, they do not have a single utility function; they have many. An agent whose many utility functions pull in conflicting directions has inconsistent preferences, so you might be able to "money pump" the corporation.
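To make the "money pump" concrete, here is a minimal toy sketch (my own illustration, not anything from the thread): an agent with cyclic preferences A > C > B > A will pay a small fee for each trade it regards as an upgrade, so a trader can cycle it back to its starting position while pocketing the fees.

```python
FEE = 1  # amount the agent will pay for any trade it prefers

# Cyclic (intransitive) preferences: each key is preferred to its value.
prefers = {"A": "C", "C": "B", "B": "A"}  # A > C, C > B, B > A

def money_pump(holding, cash, rounds):
    """Repeatedly offer the item the agent prefers to its current holding."""
    extracted = 0
    for _ in range(rounds):
        # Find the item the agent strictly prefers to what it holds.
        better = next(item for item, worse in prefers.items() if worse == holding)
        holding, cash = better, cash - FEE  # agent pays to "upgrade"
        extracted += FEE
    return holding, cash, extracted

holding, cash, extracted = money_pump("A", cash=10, rounds=3)
print(holding, cash, extracted)  # back to "A", 3 units poorer
```

After three trades the agent holds exactly what it started with, minus three fees; a coherent (transitive) preference ordering would make such a cycle impossible.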
Superintelligence = a general intelligence that is much smarter than any human.
I consider myself to be an intelligence, even though my mind is made of many sub-processes, and I don't have a stable coherent utility function (I am still working on that).
The relevant questions are:
Is it sometimes useful to model corporations as single agents? - I don't know.
Are corporations much smarter than any human? - No, they are not.
I say "sometimes useful" because at other times you would want to study the corporation's internal structure, and then it is definitely not useful to see it as one entity. But since there is no fundamental, indivisible substance of intelligence, any intelligence will have internal parts. Therefore, having internal parts cannot disqualify something from being an intelligent agent.
The only sense in which all AIs have utility functions is a sense in which they are describable as having UFs, in a ‘map’ sense.
Who said anything about AI?