What is the relevance of profit per employee to the question of the power of organizations?
Corporations exist, if they have any purpose at all, to maximize profit. So this presents a sort of dilemma: their diminishing returns and fragile existence suggest that either they do intend to maximize profit but just aren't that great at it; or they don't even have that purpose, which is evolutionarily fit and which law, culture, and their owners intend them to have, in which case how can we consider them powerful at all, or remotely similar to potential AIs, etc.?
And why would a machine intelligence not suffer similar coordination problems as it scales up?
For any of the many disanalogies one could mention. I bet organizations would work a lot better if they could only brainwash employees into valuing nothing but the good of the organization—and that’s just one nugatory difference between AIs (uploads or de novo) and organizations.
What is the relevance of profit per employee to the question of the power of organizations?
Corporations exist, if they have any purpose at all, to maximize profit.
For the owners and shareholders, though, not for the employees, unless they are all partners. As to why more employees could lead to lower profit per employee: suppose a smart person running a one-man company hires a delivery truck driver. I'd expect profit per employee to drop there. That's only an example, but I think it suggests some hypotheses.
Corporations exist, if they have any purpose at all, to maximize profit. So this presents a sort of dilemma: their diminishing returns and fragile existence suggest that either they do intend to maximize profit but just aren't that great at it; or they don't even have that purpose, which is evolutionarily fit and which law, culture, and their owners intend them to have, in which case how can we consider them powerful at all, or remotely similar to potential AIs, etc.?
Ok, let’s recognize some diversity between corporations. There are lots of different kinds.
Some corporations fail. Others are enormously successful, commanding power at a global scale, with thousands and thousands of employees.
It’s the latter kind of organization that I’m considering as a candidate for organizational superintelligence. These seem pretty robust and good at what they do (making shareholders profit).
As HalMorris suggests, the fact that there are diminishing returns to profit with the number of employees doesn't make the organization unsuccessful in reaching its goals. It just means they face diminishing returns on a certain kind of resource. An AI could face similar diminishing returns.
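A minimal sketch of this point, using purely hypothetical numbers and assuming total profit grows roughly like the square root of headcount, shows total profit rising even as profit per employee falls:

```python
import math

# Hypothetical diminishing-returns curve: total profit ($M) grows like 10 * sqrt(n).
# The numbers are illustrative only, not data about any real firm.
def total_profit(n: int) -> float:
    return 10 * math.sqrt(n)

for n in (100, 10_000, 1_000_000):
    total = total_profit(n)
    print(f"{n:>9,} employees: total ${total:>6,.0f}M, per employee ${total / n:.3f}M")

# Going from 100 to 1,000,000 employees raises total profit 100x ($100M -> $10,000M)
# while profit per employee falls 100x ($1M -> $0.01M): diminishing per-employee
# returns are compatible with successfully maximizing total profit.
```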
I bet organizations would work a lot better if they could only brainwash employees into valuing nothing but the good of the organization—and that’s just one nugatory difference between AIs (uploads or de novo) and organizations.
I agree completely. I worry that in some cases this is going on. I’ve heard rumors of this sort of thing happening in the dormitories of Chinese factory workers, for example.
But more mundane ways of doing this involve giving employees bonuses based on company performance, or stock options. Or, for a different kind of organization, by providing citizens with a national identity. Organizations encourage loyalty in all kinds of ways.
It’s the latter kind of organization that I’m considering as a candidate for organizational superintelligence. These seem pretty robust and good at what they do (making shareholders profit).
As far as I know, large corporations are almost as ephemeral as small corporations.
But more mundane ways of doing this involve giving employees bonuses based on company performance, or stock options. Or, for a different kind of organization, by providing citizens with a national identity. Organizations encourage loyalty in all kinds of ways.
Which tells you something about how valuable it is, and how ineffective each of the many ways is, no?
For any of the many disanalogies one could mention. I bet organizations would work a lot better if they could only brainwash employees into valuing nothing but the good of the organization—and that’s just one nugatory difference between AIs (uploads or de novo) and organizations.
The idea that machine intelligences won't delegate work to other agents with different values seems terribly speculative to me. I don't think it counts as admissible evidence.
The idea that machine intelligences won't delegate work to other agents with different values seems terribly speculative to me. I don't think it counts as admissible evidence.
Why would they permit agents with different values? If you’re implicitly thinking in some Hansonian upload model, modifying an instance to share your values and be trustworthy would be quite valuable and a major selling point, since so much of the existing economy is riven with principal-agent problems and devoted to ‘guard labor’.
Why would they permit agents with different values?
Agents may not fuse together for the same reason that companies today do not: they are prevented from doing so by a monopolies commission that exists to preserve diversity and prevent a monoculture. In which case, they’ll have to trade with and delegate to other agents to get what they want.
If you’re implicitly thinking in some Hansonian upload model [...]
It’s at least possible that the machine intelligences would have some respect for the universe being bigger than their points of view, so that there’s some gain from permitting variation. It’s hard to judge how much variation is a win, though.
Corporations exist, if they have any purpose at all, to maximize profit. So this presents a sort of dilemma: their diminishing returns and fragile existence suggest that either they do intend to maximize profit but just aren’t that great at it
Huh? 48 billion dollars not enough for you? What sort of profit would you be impressed by?
Why would you think $48b is at all interesting when world GDP is $70t? And show me one of the largest corporations in the world which manages to hold on for even a few centuries like a mediocre state can...
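For scale, a quick back-of-the-envelope check of the two figures quoted in this exchange ($48b in profit against roughly $70t of world GDP; both numbers are taken from the thread, not independently sourced):

```python
# Back-of-the-envelope: a $48 billion profit as a share of ~$70 trillion world GDP.
profit = 48e9       # $48 billion, the figure cited above
world_gdp = 70e12   # ~$70 trillion, the figure cited above
print(f"{profit / world_gdp:.4%}")  # ~0.0686% of world GDP
```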
Why would you think $48b is at all interesting when world GDP is $70t?
Massive profits seem to me like a pretty convincing refutation of the bizarre idea that corporations aren't that great at maximising profits. Modern corporations are the best profit maximisers any human has ever seen.
And show me one of the largest corporations in the world which manages to hold on for even a few centuries like a mediocre state can...
Lifespan seems like an irrelevant metric in a discussion about corporate intelligence.
Massive profits seem to me like a pretty convincing refutation of the bizarre idea that corporations aren't that great at maximising profits. Modern corporations are the best profit maximisers any human has ever seen.
Compared to what?
Lifespan seems like an irrelevant metric in a discussion about corporate intelligence.
Ceteris paribus, long lifespan helps with generating profit: long-lived corporations accumulate reputational capital and institutional expertise, can amortize long-term investments over a longer horizon, etc.
Modern corporations are the best profit maximisers any human has ever seen.
Compared to what?
So: older companies mostly.
Lifespan seems like an irrelevant metric in a discussion about corporate intelligence.
Ceteris paribus, long lifespan helps with generating profit: long-lived corporations accumulate reputational capital and institutional expertise, can amortize long-term investments over a longer horizon, etc.
Death is a much less significant factor than with humans, since old corporations can be broken up and the pieces sold. It doesn't matter so much if old corporations die when their parts can be usefully recycled. Things like expertise can easily outlast a dead corporation.
Fair enough. Not sure I see your point though.