Exploitation and cooperation in ecology, government, business, and AI
Ecology
An article in a recent issue of Science (Elisa Thebault & Colin Fontaine, “Stability of ecological communities and the architecture of mutualistic and trophic networks”, Science 329, Aug 13 2010, p. 853-856; free summary here) studies 2 kinds of ecological networks: trophic (predator-prey) and mutualistic (in this case, pollinators and flowers). They looked at the effects of 2 properties of networks: modularity (meaning the presence of small, highly-connected subsets that have few external connections) and nestedness (meaning the likelihood that species X has the same sort of interaction with multiple other species). (It’s unfortunate that they never define modularity or nestedness formally; but this informal definition is still useful. I’m going to call nestedness “sharing”, since they do not state that their definition implies nesting one network inside another.) They looked at the impact of different degrees of modularity and nestedness, in trophic vs. mutualistic networks, on persistence (fraction of species still alive at equilibrium) and resilience (1/time to return to equilibrium after a perturbation). They used both simulated networks, and data from real-world ecological networks.
What they found is that, in trophic networks, modularity is good (increases persistence and resilience) and sharing is bad; while in mutualistic networks, modularity is bad and sharing is good. Also, in trophic networks, species go extinct so as to make the network more modular and less sharing; in mutualistic networks, the opposite occurs.
The commonsense explanation is that, if species X is exploiting species Y (trophic), the interaction decreases the health of species Y; and so having more exploiters of Y is bad for both X and Y. OTOH, if species X benefits from species Y, X will get a secondhand benefit from any mutually-beneficial relationships that Y has; if Y also benefits from X (mutualistic), then neither X nor Y will adapt to prevent Z from also having a mutualistic relationship with Y. (The theory does not address a mixture of trophic and mutualistic interactions in a single network.)
The effect is strong—see this figure:
This shows that, when nodes have exploitative (trophic) relationships, and you simulate evolution starting from a random network, the network almost always becomes more modular and less sharing over time; while the opposite occurs when nodes have mutually-beneficial relationships. (The few cases along the line y=x are, I infer, not cases where this effect was weak, but cases where the initial random network happened to be one or two species away from a local equilibrium.)
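The dynamics behind a figure like this can be sketched with a toy generalized Lotka-Volterra model. To be clear, this is my own minimal stand-in, not the paper's actual model; the link density, interaction strengths, and extinction threshold below are invented parameters, and the sketch only illustrates the trophic/mutualistic sign structure.

```python
import random

def simulate(n=20, mutualistic=False, steps=4000, dt=0.01, seed=0):
    """Toy generalized Lotka-Volterra community (NOT the paper's model).

    dN_i/dt = N_i * (r_i + sum_j a_ij * N_j)

    Trophic: each link gives one species a gain and its partner a loss.
    Mutualistic: both partners gain.  Self-limitation (a_ii < 0) keeps
    abundances bounded.  Returns the fraction of species persisting.
    """
    rng = random.Random(seed)
    a = [[0.0] * n for _ in range(n)]
    for i in range(n):
        a[i][i] = -1.0                        # self-limitation
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < 0.2:            # made-up link density
                s = rng.uniform(0.05, 0.2)    # made-up interaction strength
                if mutualistic:
                    a[i][j] = a[j][i] = s     # both benefit
                else:
                    a[i][j], a[j][i] = s, -s  # i eats j
    r = [rng.uniform(-0.1, 0.3) for _ in range(n)]
    N = [rng.uniform(0.5, 1.0) for _ in range(n)]
    for _ in range(steps):
        growth = [N[i] * (r[i] + sum(a[i][j] * N[j] for j in range(n)))
                  for i in range(n)]
        # Euler step, clamped as a numerical guard against blow-up
        N = [min(100.0, max(0.0, N[i] + dt * growth[i])) for i in range(n)]
        N = [x if x > 1e-4 else 0.0 for x in N]   # extinction threshold
    return sum(1 for x in N if x > 0.0) / n
```

Running `simulate()` and `simulate(mutualistic=True)` and comparing which species survive is the kind of experiment the figure summarizes; measuring how modularity and sharing of the surviving network differ from the initial one would require the paper's (unpublished) metrics.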
Government
Armed with this knowledge, we can look at the structure of different cultures, governments, and religions, and say whether they’re likely to be exploitative or mutualistic. Feudalism is an extremely hierarchical, compartmentalized social structure, in which every person has one trophic relationship with one superior. We can look at its org chart and predict that it’s exploitative, without knowing anything more about it. The less-hierarchical, loopy org chart of a democracy is more compatible with mutualistic relationships—at least at the top. Note that I’m not talking about the directionality of control in the relationships, as is usual when discussing democracy; I’m talking about the mere presence of multiple relationships per party. The Catholic church has a hierarchical organization, and is perhaps not coincidentally richer, relative to its members’ incomes, than any Protestant church—except for the Mormons, with assets of about $6000/member, whose organizational structure I know little about (read this if interested). I do know that the Mormon church historically combined church and state, thus halving the number of power relationships its citizens participated in.
The governmental structure of a democracy is not dramatically different from the structure of a monarchy. What’s really different is the economic structure of a free market, with many more shared relationships when compared, for instance, to monopolistic medieval economies, or mercantilistic colonial economies. It may be that the free market, not democracy, is responsible for our freedom.
Business
The employer-employee relationship appears trophic. Employees are forbidden from working for more than one employer. Consultants, on the other hand, have many clients. So do doctors and lawyers. Not surprisingly, all of them get paid more per hour than employees.
Even if you’re an employee, you can compare the internal structure of different companies. Every person within a company of selfish agents would, ideally, like their relationship with others to be exploitative, but for all other relationships to be mutualistic. The company owner would like to exploit the management and the workers, but have the management and workers have mutualistic interactions; while the management would prefer to exploit the workers. You may be able to look at the internal structure of a company, and see how far down the exploitative pattern penetrates. If it’s a hierarchy of private fiefdoms all the way down, beware.
Artificial intelligence
Any artificial intelligence will have internal structure. Artificial intelligences, unlike humans, do not come in standard-sized reproductive units, walled off computationally; therefore, there might not be cleanly-defined “individuals” (literally, non-divisible people) in an AI society. But the bulk of the computation, and hence the bulk of the potential consciousness, will be within small, local units (due to the ubiquity of power-law distributions, the efficiency of fractal transport and communication networks, and the speed of light). So it is important to consider the welfare of these units when designing AIs—at least, if we intend our initial designs to persist.
A hierarchical AI design is more compatible with exploitative relationships—even if control in the hierarchy flows in both directions. Again, control is not the issue; the mere presence of links is. A decentralized, agent-based AI, in the sense of agent-based software (often modelled on the free market, with software agents bidding on tasks), would be more amenable to mutualistic relationships.
A final caution
The work cited shows that having exploitative vs. mutualistic interactions causes compartmentalized vs. highly-shared networks to arise. It does not show that constructing compartmentalized or highly-shared networks causes exploitative or mutualistic interactions, respectively, to arise. This would be helpful to know, but it remains to be demonstrated. For intelligent free agents, one argument that it would: an agent with many relationships can cut off any partner who turns exploitative. (This might not be true within an AI.)
Finally, as just noted, plants and insects are not intelligent agents; and AI components might not be completely free agents. Each of the domains above has important differences from the others, and results might not transfer as easily as this post suggests.
PhilGoetz:
This is a completely inaccurate use of the term “feudalistic.” The rigid hierarchy of the Catholic Church is extremely dissimilar to the European medieval social order that’s commonly called “feudal,” in which local lords had a level of autonomy and autarky unimaginable by modern standards.
A Catholic priest who defies his bishop or other superior will lose his position promptly, and the same will happen to a bishop who defies the pope. Control and discipline are enforced tightly at each level, and the hierarchy is staffed by men from lower levels who get promoted and appointed by the central authority (except for the elective pope, of course, and with some rare peculiar semi-autonomous local institutions due to accidents of history). In contrast, a feudal lord ruled his fief for life as his own property, and left it to his heirs after death—while his overlord, or even king, had no control whatsoever over his day-to-day affairs, and could only demand the regular tribute. Even in cases of open defiance, it was by no means certain whether the king would be able to get his way. This fragmented world of extreme local autonomy and autarky was the polar opposite of the modern tightly disciplined Catholic hierarchy.
Generally speaking, “feudalism” is one of those terms that are often thrown around casually and without any regard for historical accuracy, to the point where they’ve become nearly meaningless (kind of like “fascism”). Whenever you feel tempted to use it for the purpose of making historical parallels, you should stop and think carefully whether it makes sense.
That’s a good point. Thanks for the correction. The relationships in the work cited don’t closely approximate either kind of relationship, so I don’t think the correction predictably changes the application here.
So really more like a corporation than a feudal empire.
What is a “feudal empire”? Can you give an example?
The most accurate meaning of this term would be a situation where numerous local lords are powerful and autonomous, but there is one among them who commands disproportionately large resources and is capable of raising overwhelmingly powerful military forces, either directly from his own personal domains or from his loyal vassals.
In this situation, any lord who defies the monarch openly can be subdued by sheer military force, so if the monarch successfully advertises his military power and his commitment to lash out whenever provoked, there can be a stable equilibrium where local lords find it in their best interest to be loyal vassals, profess allegiance, and pay their tribute in a timely manner—and otherwise be left alone to rule their fiefs. Another factor that can strengthen this equilibrium is if the monarch’s military power provides protection against an external threat that is too powerful for the lords to handle individually; in such situations, the monarch can be more of a coalition leader than overlord.
Clearly, such an equilibrium is unstable for many reasons. External military threats can disappear, a strong monarch can be succeeded by a weak one who won’t be able to insist on his supremacy credibly, local lords can become powerful to the point where defiance seems tempting, a neighboring ruler can offer a better deal for those who switch allegiance to him, several lords can form a coalition too powerful to subdue, and so on. The classic example is the history of the Frankish Empire and the Holy Roman Empire. Occasional exceptionally capable and powerful rulers were able to assert strong personal authority, but their heirs would regularly fail to uphold it.
The Holy Roman Empire is the obvious example. But really I was just being careless in writing ‘empire’.
Granted that the Catholic Church hierarchy is not feudalistic. But this suggests the question: during the height of European feudalism, the Catholic Church itself was—what? Rigidly hierarchical even then? Or did it in some way partake of the feudal lack of hierarchy and center?
Rigidly hierarchical in theory, but forced to make political compromises occasionally in reality.
See the investiture controversy, for instance.
The question of Church governance and its relation to the secular authority was the number one hot-button political issue during the European Middle Ages, over which many intellectual, political, as well as military battles were fought. It’s a vast and fascinating topic that spans several centuries of complicated history, with changing fortunes on all sides; to get a basic taste of it, this article on the Investiture Controversy is decent.
These controversies exploded again during the Reformation and the subsequent religious upheavals and wars that engulfed Europe in the 16th and 17th centuries, and they haven’t died down completely to the present day. But as a simplification (perhaps excessive), one could say that the present tightly disciplined form of Catholic Church governance developed during the Counter-Reformation period.
(It should also be noted that some local Catholic churches, most notably the Eastern ones, have much more autonomy for peculiar reasons of local history. Formally, this is known as the sui juris status.)
Feudalism is hierarchical. Vladimir is talking about the high level of autonomy of each boss in the hierarchy. Even kings did not have the absolute power we usually think of kings as having; the Holy Roman Empire being an extreme example of this, in which IIRC the Emperor was usually less powerful than any of his immediate subordinates, and served more as a balancing force or referee than as a supreme ruler.
I think that it’s not so much whether you remember correctly as which emperor you mean. The HRE lasted for nearly 1000 years, and the power of the emperor varied a lot over this time.
To make things even more complicated, besides their imperial title, Holy Roman emperors typically had a whole bunch of titles over different lands within the Empire (and sometimes even outside of it), whose significance in terms of actual control ranged from purely theoretical to very real. Their ability to assert their imperial authority across the Empire heavily depended, among other things, on the ability to draw resources from the specific lands they controlled more tightly.
Modularity sounds like it would refer, not so much to exploitation in itself, but to federalism. A society of small towns, where everybody knows your name and everybody minds each other’s business, is modular. Feudalism is also modular: every lord with his vassals is an independent unit. Connections are few, tight, and exclusive. Some people see this as a political ideal: local autonomy, life revolving around small modular units like the family, the church, the lodge.
Nestedness seems to refer to repeated arrangements—nodes with many edges. People with many friends, many clients, and so on. This notion is a little confusing to me, because it seems to be a property of nodes, rather than of graphs. A star graph has one central node with a lot of neighbors, but all the other nodes are just connected to the center. So is that a highly nested graph, or not?
Just guessing here at a more formal definition, let’s let nestedness be the average number of neighbors of each node. That’s a measure of how thickly connected the population is, and it makes a rough sort of opposite of modularity.
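That guessed definition is easy to pin down in code. A minimal sketch, with the caveat that average degree is this comment's proxy, not the paper's (unpublished) metric:

```python
def average_degree(adj):
    """Mean number of neighbors per node, for an undirected graph
    given as an adjacency dict {node: set(neighbors)}."""
    if not adj:
        return 0.0
    return sum(len(nbrs) for nbrs in adj.values()) / len(adj)

# A 4-node "module": a square with no external links.  Average degree 2.
square = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}

# A complete graph on the same 4 nodes.  Average degree 3.
complete = {i: {j for j in range(4) if j != i} for i in range(4)}
```

By this proxy, the complete graph is "more nested" than the square, matching the intuition that thicker connection is the rough opposite of modularity.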
In that case, a nested society is like an urban society, or like the internet. Each person’s circle is wide. Each person has many options. Ideas and conventions are very global, because they can spread pretty much everywhere.
But note that the definition of nestedness matters a lot. Do we want the average? The median? The maximum? The minimum? If we have some very highly connected nodes, and others much less connected, then we have centers of power or influence. It has the global quality of a highly nested graph, but it doesn’t guarantee that each person has a wide circle of interactions. Most nodes are dependent on very few, very powerful nodes. (What you think about capitalism will depend a lot on whether you see the economy as closer to a star graph or a complete graph.)
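The mean-versus-median question matters exactly in the star-like case. A quick illustration, on hypothetical 10-node graphs (not data from the paper):

```python
import statistics

def degree_list(adj):
    """Degrees of each node in an adjacency-dict graph."""
    return [len(nbrs) for nbrs in adj.values()]

n = 10
# Star: one hub connected to nine leaves.
star = {0: set(range(1, n)), **{i: {0} for i in range(1, n)}}
# Complete graph: everyone connected to everyone.
complete = {i: {j for j in range(n) if j != i} for i in range(n)}

# Star: mean degree 1.8 looks "thickly connected", but the median of 1
# reveals that almost every node depends on the single hub.
# Complete graph: mean and median agree at 9.
```

The star and the complete graph can have similar average connectivity while describing opposite power structures, which is exactly the ambiguity the paragraph above points at.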
Thinking a bit more about your government-related examples, it seems like one problem is that you don’t specify how exactly the notion of “exploiting” translates from the animal world into human relations. Those forms of exploitation that are a clear analogy of animal predatory behavior (e.g. robbery and plunder) are normally illegal in any organized human society and done openly only by rogue criminals. When they’re done by organized and persistent structures, rather than outlaw individuals, they’re typically given a pretense of a mutually beneficial relationship (e.g. extortionists claiming to sell “protection”).
Now, the question is: since the social arrangements that appear exploitative by some criteria will normally be backed by at least some theoretical pretense of mutual benefit, how can we discern to what extent such pretenses are false in each particular case? Moreover, since it’s unlikely that any human relations will be purely exploitative or mutualistic in any meaningful sense, how to devise a reasonable measure by which we can rate concrete arrangements on this scale, without any unjustified subjective judgments of whose situation is better or worse? In any case, it seems to me that the approach taken in your “Government” section is too simplistic to be useful.
One idea in my essay is that it’s easier to look at the structure, and see what type of relationship it’s compatible with, than to evaluate how exploitative the relationship is. Feudalism’s relationships were claimed to be mutually beneficial. You could spend a lot of time arguing whether that was the case; or you could just look at the structure, and say, “Hmm, evidence against.”
Evidence against what, exactly? My point is that compared to the typical conflicting relationships between living things studied by biologists, which can often be accurately described in terms of the standard patterns of predator/prey, parasite/host, etc., human relationships are usually too complex to make correct analogies with such simple patterns. To take a prominent example, simplistic biological analogies between human societies and non-human species have traditionally been a rich source of mind-killing political propaganda—just think of various occasions when some identifiable group was called “parasitic” by their political enemies.
Therefore, if you want to analyze feudalism or some other historical social order in terms of analogies with non-human species, you should explain why you believe that the analogies are applicable. I’m not dismissing your basic idea as fundamentally unsound—but I do believe that humans represent a very large evolutionary step over other species, enough to make many universal rules about non-human organisms inapplicable to humans, or applicable only under complex conditions and assumptions. In particular, it seems to me that while the notions of “exploitation” versus “mutual benefit” are fairly easy to define for (most?) non-human species, the way they should be generalized to human societies is not at all obvious.
They did have benefits, and those benefits seem to fit nicely into the model you presented. What the exploiter (lord, baron, earl, king, etc.) gives the subject is protection from other exploiters, who may be worse or who, at the very least, will be ‘more’. Even enforcing laws could be modelled here as preventing exploitative relationships. If Y is exploited by X, then X benefits from killing potential exploiter Z. This is a ‘mutual benefit’ in the X-Y relationship, but it does not make the relationship ‘mutualistic’ in the defined sense.
And, of course, ‘taxation’. It does seem that all the exploitative groups (such as the government and mobs) put a lot of work into preventing other groups from having similar relationships with their prey.
One simple heuristic I could think of is to ask what would happen if the “prey” agents in the system gained a small increase in intelligence/optimization power. Does the amount of interaction increase or decrease? If the relationship is exploitative, it would decrease; if it is mutually beneficial, it would increase.
I wonder if I’m misunderstanding something, or if you are.
Imagine a totalitarian society with one supreme leader. Drawing a simple graph of that, there’s one node at the top, connected to many nodes at the bottom. A tree. It seems to me that a tree is neither nested nor modular. If you want to identify freedom with nestedness, and autocracy with modularity, doesn’t this pose a problem?
First, “nestedness” is associated with being mutually beneficial; freedom is just one possible benefit.
I don’t know how your star graph would be measured. If the authors had published their definitions of nestedness and modularity, then I could compute the answer; but they didn’t. But I wouldn’t worry about that graph; it’s ambiguous because it’s small and simple; and it’s unlikely to describe any society, even a tribal one, for the same reason.
The issue isn’t the single special case of the star graph. If you have a graph that is very centralized—a few “central” nodes have a lot of neighbors, and most of the edges are between “central” and “periphery” nodes—that’s not a star, but it’s “star-like.” Are such graphs nested or aren’t they?
I would imagine that certain kinds of exploitative human relationships are “star-like.”
Yes, definitely. Sorry, I don’t know the answer. The authors didn’t provide an email address, and neither of their university homepages has any contact information. They provided actual physical addresses, but I’m far too lazy for that.
Physics is local. The speed of light is a derivative of that general principle. The local nature of our universe implies some strict limits on intelligence. Curiously, it looks like the only way to transcend these limits (to get a really powerful single intelligence/computer) is to collapse into a black hole, at which point you necessarily seal yourself off and give up any power in this universe. Interesting indeed.
But I have no idea how you leap to the conclusion “there is therefore no reason to expect individuals to exist in a post-AI society”—partly because I don’t know what a post-AI society is. I understand post-human... but post-AI? Is that the next thing after the next thing? That seems to be getting ahead of ourselves.
Also, you seem to reach the conclusion that there will not necessarily be any individuality in the ‘post-AI’ future society, but then give several good reasons why such individuality may persist. (namely, speed of light, locality of physics)
But what is individuality? One could say that we are a global consciousness today with just the “bulk of computation” in “small, local units”.
I’m not sure I follow this. A purely Newtonian universe with no gravity (to keep things simple) would have completely local laws and no speed of light limit.
When you say “[a] purely Newtonian universe with no gravity,” do you mean a universe in which light doesn’t exist at all as a trivial counterexample to the above claim? Or do you actually have in mind some more complex point?
I was interpreting “speed of light” in this context to mean that there’s a maximum speed in general; otherwise the claim becomes trivially false. In that regard, the claim isn’t true: one could make a universe that was essentially Newtonian, with some sort of particle or wave that functioned like light, which didn’t move instantaneously but could move at different speeds. (Actually, now that I’ve said that, I have to wonder if the post I was replying to meant that locality implies light always has a finite speed, which is true.) I suspect that you can get a general result about a maximum speed if you insist on something slightly stronger than locality, by analogy to the distinction between continuous functions and uniformly continuous functions, but I haven’t thought out the details.
Oh, I see. Thanks for the explanation.
From context, I believe “post-AI” means after AI “occurs”; that is, after the beginning of AI, or during the period in which AI is a major shaping force.
Ahh of course. For some reason I couldn’t interpret that other than as a miswritten ‘posthuman’.
I don’t think a really big computer would have to collapse into a black hole, if that is what you are saying. You could build an active support system into a large computer. For example, you could build it as a large sphere with circular tunnels running around inside it, with projectiles continually moving around inside the tunnels, kept away from the tunnel walls by a magnetic system, and moving much faster than orbital velocity. These projectiles would exert an outward force against the tunnel walls, through the magnetic system holding them in their trajectories around the tunnels, opposing gravitational collapse. You could then build it as large as you like—provided you are prepared to give up some small space to the active support system and are safe from power cuts.
The general idea is that, because of the speed-of-light limitation, a computer’s maximum speed and communication efficiency are inversely proportional to its size.
The ultimate computer is thus necessarily dense to the point of gravitational collapse. See Seth Lloyd’s “Ultimate physical limits to computation” paper for the details.
Any old humdrum really big computer wouldn’t have to collapse into a black hole, but any ultimate computer would. In fact, the size of the computer isn’t even the issue: the ultimate configuration of any matter for computation must have ultimately high density, to maximize speed and minimize inter-component delay.
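The size/latency tradeoff here is simple to quantify. A back-of-the-envelope sketch (the device sizes are illustrative, not anyone's specific claim):

```python
C = 2.998e8  # speed of light in vacuum, m/s

def crossing_time(diameter_m):
    """Minimum one-way signal time across a device of the given size,
    assuming signals travel at light speed."""
    return diameter_m / C

laptop = crossing_time(0.1)    # ~0.33 ns: a globally synchronized
                               # clock tops out in the GHz range
planet = crossing_time(1.2e7)  # ~40 ms for an Earth-sized machine

# Latency grows linearly with size, so serial speed falls as you scale
# up; the only escape is packing the same components into less space.
```

This is only the communication-delay half of the argument; the density and energy bounds are the part covered in Lloyd's paper.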
What about the uncertainty principle as component size decreases?
Look up Seth Lloyd; the first link on his Wikipedia page is “Ultimate physical limits to computation”.
The uncertainty principle limits the maximum information storage per gram of mass, and the maximum computation rate in terms of bit operations per unit of energy; he discusses all that.
However, the uncertainty principle is only really a limitation for classical computers. A quantum computer doesn’t have that issue (he discusses classical only; an ultimate quantum computer would be enormously more powerful).
What is the problem with whoever voted that down? There isn’t any violation of laws of nature involved in actively supporting something against collapse like that—any more than there is with the idea that inertia keeps an orbiting object up off the ground. While it would seem to be difficult, you can assume extreme engineering ability on the part of anyone building a hyper-large structure like that in the first place. Maybe I could have an explanation of what the issue is with it? Did I misunderstand the reference to computers collapsing into black holes, for example?
Hyper-large structures are hyper-slow and hyper-dumb. See my reply above. The future of computation is to shrink forever. I didn’t downvote your comment, btw.
Yes—it’s a matter of degree. Humans are isolated to a greater extent than I expect AIs to be. Also, the word “individual” means (I think) non-divisible, which AIs probably will not be.
I agree, the term post-AI is confusing. I’ll remove the ‘post’.
I get the feeling you intend this to be taken as a pun for relationships between the AI and us, although of course you didn’t explicitly suggest such a link.
No, that’s not my intent at all. I’m speaking of relationships between the different components of the AI. The goal is to design an AI whose components will be more likely to have a good life.
What is the difference between non-nested and modular? (Or between non-modular and nested?)
The pictures seem to be rotated by 180 degrees essentially.
Given a finite graph you can define a characteristic of that graph that for our purposes is called “modularity.”
For each integer N, consider all of the ways to partition the graph’s nodes into N non-empty subsets. For each partition, divide the size of the smallest subset by the number of edges running between different subsets. The maximum of this ratio over all partitions is the “N-modularity” of the graph. If the number is very high, there is a partition into N blocks which are dense internally and have few connections between each other; we can call these blocks modules.
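This brute-force definition can be implemented directly. A sketch, exponential-time and only usable on toy graphs; the handling of zero cross-edges (returning infinity) is my own choice, since the ratio divides by that count:

```python
from itertools import product

def n_modularity(adj, n_blocks):
    """Brute-force 'N-modularity' of an undirected adjacency-dict graph:
    the max, over partitions into n_blocks non-empty blocks, of
    (size of smallest block) / (number of edges between blocks).
    Returns float('inf') if some partition has no cross edges."""
    nodes = sorted(adj)
    edges = {(u, v) for u in adj for v in adj[u] if u < v}
    best = 0.0
    for labels in product(range(n_blocks), repeat=len(nodes)):
        if len(set(labels)) < n_blocks:
            continue                          # some block would be empty
        block_of = dict(zip(nodes, labels))
        cross = sum(1 for u, v in edges if block_of[u] != block_of[v])
        smallest = min(labels.count(b) for b in range(n_blocks))
        score = float('inf') if cross == 0 else smallest / cross
        best = max(best, score)
    return best

# Two disjoint triangles split perfectly: no cross edges at all.
two_triangles = {0: {1, 2}, 1: {0, 2}, 2: {0, 1},
                 3: {4, 5}, 4: {3, 5}, 5: {3, 4}}

# A 6-cycle: the best 2-way split is two arcs of 3 nodes, cut by 2 edges.
cycle = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
```

On these examples, `n_modularity(two_triangles, 2)` is infinite (a perfect module split), while the 6-cycle scores 3/2, reflecting that it can only be cut by severing two edges.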
Not sure about how to define nested, but I imagine it has to do with isomorphic sub-graphs; so if each of the N modules of a graph had the same structure, the graph would be nested as well as modular.
But I’m less confident about the nested definition.
That would work great.
Modular and nested are not opposites. “Nested”, they say, means a sharing of relationships; they’re not any more specific than that. Don’t know what you mean about being rotated by 180 degrees. Consider the lower-left picture: It shows modularity in mutualistic (cooperative) relationships. All the points are below the line y=x because the initial measure of modularity was larger than the equilibrium measure of modularity.