7. Iterate the Game: Racing Where?

We started this book with the observation that different players of Civilization value things differently. The game of Civilization emerges from the composition of these valuations. The strategy of voluntary cooperation has served each player well. We should uphold it as new AI players enter the game. If we succeed and get better at the game, what awaits in future rounds of play?

Robin Hanson suggests that:

“… in the distant future, our descendants will probably have spread out across space, and redesigned their minds and bodies to explode Cambrian-style into a vast space of possible creatures. If they are free enough to choose where to go and what to become, our distant descendants will fragment into diverse local economies and cultures. Given a similar freedom of fertility, most of our distant descendants will also live near a subsistence level. Per-capita wealth has only been rising lately because income has grown faster than population. But if income only doubled every century, in a million years that would be a factor of 10^3,000, which seems impossible to achieve with only the 10^70 atoms of our galaxy available by then. Yes, we have seen a remarkable demographic transition, wherein richer nations have fewer kids, but we already see contrarian subgroups like Hutterites, Hmongs, or Mormons that grow much faster. So unless strong central controls prevent it, over the long run such groups will easily grow faster than the economy, making per person income drop to near subsistence levels.”
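
A quick check of the arithmetic in that passage: one doubling per century over a million years is 10,000 doublings, and since \(\log_{10} 2 \approx 0.301\),

\[
2^{1{,}000{,}000/100} = 2^{10{,}000} \approx 10^{3{,}010} \gg 10^{70},
\]

so sustained per-capita growth really does collide with the physical resource bound Hanson cites.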

In this Malthusian future, civilization, left to emergent phenomena, leads to a race to subsistence. Much of our planet’s history, from bacteria to civilization, occurred at subsistence levels. If being far from subsistence is the historical exception, we shouldn’t expect an imagined future where most activity is far from it. More efficient activity outcompetes less efficient activity. In other words, we are racing toward a competitive equilibrium, which sits at subsistence level.
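
The logic of that race can be made concrete with a toy model. The sketch below is illustrative only; the growth rates, time horizon, and resource pool are arbitrary assumptions, not estimates. It simulates two groups sharing an economy that grows more slowly than the faster group reproduces: the faster-growing group comes to dominate the population, and per-capita resources for everyone fall toward subsistence.

```python
# Toy Malthusian race: two groups share an economy that grows more
# slowly than the faster group reproduces. All parameters are
# arbitrary assumptions chosen for illustration.

def simulate(centuries=100, growth_slow=1.10, growth_fast=1.50,
             economy_growth=1.20):
    pop_slow, pop_fast = 1.0, 1.0   # initial populations (arbitrary units)
    resources = 100.0               # total resource pool
    for _ in range(centuries):
        pop_slow *= growth_slow     # slow-growing group
        pop_fast *= growth_fast     # fast-growing "contrarian" group
        resources *= economy_growth # economy grows, but more slowly
    total = pop_slow + pop_fast
    return pop_fast / total, resources / total

share_fast, per_capita = simulate()
print(f"fast group's share of population: {share_fast:.6f}")  # -> ~1.0
print(f"per-capita resources:             {per_capita:.3e}")  # -> ~2e-08
# The faster-growing group approaches 100% of the population, and
# per-capita resources shrink toward zero: the competitive
# equilibrium sits at subsistence.
```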

Competitive Equilibria: Subsistence without Suffering

Should we be worried? Only if subsistence means suffering. We find subsistence intuitively repugnant because our history leads us to equate subsistence with suffering. Given the increasing automatability of manual tasks, future activity will not involve back-breaking physical labor but rather knowledge work. Does knowledge work have to entail suffering? Let’s contrast two options:

One is that the dominant computational work required to sustain our future doesn’t require cognition. Instead, cognition is just a distraction from the computational machinery. In that case, subsistence activity won’t be cognitive activity. There is simply no suffering because there is nothing to experience it. Bacteria alive today have at least a hundred times the total mass of all human beings. Insofar as their activity is at subsistence, most current activity is already at subsistence. And we are not worried by it.

The other option is that to be an efficient knowledge worker, you need cognition. In this case, the idea that suffering knowledge workers have an efficiency advantage over happy knowledge workers contradicts everything we know about knowledge work. In Age of Em, Hanson lays out a possible future of human-brain emulations who do much work at subsistence, where subsistence is not suffering, but instead involves living rich lives in VR.

This scenario is still rather conservative in assuming current human cognition as a constraint. It could be that non-human cognition will dominate human cognition, and that altered human cognition will dominate unaltered human cognition. Alteration could be as mild as transferring a literal copy of human cognition into a VR environment. Equally, one could imagine cognition engineered to find the activities it engages in fulfilling because they are useful.

Our concern about subsistence for future non-human intelligences is well-intentioned, but our intuitions about subsistence are intuitions about creatures suffering. What pushes most activity toward subsistence is the evolutionary logic that whatever activity uses resources most efficiently ends up constituting most of the activity. If cognition doesn’t use resources most efficiently, it may not become the dominant activity. If the dominant activity is cognitive, there is nothing about suffering that makes a cognitive worker a more efficient user of resources. In neither scenario must there be suffering at subsistence.

Pick Pockets Away from Subsistence

There will always be pockets away from subsistence. If humans enter into a period of rapid future growth, some of us will choose to expand to subsistence in order to produce more output. Others will choose to remain within a bubble of surplus rather than growing at the margins. Those who grow as fast as possible will have descendants constituting more of the overall aggregate activity. Most of those descendants will return to being at subsistence. But the scale of the pockets of surplus can be orders of magnitude larger than our entire world, even if they are a minority of the universe.
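
To see how both claims can hold at once, pick some illustrative numbers (entirely made up for the sake of the arithmetic): if total future activity were \(10^{30}\) times today’s, and the pockets of surplus made up only one part in \(10^{9}\) of it, those pockets would still amount to

\[
10^{30} \times 10^{-9} = 10^{21}
\]

times our entire present world: a vanishing minority of the whole, and yet unimaginably vast by today’s standards.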

Subsistence is not necessarily bad. Overall activity is in itself a kind of wealth. So more overall cognition is a kind of wealth, just like having surplus is a kind of wealth. Which kind of wealth we think is a better trajectory for our future goes back to what we value most.

There is no objective determination. A system of voluntarism gives everyone who enters that rapid growth period a good place from which to choose the path they value. Entities at subsistence and those not at subsistence alike may find they benefit from upholding a system allowing them to be independent from each other or to cooperate to achieve their goals. A vast universe of billions of cognitive creatures (or, to avoid speaking of discrete creatures, a billion times more overall cognition) is possible in which most cognition is at subsistence. Even so, it would make everything we can experience with our current selves pale in comparison.

A Descriptive vs. Prescriptive Attitude to the Future

It is dangerous to overestimate our knowledge or underestimate our ignorance. We might know something about the physics of the rest of the universe via astronomy and cosmology, as well as from reasoning about computational limits. But the utility of resources when deployed by future intelligences that are incomprehensible to us is itself incomprehensible to us. We are ignorant of their needs and wants. Does this mean we should take a rather descriptive stance toward the future?

The framework that created our current cooperative architectures emerged in a spontaneous and decentralized way. We began this book by showing how, from the prevalence of violence, an increasingly voluntary civilization emerged. But then we deviated from this spontaneous order perspective in later chapters by proposing a physical enforcement mechanism to uphold voluntarism in the face of powerful dual-use technologies.

Perhaps the evolution of voluntary interaction frameworks is itself something that we should trust future intelligences, human or not, to figure out for themselves? Insofar as this book attempts to provide an alternative to locked-in futures, are we making the same mistake by promoting any specific mechanisms?

Deferring action to future generations could be preferable under the assumption that they will have a choice in the matter. But with the automation of violence, the destructive potential for negative-sum tragedies has grown tremendously. Computer insecurity makes our civilization’s very foundations vulnerable. AI risks winner-take-all scenarios in which one player dominates everything else. Soon, those who can may race to consume the universe. Even the prospect of these scenarios creates first-strike instabilities, tempting players to destroy potential competitors in their path.

We need to act if we want future generations to be able to make any choices at all. Recognizing the dangers, we may arrive at a negotiated solution that more resembles our existing massively multiplayer civilization agreement. Whatever we do, we do within a game left to us by prior generations. Even if we do “nothing”, we endow future generations with a strategic set of relationships with payoffs and the potential for players to make violent and nonviolent moves. We have come full circle to the start of this book: we can’t exempt ourselves from creating the game within which future players decide.

There is no reason to think that the game for future generations will be a better one if we do not try to influence what it will be. There is reason to believe that by trying to do a good job, we can leave them with a game that, when iterated, results in a better situation than if we had not tried. We are actually much better off because our ancestors succeeded at imposing a game on us. The U.S. Founding Fathers set up a game that, when iterated, resulted in a world in which we are leading better lives than if they had not tried. There is much they couldn’t and didn’t anticipate, but nevertheless they got some fundamental principles right. We can and should work to determine and implement what it would take to leave the next iteration with a better game.
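
This point can be sketched in miniature with game theory. The payoff numbers in the sketch below are arbitrary assumptions, but they show how the same players, inheriting different rules, land on different dominant strategies: the game our predecessors leave us shapes whether cooperation or defection pays.

```python
# Toy illustration of "the game we leave shapes the strategies that
# emerge." All payoff numbers are arbitrary assumptions.

def best_response(payoffs):
    """Return, for each move the other player might make, the move
    that maximizes my payoff in a symmetric 2x2 game."""
    moves = ("cooperate", "defect")
    return {other: max(moves, key=lambda me: payoffs[(me, other)])
            for other in moves}

# Game A: the inherited rules impose no cost on defection/violence.
game_a = {("cooperate", "cooperate"): 3, ("cooperate", "defect"): 0,
          ("defect", "cooperate"): 5, ("defect", "defect"): 1}

# Game B: same game, but inherited enforcement makes defecting cost 4.
game_b = {k: (v - 4 if k[0] == "defect" else v) for k, v in game_a.items()}

print("Game A:", best_response(game_a))  # defect dominates -> tragedy
print("Game B:", best_response(game_b))  # cooperate dominates
```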

Future Generations’ Seats at the Table

What hope is there that the future interests of vastly greater intelligences will uphold our negotiated arrangements?

On the surface, future generations do not have seats at the negotiation table. Whatever current players can come to agreement on becomes the initial game state inherited by future generations. Nevertheless, future players do have a seat in that we of the present care about their interests. It is not just that we want them to get more of what they want. We also understand that strategic instabilities can lead to non-voluntary interaction in order to bring about a different game. Given current weapons, this could instantaneously eliminate entities whose continued existence we value. We want to avoid many future players having enough regrets about the game that they ultimately believe their best interests are served by violently overthrowing it.

We need to make arrangements good enough that using them has greater expected value to future players than taking a chance at overthrowing them—and ideally, good enough that they are immensely better off than they would be without the system. If future generations can most effectively pursue their goals by upholding our endowed arrangements, they will keep using them as Schelling Points.
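
Stated as a schematic condition rather than a formal model: a future player upholds the inherited arrangements when

\[
V_{\text{uphold}} > p \cdot V_{\text{win}} + (1 - p) \cdot V_{\text{conflict}},
\]

where \(p\) is the chance of successfully overthrowing the system, \(V_{\text{win}}\) is the value of whatever game replaces it, and \(V_{\text{conflict}}\) is the typically dire value of a failed or mutually destructive attempt. Arrangements are robust when they keep the left-hand side large for as many players as possible.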

Values & Voluntarism in Future Games

We should approach future intelligences that will make up most of the universe’s cognition without making assumptions beyond very general universal principles, such as their making choices in the service of their goals. Within this constraint, the best we can do to enable future entities to solve their problems is to set up architectures for voluntary cooperation. But ultimately, future intelligences will design their own cooperative arrangements. These should not be bottlenecked by human designers.

A rich variety of games, interactions, and arrangements will be played simultaneously in many different ways. Some will end up stuck in traps that players cannot figure out how to escape. Given enough complexity and diversity, though, some games will avoid the traps and keep growing and building wealth; the ones that get stuck will become a smaller and smaller fraction of the overall system. The system’s growing wealth, complexity, and cognition emerge from the games that didn’t get stuck. Having seen voluntarism emerge without planning across very different systems, from software architectures to institutions, we have reason to believe a similar future is at the very least possible. However, future intelligences will also engage in ever richer incremental design.

Voluntary systems depend on drawing boundaries movable by voluntary agreement. At the base level, voluntarism with regard to the boundary around our corporeal bodies has become non-negotiable. But every right beyond this is convention at some layer of abstraction. The notion that taking property is involuntary theft only arises within a level of abstraction where convention treats those assets as property in the first place. Soon, the division of space resources will extend the notion of voluntarism to other resources, with no single unambiguous path ahead.

Even what we mean by “voluntarism” is not written in stone, but emerges from negotiation. Voluntarism itself doesn’t hand us a framework of rights; rather, whatever rights framework we develop in order to coordinate becomes the framework through which voluntarism is extended.

We cannot rely on philosophy to derive what counts as voluntary in future levels of the game. It becomes a question of agreed threshold setting, i.e., of creating Schelling Points for legitimacy. A Schelling Point can result in crudely incorrect judgments in any particular case. Nevertheless, we can still be better off holding to a simple standard, because its very simplicity is what gives it stability. Eighteen years of age is frequently, but not always, the threshold for being a voluntarily consenting adult. There is no obvious, non-arbitrary standard that works better than simply agreeing to this arbitrary one. Anything that tries to be more accurate will necessarily be less simple. Common knowledge of the expectation of legitimacy is the ultimate governance.

Stable voluntary boundaries across entities are fundamental to cooperative interaction in networks of entities making requests of other entities. Because voluntary boundaries enable independent innovation on both sides of the boundary, our descendants might very well invent other coordination points. In the voluntary Paretotropian framework of “I value what I value, you value what you value, let’s cooperate”, we choose an arrangement that sets initial conditions, leaving the outcome adaptive to future knowledge.

With nothing less than our future civilization as the outcome of the games we set up, it seems incredibly important to get the initial conditions right. Or does it? Our norms emerged from iterated games shaped by initial conditions. The game we inherited determined the vantage point from which we design the next moves. Whatever constraints we now put in place will give rise to strategies that will grow into the norms and values of future generations to come.

If there is no position outside of the game from which to evaluate the game, is it all relative? Not necessarily. We can still point to a vector that sets a trajectory through a very complicated space. To the extent that we succeed in thinking through our next move, choosing our next actions along a planned trajectory will have a better-than-random correlation with the norms that emerge in the universe descended from those choices.

If we simply valued minimizing suffering, we could set up a future that succeeds at doing so, for instance by going extinct. If we value growth of cognition, creativity, and adaptive complexity, there are different, more complicated choices to make. In this book, we suggested that intelligent voluntary cooperation is a good heuristic for choosing amongst this set of choices and proposed a few moves for the next game iterations.

Chapter Summary

We have reason to believe that setting up the game as we have discussed in this book brings a better future than if we don’t try. We uphold a system that enables increasingly valuable arrangements by making sure all parties have a stake in the game. We can do this by continuing to improve our system of voluntary cooperation to include other sentient, artificial, and alien intelligences as they are encountered or developed.

Nobody can tell from our current positions on the board where this game will ultimately end. This is a feature; after all, why play if you know the outcome? What we can do is set up the board so that our descendants and our future selves can discover these wonders for themselves.
