1. Meet the Players: Value Diversity
Value Diversity is Here to Stay
Civilization is an inherited game shaped by those before you.
You start this game as a product of previous iterations: “Not one of your pertinent ancestors was squashed, devoured, drowned, starved, stranded, stuck fast, untimely wounded, or otherwise deflected from its life’s quest of delivering a tiny charge of genetic material to the right partner at the right moment in order to perpetuate the only possible sequence of hereditary combinations that could result—eventually, astoundingly, and all too briefly—in you.”
Going forward, you must choose among possible games to pass on to your future self and generations.
You could make nihilistic outcomes like this more likely: “In some remote corner of the universe, poured out and glittering in innumerable solar systems, there once was a star on which clever animals invented knowledge. That was the highest and most mendacious minute of “world history” — yet only a minute. After nature had drawn a few breaths the star grew cold, and the clever animals had to die. There have been eternities when it did not exist; and when it is done for again, nothing will have happened.”
Or you could make fantastic futures like this more likely: “Whether anyone else is out there or not, we are on our way. [...] Expansion will proceed, if we survive, because we are part of a living system and life tends to spread. Pioneers will move outward into worlds without end. Others will remain behind, building settled cultures throughout the oases of space [...] Where goals change and complexity rules, limits need not bind us. [...] New technologies will nurture new arts, and new arts will bring new standards. The world of brute matter offers room for great but limited growth. The world of mind and pattern, though, holds room for endless evolution and change. The possible seems room enough.”
Even if you have high hopes for the future, you are faced with the following conundrum: You’re in this game of civilization with players who have a diversity of values. Some values are complementary, some opposing, and some include the destruction of the playing field. As the game continues, the stakes will increase. Soon enough, rapid progress in technologies will give everyone more power, so not playing is not an option. If you hate civilization, your next move is easy. But if you love this game, and want to see it evolve over many iterations, you have a more complicated set of choices to make.
In this chapter, let’s meet fellow civilizational players and figure out how to play this game well.
Striving for Coherence, Built to be Conflicted
What does it mean to play civilization well? A scroll through Twitter shows answers differ widely. No wonder, according to a social intuitionist model for human values: Your actions are mostly based on intuitions, such as a sudden revulsion at an action. Your intuitions are mostly shaped by factors outside of your control. Evolution built you to care about things because that caring structure had survival benefits. Only when prompted to explain them do you invent rationalizations for them, drawing on philosophy, politics and the like. When it comes to reasoned judgment about right and wrong, you resemble “a lawyer trying to build a case rather than a judge searching for the truth.”
The social intuitionist model explains why human intuitions and theories as to what is right or wrong often conflict. But perhaps the tendency to create abstract rationalizations about your intuitions isn’t all that bad. At least from an individual problem-solving perspective, the ability to reflect on your caring structure can help you create an overall narrative aligning your various wants now with those of your future self. You start having reactions to things as an infant, before you can reason about them. But as you wonder about the reasons for these reactions, gain more experience, and live through times of conflict and growth, many of your intuitions will come up for revision.
You could think of ethics as the internal negotiation between which intuitions you want to hold onto as values and which you want to dismiss as biases. You revise what turns out to be futile or incompatible with the rest of your caring structure. In this process, nothing is sacred; your intuitions and even your beliefs can change, with multiple consistent equilibria among them. Nevertheless, with continuous adjustment of your default caring structure, the emerging set may help you move closer to who you want to be.
You may never fully “solve” your internal conflicts, but that may be just as well. Robin Hanson points out that “if your mood changes every month, and if you die in any month where your mood turns to suicide, then to live 83 years you need to have one thousand months in a row where your mood doesn’t turn to suicide.” Thanks to your internal division, even if parts of your mind turn suicidal at times, the others are there to keep them in check.
While we like to think of humans as entities with single agency, perhaps we are built to be conflicted. Marvin Minsky pictures our minds as having many internal agents, each having simpler, conflicting preferences. Our adaptive intelligence arises from these agents keeping each other in check. So the next time you beat yourself up because part of you wants this and part of you wants that, you may take solace that having some internal conflict, rather than perfect alignment, may be more a feature than a bug. Even if you manage to make progress towards a more coherent whole, you should expect residual internal conflict.
Epistemic Humility and Open Minds
Currently, 7+ billion other players are already playing the game of civilization. They all start with different ethical intuitions and theories translating into different caring structures. With imperfect insight and conflict regarding your own caring structures, epistemic humility is advised when addressing theirs.
Currently, other humans are still similar enough that we can sometimes model and shape them. Many of our socially rich interactions rely heavily on this cognitive similarity: We react to people and can use our reactions to them to model their reactions to us. By predicting how others will judge us, we learn to judge ourselves. Adam Smith calls this model the impartial spectator, which continually asks: “if I were in your shoes, seeing me doing what I’m doing, how would I react to me?”
According to Vernon Smith, this impartial spectator is a good shortcut to explain rich human cooperative behaviors that are anomalies with respect to the game theory of that particular interaction. We don’t always cheat, even if we could, and we sometimes punish cheating to no personal benefit. For instance, experiments by Dan Ariely suggest our dishonesty budget when no one is looking is determined by how much dishonesty we can allow ourselves while still maintaining a basically honest self-image. That’s because rather than wanting to act so as to gain praise and avoid blame, our impartial spectator makes us want to act in a praiseworthy manner, even if no one sees it.
Knowing about each others’ impartial spectator, and how it is shaped by others’ reactions, generates a rich social account of values: We not only react to the culture around us, but influence everyone else’s values by providing an inspirational example, and by praising and supporting projects and people we admire. All of us, by leading what we think of as good lives, help to form the overall evolution of values that will outlive us.
This is only possible as long as we are similar enough. Just as our ancestors would be shocked by our levels of tolerance, our descendants’ worlds will seem very strange to us. According to Robin Hanson, rates of social change have sped up with increased growth, competition, and technological change, so we should also expect accelerating value drift over time. With people living longer lives, our descendants may increasingly live with more fellow descendants who are very different from them. As the menu of environments to explore, experiences to have, and biological changes to make expands, our moral intuitions and abstractions may increasingly diverge.
From Value Diversity to Value Drift
The more diverse our civilization gets, the less we may be able to meaningfully model others and their reactions to us. The Silver Rule may provide a good practical heuristic for our interactions for a while. If the Golden Rule is “Do unto others as you would have them do unto you”, the Silver Rule is “Don’t do unto others as you would have them not do unto you.” Currently, our models of what we want done to us may sometimes still work to figure out what others would like to have done unto them. In an increasingly diverse world, the Silver Rule is more appropriate as an epistemically humble heuristic. Figuring out how to avoid harming others is difficult enough without also trying to actively act on their behalf.
Heuristics can be useful, but if we want a robust civilizational architecture across the next few rounds of play, we need more reliable frameworks for engagement with different cognitive architectures. Even if we don’t have to worry about meeting alien minds tomorrow, we are actively creating mind-architectures very different from us. Robin Hanson explores a potential future economy in which humans create human brain emulations. These have reduced inclinations for art, sex, and parenting, can change speeds by changing hardware, and create temporary copies of themselves.
This scenario at least assumes humans as a source, but we are also making remarkable progress via software and hardware in AIs that function nothing like the human brain. While current “neural-network” architectures at best crudely mirror some of our brain’s functionality, it is naive to suppose AIs will mirror human caring structures for long.
With diverse values that increasingly drift apart, we can’t rely on aligning players on one grand strategy. Instead, to set civilization up well over the next rounds of play, we must build a playing field that can handle fundamental value differences.
Future minds may not exhibit much of what we call “values” at all, but could be better characterized as “goal-seeking” entities. Nevertheless, as long as they have goals and act as though they make choices, they will have revealed preferences. With less and less comprehension of other players’ values, those revealed preferences may be all we have when designing systems for different players to reach their goals. But they may also be all we need to set up a playing field that allows for good games, as judged by each player.
How To Win The Game: Voluntary Cooperation
Let’s look at possible strategies for playing the game of civilization through the lens of revealed preferences:
Bob’s Preferences
Imagine a game of civilization played by Alice and Bob. A world in which different players have different goals can be described in terms of preferences among future states. The center dot is the current state of the world that players Alice and Bob are in. The axes organize the possible world states: vertically by Alice’s preferences and horizontally by Bob’s preferences. Bob prefers the green worlds to the current world.
Positive Sum & Negative Sum
If we could extrapolate utilities from Alice’s preferences and Bob’s preferences, we could say their interactions can lead to outcomes that have greater overall utility or smaller overall utility. Meaningfully comparing utilities across players will become more problematic the more diverse their futures get. But for now, let’s assume everything to the upper right of the red line is a “positive sum” outcome, and everything to the lower left is a “negative sum” outcome.
Voluntary Cooperation
There is a problem with simply seeking positive sum outcomes. If Bob would be worse off than he currently is, he would fight any attempt to get there. Likewise, Alice would fight the positive sum outcomes she likes less than the status quo. But if Alice and Bob are each at least as well off as they currently are, both have good reason to cooperate. Together, they can move to Pareto-preferred worlds. Situation B is Pareto-preferred to situation A if someone prefers B to A, and no one prefers A to B. Those worlds can be reached by voluntary cooperation.
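As a minimal sketch of these distinctions, assuming (only for illustration) that Alice’s and Bob’s utility changes can be expressed as numbers relative to the status quo, a proposed move away from the current world can be classified as follows:

```python
# Minimal sketch: classify a proposed change of world state, assuming
# (only for illustration) that Alice's and Bob's utility changes can be
# expressed as numbers relative to the status quo (0, 0).

def classify(delta_alice: float, delta_bob: float) -> dict:
    """Classify a move away from the status quo by its utility deltas."""
    return {
        # "Positive sum" only makes sense if the utilities are comparable.
        "positive_sum": delta_alice + delta_bob > 0,
        # Pareto-preferred: no one is worse off, at least one is better off.
        "pareto_preferred": (delta_alice >= 0 and delta_bob >= 0
                             and (delta_alice > 0 or delta_bob > 0)),
        # A player who would end up worse off has reason to resist the move.
        "resisted_by": [name for name, d in
                        [("Alice", delta_alice), ("Bob", delta_bob)] if d < 0],
    }

print(classify(5, -2))  # positive sum, but Bob resists -> not voluntary
print(classify(3, 1))   # positive sum and Pareto-preferred -> voluntary
```

The second call is the voluntary case: neither player has a reason to resist, so the move toward a Pareto-preferred world can actually be made.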
Cooperation Across Humans
Human similarities also come with the tendency to compare oneself to others, including strong fairness intuitions and envy reactions. If Alice’s gain is perceived as too unfair, only she would be invested in bringing about that future, even if, all else equal, Bob would have consented to the deal. The all-too-human tendency to compare ourselves to others may lead Bob to reject a Pareto-preferred deal. It narrows the scope of what the world’s human players can achieve by voluntary cooperation. [1]
Cooperation Across Intelligences
Traditionally, the definition of an agent with utility assumes a comparability that future intelligent systems won’t necessarily have. Without meaningful metrics on which to compare utility across very different mind architectures, the diagonal red line, indicating positive and negative sum, disappears.
As long as players have goals and act as though they make choices, they will have revealed preferences. Those revealed preferences may be all we have when designing systems for players to reach their goals.[2] Upholding voluntary cooperation could remain a stable common goal for both Alice and Bob across many rounds of future games, regardless of their intelligence. It’s all they need to unlock Pareto-preferred worlds that are better for each by their standards. The rest of this book is about how to set civilization up for this path of intelligent voluntary cooperation.
In particular, we’ll explore paths that
Uphold voluntarism across humans
Improve cooperation across humans
Do so in light of other intelligences
Here’s a preview of the underlying principles at work:
Zooming in on Voluntarism
Involuntary Positive Sum
Imagine that Alice, seeking a positive sum arrangement that makes her vastly better off, explains to Bob: “I’ll be better off by more than you’ll be worse off” and embarks on her way to the blue point in the diagram. Bob doesn’t like this plan, so we have a conflict. Not only do we have a conflict, but Alice expects the conflict, and Bob expects that Alice expects it.
Hobbesian Traps
Expecting Alice to act against his will, Bob may strike first. Alice, expecting this, will want to weaken Bob first. A cascade of mutually expected conflict can result in a Hobbesian Trap, where the mutual expectation of conflict creates a preemptive conflict. While cooperation is better for both sides, lack of trust or fear of defection can lead to first-strike instabilities, wars, and other terrible games. By reliably upholding voluntary interactions as a Schelling point, and signaling this to Bob, Alice can lessen her and Bob’s impetus to introduce and abuse precedents that could spiral into Hobbesian Traps.
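A toy payoff matrix, with numbers made up purely for illustration, shows the shape of the trap: mutual restraint is Pareto-best, but striking becomes each player’s best response once a strike is expected from the other side.

```python
# Toy security-dilemma payoffs (hypothetical numbers): each player chooses
# to "wait" or "strike". Mutual waiting is Pareto-best, but striking is the
# best response to an expected strike -- the Hobbesian Trap.

PAYOFFS = {  # (Alice's move, Bob's move) -> (Alice's payoff, Bob's payoff)
    ("wait", "wait"):     (4, 4),
    ("wait", "strike"):   (-1, 3),
    ("strike", "wait"):   (3, -1),
    ("strike", "strike"): (1, 1),
}

def best_response(expected_other_move: str) -> str:
    """Alice's best move given what she expects Bob to do (the game is symmetric)."""
    return max(["wait", "strike"],
               key=lambda my_move: PAYOFFS[(my_move, expected_other_move)][0])

print(best_response("wait"))    # "wait": peace is self-reinforcing...
print(best_response("strike"))  # "strike": ...but so is expected conflict
```

The game has two self-reinforcing expectations; a credible commitment to voluntary interaction is a way of keeping both players anchored on the peaceful one.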
Zooming in on Cooperation
Single Anonymous Interaction
The game of civilization has solved the simple individual exchange problem. But for modern cooperation, i.e. cooperation at scale, we can’t rely on Alice and Bob being neighbors. They may seek simple anonymous trades where neither reputation nor contracts can grip. Imagine Bob gave Anonymous Alice 10 shekels for the promise of a Gourd. She is 10 shekels richer and he is 10 shekels poorer. Anonymous Alice would no longer have a reason to give Bob what he wants.
Upon Bob inquiring where his Gourd is, Anonymous Alice may run off, quoting Hobbes, “For he that performed first has no assurance the other will perform after, because the bonds of words are too weak”. Bob, knowing this, would never give Alice the 10 shekels in the first place. In situations where all players jointly prefer an outcome, but cannot get there by only interacting voluntarily, an obstacle blocks their path to Pareto-preferred worlds.
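Reasoning backward through this one-shot exchange makes the failure explicit. The payoffs below are hypothetical, chosen so that a completed trade at 10 shekels would leave both players better off:

```python
# One-shot anonymous exchange, sketched as backward induction with
# hypothetical payoffs: Bob values the gourd at 15 shekels, Alice at 5,
# so trading it for 10 shekels would leave both better off.

GOURD_VALUE_TO_BOB = 15
GOURD_VALUE_TO_ALICE = 5
PRICE = 10

def alice_delivers_after_payment() -> bool:
    # Delivering nets Alice PRICE minus the gourd she gives up (10 - 5 = 5);
    # running off nets her the full PRICE (10). She prefers to run off.
    return PRICE - GOURD_VALUE_TO_ALICE > PRICE

def bob_pays_first() -> bool:
    # Bob anticipates Alice's choice before handing over his shekels.
    if alice_delivers_after_payment():
        return GOURD_VALUE_TO_BOB - PRICE > 0   # +5: worth paying
    return -PRICE > 0                           # -10: not worth paying

print(bob_pays_first())  # False -- the mutually preferred trade never happens
```

Both players end up at the status quo, even though a completed trade would have moved them to a Pareto-preferred world.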
Better Technologies of Cooperation
To cross the obstacle, we need better tools for cooperation. Bob could lock up his 10 shekels in escrow that automatically pays Anonymous Alice when she proves that she has delivered the Gourd, as sketched below. But a tremendous number of remaining problems are basically the same phenomenon writ large. If all Alices and Bobs in this world could find each other, they may be able to build bridges to Pareto-preferred worlds. In reality, collective action dilemmas are often phenomena in which thousands, or even millions, of players interact simultaneously. Rather than being simple trades, they might be complex arrangements that unfold over time. Players can’t get to preferred worlds in a way that ensures all are better off at each step along the way. To tackle these problems, we need to innovate technologies of cooperation and democratize their use. We’ll return to this problem.
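For the simple two-player case, here is a sketch of the escrow fix. The mechanism is invented for illustration; a real escrow also needs a trusted way to verify delivery, such as an arbiter or an oracle.

```python
# Sketch of an escrow that removes the first-mover problem: Bob's payment is
# locked up front and released to Alice only on proof of delivery.
# (Invented for illustration; verifying delivery is assumed to be handled
# by some trusted arbiter or oracle.)

class Escrow:
    def __init__(self, buyer: str, seller: str, amount: int):
        self.buyer, self.seller, self.amount = buyer, seller, amount
        self.locked = True

    def release_on_delivery(self, delivery_proven: bool) -> str:
        """Pay the seller if delivery is proven, otherwise refund the buyer."""
        self.locked = False
        return self.seller if delivery_proven else self.buyer

deal = Escrow(buyer="Bob", seller="Anonymous Alice", amount=10)
print(deal.release_on_delivery(delivery_proven=True))  # Anonymous Alice gets paid
# Had she run off, release_on_delivery(False) would have refunded Bob instead.
```

Neither player has to trust the other; they only have to trust the mechanism, which is what makes it a technology of cooperation.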
Zooming in on Intelligence
Civilization as Diagram with 7 Billion Dimensions
When considering Alice and Bob, we need to remember that we’re actually looking at a 7+ billion dimensional diagram. Any particular interaction involves only a bounded set of aspects of the world and only a bounded number of participants. For each one of these interactions, there is a separate diagram, in which we can organize the possible states of the world.
Voluntary Independence
Carol and Dave are in some other part of the world. They have never heard of Alice and Bob, who have never heard of Carol and Dave. Let’s rotate Bob out of the diagram and rotate Carol in. In parallel to Alice’s and Bob’s interaction, Carol is cooperating with Dave to bring about a world state in which Carol is better off. Even though each of these individual transitions hugs the edges of the Pareto-box, collectively, they are taking orthogonal steps into the Pareto-preferred area.
In order for the right to choose to cooperate to be meaningful, we need the right to choose not to cooperate. Voluntary independence is actually most of the 7+ billion dimensional diagram. Most people don’t know each other, and most of their activities have no strong connection to most other activities in the world. Thanks to the independence of the arrangements that form in a vast experimentation space, some arrangements can go forward when others get stuck. The system continuously selects for arrangements that create productive cooperation, so that over time it is dominated by their beneficial results. While we should improve our arrangements, this process is about becoming better at aligning everyone’s expectations of increasing payoffs by actually increasing payoffs.
The Paretotropism of Civilization
Plants exhibit phototropism: they grow toward the light. Civilization has no goals of its own. But it does have a dynamic, a paretotropism: In the same way that plants grow toward the light, civilization, emerging from our voluntary interactions with each other, progresses towards worlds that are generally better for everyone. [3] Tyler Cowen further compares our modern civilization to a Crusonia plant, a mythical, automatically-growing crop that generates more output each period. The more crops, the more there is to go around for different players to pursue their goals.
By cooperating via voluntary choices, and otherwise leaving each other alone for voluntary independence, our civilization is climbing Pareto-preferred hills: “If pairwise barter amounts to Pareto-hill-climbing across a rough terrain with few available moves, trade in a system with currency and prices amounts to hill-climbing across a smoother terrain with many available moves”.
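Here is a minimal sketch of the pairwise-barter version of that hill climb, with made-up utility functions: random swaps are proposed, and a swap is accepted only if it leaves neither player worse off and at least one better off.

```python
import random

# Sketch of Pareto hill-climbing with made-up utility functions: random
# swaps of two goods between Alice and Bob are proposed, and a proposal
# is accepted only if it makes no one worse off and someone better off.

def u_alice(alloc):  # Alice mostly cares about good 0
    return 2 * alloc["alice"][0] + alloc["alice"][1]

def u_bob(alloc):    # Bob mostly cares about good 1
    return alloc["bob"][0] + 2 * alloc["bob"][1]

def propose_trade(alloc):
    """Propose swapping one unit of a random good for one unit of the other."""
    new = {player: list(goods) for player, goods in alloc.items()}
    give, get = random.sample([0, 1], 2)
    if new["alice"][give] > 0 and new["bob"][get] > 0:
        new["alice"][give] -= 1
        new["bob"][give] += 1
        new["bob"][get] -= 1
        new["alice"][get] += 1
    return new

alloc = {"alice": [5, 5], "bob": [5, 5]}
for _ in range(1000):
    proposal = propose_trade(alloc)
    pareto_move = (u_alice(proposal) >= u_alice(alloc) and
                   u_bob(proposal) >= u_bob(alloc) and
                   (u_alice(proposal) > u_alice(alloc) or
                    u_bob(proposal) > u_bob(alloc)))
    if pareto_move:  # voluntary: both consent, since neither loses
        alloc = proposal

# Goods drift toward whoever values them more, e.g. {'alice': [10, 0], 'bob': [0, 10]}
print(alloc)
```

Each accepted step is voluntary by construction, and the aggregate drift of many such steps is the paretotropism described above.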
Chapter Summary
Values across players differ. In the future, they may drift further apart. Voluntarism enables the independent pursuit of goals, so it should be attractive from a variety of perspectives. In addition to establishing peaceful co-existence, we also want to amplify cooperation for mutual benefit. Combining voluntarism at the base with an aspiration for increased cooperation results in voluntary cooperation. If we learn how to cooperate not just with humans but also with other intelligences, we can steepen our Paretotropian ascent.
- ^
In the Ultimatum Game, people are presented with a choice of how to divide up a pie. The rules are that one player chooses first, and the second player can take the rest or reject the game, in which case neither player gets anything. If player one takes almost all of it and leaves very little to the second player, the system as a whole still moves to a Pareto-preferred outcome. But people often reject the game at that point and say the distribution is too unfair.
- ^
Envy in such a world will be less coherent. Nevertheless, the green cone in the middle is still the sweet spot for what Alice and Bob can achieve by making deals. This is because Bob can still reason about what seems to be Alice’s preference ranking and hold out for a better deal.
- ^
In Paretotopian Goal Alignment, Eric Drexler takes it one step further, suggesting that if “emerging technologies dramatically increase our productive capacity, and if this translates into more resources at our disposal, this would further accelerate cooperation because divisions over those resources will become ever less attractive in light of the huge gains by cooperation”.